[PATCH v2] Add static_key_feature_checks_initialized flag

2024-04-07 Thread Nicholas Miehlbradt
JUMP_LABEL_FEATURE_CHECK_DEBUG used static_key_initialized to determine
whether {cpu,mmu}_has_feature() was used before static keys were
initialized. However, {cpu,mmu}_has_feature() should not be used before
setup_feature_keys() is called, but static_key_initialized is set well
before this by the call to jump_label_init() in early_init_devtree().
This creates a window in which JUMP_LABEL_FEATURE_CHECK_DEBUG will not
detect misuse and report errors. Add a flag specifically to indicate
when {cpu,mmu}_has_feature() is safe to use.

Signed-off-by: Nicholas Miehlbradt 
---
v2: Reword commit message

v1: 
https://lore.kernel.org/linuxppc-dev/20240327045911.64543-1-nicho...@linux.ibm.com/
---
 arch/powerpc/include/asm/cpu_has_feature.h | 2 +-
 arch/powerpc/include/asm/feature-fixups.h  | 2 ++
 arch/powerpc/include/asm/mmu.h | 2 +-
 arch/powerpc/lib/feature-fixups.c  | 8 
 4 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/cpu_has_feature.h 
b/arch/powerpc/include/asm/cpu_has_feature.h
index 727d4b321937..0efabccd820c 100644
--- a/arch/powerpc/include/asm/cpu_has_feature.h
+++ b/arch/powerpc/include/asm/cpu_has_feature.h
@@ -29,7 +29,7 @@ static __always_inline bool cpu_has_feature(unsigned long 
feature)
 #endif
 
 #ifdef CONFIG_JUMP_LABEL_FEATURE_CHECK_DEBUG
-   if (!static_key_initialized) {
+   if (!static_key_feature_checks_initialized) {
printk("Warning! cpu_has_feature() used prior to jump label 
init!\n");
dump_stack();
return early_cpu_has_feature(feature);
diff --git a/arch/powerpc/include/asm/feature-fixups.h 
b/arch/powerpc/include/asm/feature-fixups.h
index 77824bd289a3..17d168dd8b49 100644
--- a/arch/powerpc/include/asm/feature-fixups.h
+++ b/arch/powerpc/include/asm/feature-fixups.h
@@ -291,6 +291,8 @@ extern long __start___rfi_flush_fixup, 
__stop___rfi_flush_fixup;
 extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
 extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;
 
+extern bool static_key_feature_checks_initialized;
+
 void apply_feature_fixups(void);
 void update_mmu_feature_fixups(unsigned long mask);
 void setup_feature_keys(void);
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 3b72c7ed24cf..24f830cf9bb4 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -251,7 +251,7 @@ static __always_inline bool mmu_has_feature(unsigned long 
feature)
 #endif
 
 #ifdef CONFIG_JUMP_LABEL_FEATURE_CHECK_DEBUG
-   if (!static_key_initialized) {
+   if (!static_key_feature_checks_initialized) {
printk("Warning! mmu_has_feature() used prior to jump label 
init!\n");
dump_stack();
return early_mmu_has_feature(feature);
diff --git a/arch/powerpc/lib/feature-fixups.c 
b/arch/powerpc/lib/feature-fixups.c
index 4f82581ca203..b7201ba50b2e 100644
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -25,6 +25,13 @@
 #include 
 #include 
 
+/*
+ * Used to generate warnings if mmu or cpu feature check functions that
+ * use static keys are called before the keys are initialized.
+ */
+bool static_key_feature_checks_initialized __read_mostly;
+EXPORT_SYMBOL_GPL(static_key_feature_checks_initialized);
+
 struct fixup_entry {
unsigned long   mask;
unsigned long   value;
@@ -679,6 +686,7 @@ void __init setup_feature_keys(void)
jump_label_init();
cpu_feature_keys_init();
mmu_feature_keys_init();
+   static_key_feature_checks_initialized = true;
 }
 
 static int __init check_features(void)
-- 
2.40.1



Re: [PATCH] Add static_key_feature_checks_initialized flag

2024-04-01 Thread Nicholas Miehlbradt




On 28/3/2024 2:20 am, Christophe Leroy wrote:



Le 27/03/2024 à 05:59, Nicholas Miehlbradt a écrit :

JUMP_LABEL_FEATURE_CHECK_DEBUG used static_key_initialized to determine
whether {cpu,mmu}_has_feature() was used before static keys were
initialized. However, {cpu,mmu}_has_feature() should not be used before
setup_feature_keys() is called. As static_key_initialized is set much
earlier during boot there is a window in which JUMP_LABEL_FEATURE_CHECK_DEBUG
will not report errors. Add a flag specifically to indicate when
{cpu,mmu}_has_feature() is safe to use.


What do you mean by "much earlier" ?

As far as I can see, static_key_initialized is set by jump_label_init()
as cpu_feature_keys_init() and mmu_feature_keys_init() are call
immediately after. I don't think it is possible to do anything inbetween.

Or maybe you mean the problem is the call to jump_label_init() in
early_init_devtree() ? You should make it explicit in the message, and
see if it wouldn't be better to call cpu_feature_keys_init() and
mmu_feature_keys_init() as well in early_init_devtree() in that case ?


The jump_label_init() call in early_init_devtree() is exactly the issue.
I don't think it's possible to move the call to mmu_feature_keys_init()
earlier without significant refactoring since mmu features are being set
as late as setup_kup().
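
For illustration, here is a minimal userspace sketch of the window being
closed (the names are only stand-ins for the real boot path; this is not
kernel code):

	#include <stdbool.h>
	#include <stdio.h>

	static bool static_key_initialized;
	static bool static_key_feature_checks_initialized;

	static void feature_check(void)
	{
		/* Old check: only catches use before jump_label_init(). */
		if (!static_key_initialized)
			printf("old check fires\n");
		/* New check: also catches use before setup_feature_keys(). */
		if (!static_key_feature_checks_initialized)
			printf("new check fires\n");
	}

	int main(void)
	{
		static_key_initialized = true;			/* jump_label_init() in early_init_devtree() */
		feature_check();				/* in the window: only the new check fires */
		static_key_feature_checks_initialized = true;	/* setup_feature_keys() */
		feature_check();				/* fully initialized: neither check fires */
		return 0;
	}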

I'll still send a v2 with a better-worded commit message.

Nicholas


Christophe



[PATCH] Add static_key_feature_checks_initialized flag

2024-03-26 Thread Nicholas Miehlbradt
JUMP_LABEL_FEATURE_CHECK_DEBUG used static_key_initialized to determine
whether {cpu,mmu}_has_feature() was used before static keys were
initialized. However, {cpu,mmu}_has_feature() should not be used before
setup_feature_keys() is called. As static_key_initialized is set much
earlier during boot there is a window in which JUMP_LABEL_FEATURE_CHECK_DEBUG
will not report errors. Add a flag specifically to indicate when
{cpu,mmu}_has_feature() is safe to use.

Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/include/asm/cpu_has_feature.h | 2 +-
 arch/powerpc/include/asm/feature-fixups.h  | 2 ++
 arch/powerpc/include/asm/mmu.h | 2 +-
 arch/powerpc/lib/feature-fixups.c  | 8 
 4 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/cpu_has_feature.h 
b/arch/powerpc/include/asm/cpu_has_feature.h
index 727d4b321937..0efabccd820c 100644
--- a/arch/powerpc/include/asm/cpu_has_feature.h
+++ b/arch/powerpc/include/asm/cpu_has_feature.h
@@ -29,7 +29,7 @@ static __always_inline bool cpu_has_feature(unsigned long 
feature)
 #endif
 
 #ifdef CONFIG_JUMP_LABEL_FEATURE_CHECK_DEBUG
-   if (!static_key_initialized) {
+   if (!static_key_feature_checks_initialized) {
printk("Warning! cpu_has_feature() used prior to jump label 
init!\n");
dump_stack();
return early_cpu_has_feature(feature);
diff --git a/arch/powerpc/include/asm/feature-fixups.h 
b/arch/powerpc/include/asm/feature-fixups.h
index 77824bd289a3..17d168dd8b49 100644
--- a/arch/powerpc/include/asm/feature-fixups.h
+++ b/arch/powerpc/include/asm/feature-fixups.h
@@ -291,6 +291,8 @@ extern long __start___rfi_flush_fixup, 
__stop___rfi_flush_fixup;
 extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
 extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;
 
+extern bool static_key_feature_checks_initialized;
+
 void apply_feature_fixups(void);
 void update_mmu_feature_fixups(unsigned long mask);
 void setup_feature_keys(void);
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 3b72c7ed24cf..24f830cf9bb4 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -251,7 +251,7 @@ static __always_inline bool mmu_has_feature(unsigned long 
feature)
 #endif
 
 #ifdef CONFIG_JUMP_LABEL_FEATURE_CHECK_DEBUG
-   if (!static_key_initialized) {
+   if (!static_key_feature_checks_initialized) {
printk("Warning! mmu_has_feature() used prior to jump label 
init!\n");
dump_stack();
return early_mmu_has_feature(feature);
diff --git a/arch/powerpc/lib/feature-fixups.c 
b/arch/powerpc/lib/feature-fixups.c
index 4f82581ca203..b7201ba50b2e 100644
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -25,6 +25,13 @@
 #include 
 #include 
 
+/*
+ * Used to generate warnings if mmu or cpu feature check functions that
+ * use static keys are called before the keys are initialized.
+ */
+bool static_key_feature_checks_initialized __read_mostly;
+EXPORT_SYMBOL_GPL(static_key_feature_checks_initialized);
+
 struct fixup_entry {
unsigned long   mask;
unsigned long   value;
@@ -679,6 +686,7 @@ void __init setup_feature_keys(void)
jump_label_init();
cpu_feature_keys_init();
mmu_feature_keys_init();
+   static_key_feature_checks_initialized = true;
 }
 
 static int __init check_features(void)
-- 
2.40.1



Re: [PATCH 09/13] powerpc: Disable KMSAN checks on functions which walk the stack

2024-01-09 Thread Nicholas Miehlbradt




On 14/12/2023 8:00 pm, Christophe Leroy wrote:



Le 14/12/2023 à 06:55, Nicholas Miehlbradt a écrit :

Functions which walk the stack read parts of the stack which cannot be
instrumented by KMSAN e.g. the backchain. Disable KMSAN sanitization of
these functions to prevent false positives.


Do other architectures have to do it as well ?

I don't see it for show_stack(), is that a specific need for powerpc ?

Other archs have the annotation on functions called by show_stack(). For
x86 it's on show_trace_log_lvl() and for s390 it's on __unwind_start()
and unwind_next_frame().




Re: [PATCH 12/13] powerpc/string: Add KMSAN support

2024-01-09 Thread Nicholas Miehlbradt




On 14/12/2023 8:25 pm, Christophe Leroy wrote:



Le 14/12/2023 à 06:55, Nicholas Miehlbradt a écrit :

KMSAN expects the functions __mem{set,cpy,move}, so add aliases pointing to
the respective functions.

Disable use of the architecture-specific memset{16,32,64} to ensure that
metadata is correctly updated, and of strn{cpy,cmp} and mem{chr,cmp}, which
are implemented in assembly and therefore cannot be instrumented to
propagate/check metadata.

Alias calls to mem{set,cpy,move} to __msan_mem{set,cpy,move} in
instrumented code to correctly propagate metadata.

Signed-off-by: Nicholas Miehlbradt 
---
   arch/powerpc/include/asm/kmsan.h   |  7 +++
   arch/powerpc/include/asm/string.h  | 18 --
   arch/powerpc/lib/Makefile  |  2 ++
   arch/powerpc/lib/mem_64.S  |  5 -
   arch/powerpc/lib/memcpy_64.S   |  2 ++
   .../selftests/powerpc/copyloops/asm/kmsan.h|  0
   .../selftests/powerpc/copyloops/linux/export.h |  1 +
   7 files changed, 32 insertions(+), 3 deletions(-)
   create mode 100644 tools/testing/selftests/powerpc/copyloops/asm/kmsan.h

diff --git a/arch/powerpc/include/asm/kmsan.h b/arch/powerpc/include/asm/kmsan.h
index bc84f6ff2ee9..fc59dc24e170 100644
--- a/arch/powerpc/include/asm/kmsan.h
+++ b/arch/powerpc/include/asm/kmsan.h
@@ -7,6 +7,13 @@
   #ifndef _ASM_POWERPC_KMSAN_H
   #define _ASM_POWERPC_KMSAN_H
   
+#ifdef CONFIG_KMSAN

+#define EXPORT_SYMBOL_KMSAN(fn) SYM_FUNC_ALIAS(__##fn, fn) \
+   EXPORT_SYMBOL(__##fn)
+#else
+#define EXPORT_SYMBOL_KMSAN(fn)
+#endif
+
   #ifndef __ASSEMBLY__
   #ifndef MODULE
   
diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h

index 60ba22770f51..412626ce619b 100644
--- a/arch/powerpc/include/asm/string.h
+++ b/arch/powerpc/include/asm/string.h
@@ -4,7 +4,7 @@
   
   #ifdef __KERNEL__
   
-#ifndef CONFIG_KASAN

+#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
   #define __HAVE_ARCH_STRNCPY
   #define __HAVE_ARCH_STRNCMP
   #define __HAVE_ARCH_MEMCHR
@@ -56,8 +56,22 @@ void *__memmove(void *to, const void *from, __kernel_size_t 
n);
   #endif /* CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX */
   #endif /* CONFIG_KASAN */
   
+#ifdef CONFIG_KMSAN

+
+void *__memset(void *s, int c, __kernel_size_t count);
+void *__memcpy(void *to, const void *from, __kernel_size_t n);
+void *__memmove(void *to, const void *from, __kernel_size_t n);
+


The same is done for KASAN, can't you reuse it ?

I tried this but I believe it makes the file more disorganised and 
difficult to edit since there ends up being a set of definitions for 
each intersection of features e.g. the definitions needed for both KASAN 
and KMSAN, just KASAN, just KMSAN, etc.


This way it's clearer what each sanitizer needs and changing definitions 
for one one sanitizer won't require refactors affecting other sanitizers.



+#ifdef __SANITIZE_MEMORY__
+#include 
+#define memset __msan_memset
+#define memcpy __msan_memcpy
+#define memmove __msan_memmove
+#endif


Will that work as you wish ?
What about the calls to memset() or memcpy() emitted directly by GCC ?

These are handled by the compiler instrumentation, which replaces them
with calls to the instrumented equivalents.
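
As a rough illustration only (not real compiler output), a plain struct
copy such as:

	struct foo { long a, b, c, d; };

	void copy_foo(struct foo *dst, const struct foo *src)
	{
		*dst = *src;	/* the compiler may lower this to a memcpy() call */
	}

is rewritten by the instrumentation pass into a call to __msan_memcpy(),
so shadow and origin metadata are propagated even for copies the compiler
emits itself.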



+#endif /* CONFIG_KMSAN */
+
   #ifdef CONFIG_PPC64
-#ifndef CONFIG_KASAN
+#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
   #define __HAVE_ARCH_MEMSET32
   #define __HAVE_ARCH_MEMSET64
   
diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile

index 51ad0397c17a..fc3ea3eebbd6 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -32,9 +32,11 @@ obj-y += code-patching.o feature-fixups.o pmem.o
   obj-$(CONFIG_CODE_PATCHING_SELFTEST) += test-code-patching.o
   
   ifndef CONFIG_KASAN

+ifndef CONFIG_KMSAN
   obj-y+=  string.o memcmp_$(BITS).o
   obj-$(CONFIG_PPC32)  += strlen_32.o
   endif
+endif
   
   obj-$(CONFIG_PPC32)	+= div64.o copy_32.o crtsavres.o
   
diff --git a/arch/powerpc/lib/mem_64.S b/arch/powerpc/lib/mem_64.S

index 6fd06cd20faa..a55f2fac49b3 100644
--- a/arch/powerpc/lib/mem_64.S
+++ b/arch/powerpc/lib/mem_64.S
@@ -9,8 +9,9 @@
   #include 
   #include 
   #include 
+#include 
   
-#ifndef CONFIG_KASAN

+#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
   _GLOBAL(__memset16)
rlwimi  r4,r4,16,0,15
/* fall through */
@@ -96,6 +97,7 @@ _GLOBAL_KASAN(memset)
blr
   EXPORT_SYMBOL(memset)
   EXPORT_SYMBOL_KASAN(memset)
+EXPORT_SYMBOL_KMSAN(memset)
   
   _GLOBAL_TOC_KASAN(memmove)

cmplw   0,r3,r4
@@ -140,3 +142,4 @@ _GLOBAL(backwards_memcpy)
b   1b
   EXPORT_SYMBOL(memmove)
   EXPORT_SYMBOL_KASAN(memmove)
+EXPORT_SYMBOL_KMSAN(memmove)
diff --git a/arch/powerpc/lib/memcpy_64.S b/arch/powerpc/lib/memcpy_64.S
index b5a67e20143f..1657861618cc 100644
--- a/arch/powerpc/lib/memcpy_64.S
+++ b/arch/powerpc/lib/memcpy_64.S
@@ -8,6 +8,7 @

Re: [PATCH 10/13] powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap

2024-01-09 Thread Nicholas Miehlbradt




On 14/12/2023 8:17 pm, Christophe Leroy wrote:



Le 14/12/2023 à 06:55, Nicholas Miehlbradt a écrit :

Splits the vmalloc region into four. The first quarter is the new
vmalloc region, the second is used to store shadow metadata and the
third is used to store origin metadata. The fourth quarter is unused.

Do the same for the ioremap region.

Module data is stored in the vmalloc region so alias the modules
metadata addresses to the respective vmalloc metadata addresses. Define
MODULES_VADDR and MODULES_END to the start and end of the vmalloc
region.

Since MODULES_VADDR was previously only defined on ppc32 targets, checks
for whether this macro is defined need to be updated to also include
defined(CONFIG_PPC32).


Why ?

In your case MODULES_VADDR is above PAGE_OFFSET so there should be no
difference.

Christophe

On 64-bit builds the BUILD_BUG always triggers since MODULES_VADDR
expands to __vmalloc_start, which is defined in a different translation
unit. I can restrict the #ifdef CONFIG_PPC32 to just around the
BUILD_BUG since, as you pointed out, there is no difference otherwise.
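
To illustrate the constraint (a stand-alone sketch, not the kernel code
itself): BUILD_BUG_ON() needs an integer constant expression, which is no
longer available once MODULES_VADDR expands to a variable:

	/* 64-bit case: MODULES_VADDR becomes a runtime value. */
	extern unsigned long __vmalloc_start;	/* defined in another translation unit */
	#define MODULES_VADDR __vmalloc_start

	/*
	 * An assertion along the lines of
	 *	BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR);
	 * (the exact condition in module_alloc() may differ) can no longer
	 * be evaluated at compile time, so the build always fails.
	 */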


Signed-off-by: Nicholas Miehlbradt 
---
   arch/powerpc/include/asm/book3s/64/pgtable.h | 42 
   arch/powerpc/kernel/module.c |  2 +-
   2 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h 
b/arch/powerpc/include/asm/book3s/64/pgtable.h
index cb77eddca54b..b3a02b8d96e3 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -249,7 +249,38 @@ enum pgtable_index {
   extern unsigned long __vmalloc_start;
   extern unsigned long __vmalloc_end;
   #define VMALLOC_START__vmalloc_start
+
+#ifndef CONFIG_KMSAN
   #define VMALLOC_END  __vmalloc_end
+#else
+/*
+ * In KMSAN builds vmalloc area is four times smaller, and the remaining 3/4
+ * are used to keep the metadata for virtual pages. The memory formerly
+ * belonging to vmalloc area is now laid out as follows:
+ *
+ * 1st quarter: VMALLOC_START to VMALLOC_END - new vmalloc area
+ * 2nd quarter: KMSAN_VMALLOC_SHADOW_START to
+ *  KMSAN_VMALLOC_SHADOW_START+VMALLOC_LEN - vmalloc area shadow
+ * 3rd quarter: KMSAN_VMALLOC_ORIGIN_START to
+ *  KMSAN_VMALLOC_ORIGIN_START+VMALLOC_LEN - vmalloc area origins
+ * 4th quarter: unused
+ */
+#define VMALLOC_LEN ((__vmalloc_end - __vmalloc_start) >> 2)
+#define VMALLOC_END (VMALLOC_START + VMALLOC_LEN)
+
+#define KMSAN_VMALLOC_SHADOW_START VMALLOC_END
+#define KMSAN_VMALLOC_ORIGIN_START (VMALLOC_END + VMALLOC_LEN)
+
+/*
+ * Module metadata is stored in the corresponding vmalloc metadata regions
+ */
+#define KMSAN_MODULES_SHADOW_START KMSAN_VMALLOC_SHADOW_START
+#define KMSAN_MODULES_ORIGIN_START KMSAN_VMALLOC_ORIGIN_START
+#endif /* CONFIG_KMSAN */
+
+#define MODULES_VADDR VMALLOC_START
+#define MODULES_END VMALLOC_END
+#define MODULES_LEN(MODULES_END - MODULES_VADDR)
   
   static inline unsigned int ioremap_max_order(void)

   {
@@ -264,7 +295,18 @@ extern unsigned long __kernel_io_start;
   extern unsigned long __kernel_io_end;
   #define KERN_VIRT_START __kernel_virt_start
   #define KERN_IO_START  __kernel_io_start
+#ifndef CONFIG_KMSAN
   #define KERN_IO_END __kernel_io_end
+#else
+/*
+ * In KMSAN builds IO space is 4 times smaller, the remaining space is used to
+ * store metadata. See comment for vmalloc regions above.
+ */
+#define KERN_IO_LEN ((__kernel_io_end - __kernel_io_start) >> 2)
+#define KERN_IO_END (KERN_IO_START + KERN_IO_LEN)
+#define KERN_IO_SHADOW_STARTKERN_IO_END
+#define KERN_IO_ORIGIN_START(KERN_IO_SHADOW_START + KERN_IO_LEN)
+#endif /* !CONFIG_KMSAN */
   
   extern struct page *vmemmap;

   extern unsigned long pci_io_base;
diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
index f6d6ae0a1692..5043b959ad4d 100644
--- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -107,7 +107,7 @@ __module_alloc(unsigned long size, unsigned long start, 
unsigned long end, bool
   
   void *module_alloc(unsigned long size)

   {
-#ifdef MODULES_VADDR
+#if defined(MODULES_VADDR) && defined(CONFIG_PPC32)
unsigned long limit = (unsigned long)_etext - SZ_32M;
void *ptr = NULL;
   


[PATCH 01/13] kmsan: Export kmsan_handle_dma

2023-12-13 Thread Nicholas Miehlbradt
kmsan_handle_dma is required by virtio drivers. Export kmsan_handle_dma
so that the drivers can be compiled as modules.

Signed-off-by: Nicholas Miehlbradt 
---
 mm/kmsan/hooks.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
index 7a30274b893c..3532d9275ca5 100644
--- a/mm/kmsan/hooks.c
+++ b/mm/kmsan/hooks.c
@@ -358,6 +358,7 @@ void kmsan_handle_dma(struct page *page, size_t offset, 
size_t size,
size -= to_go;
}
 }
+EXPORT_SYMBOL(kmsan_handle_dma);
 
 void kmsan_handle_dma_sg(struct scatterlist *sg, int nents,
 enum dma_data_direction dir)
-- 
2.40.1



[PATCH 05/13] powerpc: Unpoison buffers populated by hcalls

2023-12-13 Thread Nicholas Miehlbradt
plpar_hcall provides the hypervisor with a buffer where return data should
be placed. The hypervisor initializes these buffers, but this is not
visible to KMSAN, so unpoison them manually.

Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/platforms/pseries/hvconsole.c | 2 ++
 arch/powerpc/sysdev/xive/spapr.c   | 3 +++
 2 files changed, 5 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/hvconsole.c 
b/arch/powerpc/platforms/pseries/hvconsole.c
index 1ac52963e08b..7ad66acd5db8 100644
--- a/arch/powerpc/platforms/pseries/hvconsole.c
+++ b/arch/powerpc/platforms/pseries/hvconsole.c
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -32,6 +33,7 @@ int hvc_get_chars(uint32_t vtermno, char *buf, int count)
unsigned long *lbuf = (unsigned long *)buf;
 
ret = plpar_hcall(H_GET_TERM_CHAR, retbuf, vtermno);
+   kmsan_unpoison_memory(retbuf, sizeof(retbuf));
lbuf[0] = be64_to_cpu(retbuf[1]);
lbuf[1] = be64_to_cpu(retbuf[2]);
 
diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
index e45419264391..a9f48a336e4d 100644
--- a/arch/powerpc/sysdev/xive/spapr.c
+++ b/arch/powerpc/sysdev/xive/spapr.c
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -191,6 +192,8 @@ static long plpar_int_get_source_info(unsigned long flags,
return rc;
}
 
+   kmsan_unpoison_memory(retbuf, sizeof(retbuf));
+
*src_flags = retbuf[0];
*eoi_page  = retbuf[1];
*trig_page = retbuf[2];
-- 
2.40.1



[PATCH 07/13] powerpc/kprobes: Unpoison instruction in kprobe struct

2023-12-13 Thread Nicholas Miehlbradt
KMSAN does not unpoison the ainsn field of a kprobe struct correctly.
Manually unpoison it to prevent false positives.

Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/kernel/kprobes.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index b20ee72e873a..1cbec54f2b6a 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -27,6 +27,7 @@
 #include 
 #include 
 #include 
+#include 
 
 DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
 DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
@@ -179,6 +180,7 @@ int arch_prepare_kprobe(struct kprobe *p)
 
if (!ret) {
patch_instruction(p->ainsn.insn, insn);
+   kmsan_unpoison_memory(p->ainsn.insn, sizeof(kprobe_opcode_t));
p->opcode = ppc_inst_val(insn);
}
 
-- 
2.40.1



[PATCH 10/13] powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap

2023-12-13 Thread Nicholas Miehlbradt
Splits the vmalloc region into four. The first quarter is the new
vmalloc region, the second is used to store shadow metadata and the
third is used to store origin metadata. The fourth quarter is unused.

Do the same for the ioremap region.

Module data is stored in the vmalloc region so alias the modules
metadata addresses to the respective vmalloc metadata addresses. Define
MODULES_VADDR and MODULES_END to the start and end of the vmalloc
region.

Since MODULES_VADDR was previously only defined on ppc32 targets, checks
for whether this macro is defined need to be updated to also include
defined(CONFIG_PPC32).

Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/include/asm/book3s/64/pgtable.h | 42 
 arch/powerpc/kernel/module.c |  2 +-
 2 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h 
b/arch/powerpc/include/asm/book3s/64/pgtable.h
index cb77eddca54b..b3a02b8d96e3 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -249,7 +249,38 @@ enum pgtable_index {
 extern unsigned long __vmalloc_start;
 extern unsigned long __vmalloc_end;
 #define VMALLOC_START  __vmalloc_start
+
+#ifndef CONFIG_KMSAN
 #define VMALLOC_END__vmalloc_end
+#else
+/*
+ * In KMSAN builds vmalloc area is four times smaller, and the remaining 3/4
+ * are used to keep the metadata for virtual pages. The memory formerly
+ * belonging to vmalloc area is now laid out as follows:
+ *
+ * 1st quarter: VMALLOC_START to VMALLOC_END - new vmalloc area
+ * 2nd quarter: KMSAN_VMALLOC_SHADOW_START to
+ *  KMSAN_VMALLOC_SHADOW_START+VMALLOC_LEN - vmalloc area shadow
+ * 3rd quarter: KMSAN_VMALLOC_ORIGIN_START to
+ *  KMSAN_VMALLOC_ORIGIN_START+VMALLOC_LEN - vmalloc area origins
+ * 4th quarter: unused
+ */
+#define VMALLOC_LEN ((__vmalloc_end - __vmalloc_start) >> 2)
+#define VMALLOC_END (VMALLOC_START + VMALLOC_LEN)
+
+#define KMSAN_VMALLOC_SHADOW_START VMALLOC_END
+#define KMSAN_VMALLOC_ORIGIN_START (VMALLOC_END + VMALLOC_LEN)
+
+/*
+ * Module metadata is stored in the corresponding vmalloc metadata regions
+ */
+#define KMSAN_MODULES_SHADOW_START KMSAN_VMALLOC_SHADOW_START
+#define KMSAN_MODULES_ORIGIN_START KMSAN_VMALLOC_ORIGIN_START
+#endif /* CONFIG_KMSAN */
+
+#define MODULES_VADDR VMALLOC_START
+#define MODULES_END VMALLOC_END
+#define MODULES_LEN(MODULES_END - MODULES_VADDR)
 
 static inline unsigned int ioremap_max_order(void)
 {
@@ -264,7 +295,18 @@ extern unsigned long __kernel_io_start;
 extern unsigned long __kernel_io_end;
 #define KERN_VIRT_START __kernel_virt_start
 #define KERN_IO_START  __kernel_io_start
+#ifndef CONFIG_KMSAN
 #define KERN_IO_END __kernel_io_end
+#else
+/*
+ * In KMSAN builds IO space is 4 times smaller, the remaining space is used to
+ * store metadata. See comment for vmalloc regions above.
+ */
+#define KERN_IO_LEN ((__kernel_io_end - __kernel_io_start) >> 2)
+#define KERN_IO_END (KERN_IO_START + KERN_IO_LEN)
+#define KERN_IO_SHADOW_STARTKERN_IO_END
+#define KERN_IO_ORIGIN_START(KERN_IO_SHADOW_START + KERN_IO_LEN)
+#endif /* !CONFIG_KMSAN */
 
 extern struct page *vmemmap;
 extern unsigned long pci_io_base;
diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
index f6d6ae0a1692..5043b959ad4d 100644
--- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -107,7 +107,7 @@ __module_alloc(unsigned long size, unsigned long start, 
unsigned long end, bool
 
 void *module_alloc(unsigned long size)
 {
-#ifdef MODULES_VADDR
+#if defined(MODULES_VADDR) && defined(CONFIG_PPC32)
unsigned long limit = (unsigned long)_etext - SZ_32M;
void *ptr = NULL;
 
-- 
2.40.1



[PATCH 11/13] powerpc: Implement architecture specific KMSAN interface

2023-12-13 Thread Nicholas Miehlbradt
arch_kmsan_get_meta_or_null finds the metadata addresses for addresses
in the ioremap region which is mapped separately on powerpc.

kmsan_virt_addr_valid is the same as virt_addr_valid except that it excludes
the check that addr is less than high_memory, since this function can be
called on addresses higher than that.

Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/include/asm/kmsan.h | 44 
 1 file changed, 44 insertions(+)
 create mode 100644 arch/powerpc/include/asm/kmsan.h

diff --git a/arch/powerpc/include/asm/kmsan.h b/arch/powerpc/include/asm/kmsan.h
new file mode 100644
index ..bc84f6ff2ee9
--- /dev/null
+++ b/arch/powerpc/include/asm/kmsan.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * powerpc KMSAN support.
+ *
+ */
+
+#ifndef _ASM_POWERPC_KMSAN_H
+#define _ASM_POWERPC_KMSAN_H
+
+#ifndef __ASSEMBLY__
+#ifndef MODULE
+
+#include 
+#include 
+#include 
+
+/*
+ * Functions below are declared in the header to make sure they are inlined.
+ * They all are called from kmsan_get_metadata() for every memory access in
+ * the kernel, so speed is important here.
+ */
+
+/*
+ * No powerpc specific metadata locations
+ */
+static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
+{
+   unsigned long addr64 = (unsigned long)addr, off;
+   if (KERN_IO_START <= addr64 && addr64 < KERN_IO_END) {
+   off = addr64 - KERN_IO_START;
+   return (void *)off + (is_origin ? KERN_IO_ORIGIN_START : 
KERN_IO_SHADOW_START);
+   } else {
+   return 0;
+   }
+}
+
+static inline bool kmsan_virt_addr_valid(void *addr)
+{
+   return (unsigned long)addr >= PAGE_OFFSET && 
pfn_valid(virt_to_pfn(addr));
+}
+
+#endif /* !MODULE */
+#endif /* !__ASSEMBLY__ */
+#endif /* _ASM_POWERPC_KMSAN_H */
-- 
2.40.1



[PATCH 06/13] powerpc/pseries/nvram: Unpoison buffer populated by rtas_call

2023-12-13 Thread Nicholas Miehlbradt
rtas_call provides a buffer where the return data should be placed. RTAS
initializes the buffer, but this is not visible to KMSAN, so unpoison it
manually.

Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/platforms/pseries/nvram.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/nvram.c 
b/arch/powerpc/platforms/pseries/nvram.c
index 8130c37962c0..21a27d459347 100644
--- a/arch/powerpc/platforms/pseries/nvram.c
+++ b/arch/powerpc/platforms/pseries/nvram.c
@@ -14,6 +14,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -41,6 +42,7 @@ static ssize_t pSeries_nvram_read(char *buf, size_t count, 
loff_t *index)
int done;
unsigned long flags;
char *p = buf;
+   size_t l;
 
 
if (nvram_size == 0 || nvram_fetch == RTAS_UNKNOWN_SERVICE)
@@ -53,6 +55,7 @@ static ssize_t pSeries_nvram_read(char *buf, size_t count, 
loff_t *index)
if (i + count > nvram_size)
count = nvram_size - i;
 
+   l = count;
spin_lock_irqsave(&nvram_lock, flags);
 
for (; count != 0; count -= len) {
@@ -73,6 +76,7 @@ static ssize_t pSeries_nvram_read(char *buf, size_t count, 
loff_t *index)
}
 
spin_unlock_irqrestore(&nvram_lock, flags);
+   kmsan_unpoison_memory(buf, l);

*index = i;
return p - buf;
-- 
2.40.1



[PATCH 12/13] powerpc/string: Add KMSAN support

2023-12-13 Thread Nicholas Miehlbradt
KMSAN expects the functions __mem{set,cpy,move}, so add aliases pointing to
the respective functions.

Disable use of the architecture-specific memset{16,32,64} to ensure that
metadata is correctly updated, and of strn{cpy,cmp} and mem{chr,cmp}, which
are implemented in assembly and therefore cannot be instrumented to
propagate/check metadata.

Alias calls to mem{set,cpy,move} to __msan_mem{set,cpy,move} in
instrumented code to correctly propagate metadata.

Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/include/asm/kmsan.h   |  7 +++
 arch/powerpc/include/asm/string.h  | 18 --
 arch/powerpc/lib/Makefile  |  2 ++
 arch/powerpc/lib/mem_64.S  |  5 -
 arch/powerpc/lib/memcpy_64.S   |  2 ++
 .../selftests/powerpc/copyloops/asm/kmsan.h|  0
 .../selftests/powerpc/copyloops/linux/export.h |  1 +
 7 files changed, 32 insertions(+), 3 deletions(-)
 create mode 100644 tools/testing/selftests/powerpc/copyloops/asm/kmsan.h

diff --git a/arch/powerpc/include/asm/kmsan.h b/arch/powerpc/include/asm/kmsan.h
index bc84f6ff2ee9..fc59dc24e170 100644
--- a/arch/powerpc/include/asm/kmsan.h
+++ b/arch/powerpc/include/asm/kmsan.h
@@ -7,6 +7,13 @@
 #ifndef _ASM_POWERPC_KMSAN_H
 #define _ASM_POWERPC_KMSAN_H
 
+#ifdef CONFIG_KMSAN
+#define EXPORT_SYMBOL_KMSAN(fn) SYM_FUNC_ALIAS(__##fn, fn) \
+   EXPORT_SYMBOL(__##fn)
+#else
+#define EXPORT_SYMBOL_KMSAN(fn)
+#endif
+
 #ifndef __ASSEMBLY__
 #ifndef MODULE
 
diff --git a/arch/powerpc/include/asm/string.h 
b/arch/powerpc/include/asm/string.h
index 60ba22770f51..412626ce619b 100644
--- a/arch/powerpc/include/asm/string.h
+++ b/arch/powerpc/include/asm/string.h
@@ -4,7 +4,7 @@
 
 #ifdef __KERNEL__
 
-#ifndef CONFIG_KASAN
+#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
 #define __HAVE_ARCH_STRNCPY
 #define __HAVE_ARCH_STRNCMP
 #define __HAVE_ARCH_MEMCHR
@@ -56,8 +56,22 @@ void *__memmove(void *to, const void *from, __kernel_size_t 
n);
 #endif /* CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX */
 #endif /* CONFIG_KASAN */
 
+#ifdef CONFIG_KMSAN
+
+void *__memset(void *s, int c, __kernel_size_t count);
+void *__memcpy(void *to, const void *from, __kernel_size_t n);
+void *__memmove(void *to, const void *from, __kernel_size_t n);
+
+#ifdef __SANITIZE_MEMORY__
+#include 
+#define memset __msan_memset
+#define memcpy __msan_memcpy
+#define memmove __msan_memmove
+#endif
+#endif /* CONFIG_KMSAN */
+
 #ifdef CONFIG_PPC64
-#ifndef CONFIG_KASAN
+#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
 #define __HAVE_ARCH_MEMSET32
 #define __HAVE_ARCH_MEMSET64
 
diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index 51ad0397c17a..fc3ea3eebbd6 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -32,9 +32,11 @@ obj-y += code-patching.o feature-fixups.o pmem.o
 obj-$(CONFIG_CODE_PATCHING_SELFTEST) += test-code-patching.o
 
 ifndef CONFIG_KASAN
+ifndef CONFIG_KMSAN
 obj-y  +=  string.o memcmp_$(BITS).o
 obj-$(CONFIG_PPC32)+= strlen_32.o
 endif
+endif
 
 obj-$(CONFIG_PPC32)+= div64.o copy_32.o crtsavres.o
 
diff --git a/arch/powerpc/lib/mem_64.S b/arch/powerpc/lib/mem_64.S
index 6fd06cd20faa..a55f2fac49b3 100644
--- a/arch/powerpc/lib/mem_64.S
+++ b/arch/powerpc/lib/mem_64.S
@@ -9,8 +9,9 @@
 #include 
 #include 
 #include 
+#include 
 
-#ifndef CONFIG_KASAN
+#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
 _GLOBAL(__memset16)
rlwimi  r4,r4,16,0,15
/* fall through */
@@ -96,6 +97,7 @@ _GLOBAL_KASAN(memset)
blr
 EXPORT_SYMBOL(memset)
 EXPORT_SYMBOL_KASAN(memset)
+EXPORT_SYMBOL_KMSAN(memset)
 
 _GLOBAL_TOC_KASAN(memmove)
cmplw   0,r3,r4
@@ -140,3 +142,4 @@ _GLOBAL(backwards_memcpy)
b   1b
 EXPORT_SYMBOL(memmove)
 EXPORT_SYMBOL_KASAN(memmove)
+EXPORT_SYMBOL_KMSAN(memmove)
diff --git a/arch/powerpc/lib/memcpy_64.S b/arch/powerpc/lib/memcpy_64.S
index b5a67e20143f..1657861618cc 100644
--- a/arch/powerpc/lib/memcpy_64.S
+++ b/arch/powerpc/lib/memcpy_64.S
@@ -8,6 +8,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #ifndef SELFTEST_CASE
 /* For big-endian, 0 == most CPUs, 1 == POWER6, 2 == Cell */
@@ -228,3 +229,4 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
 #endif
 EXPORT_SYMBOL(memcpy)
 EXPORT_SYMBOL_KASAN(memcpy)
+EXPORT_SYMBOL_KMSAN(memcpy)
diff --git a/tools/testing/selftests/powerpc/copyloops/asm/kmsan.h 
b/tools/testing/selftests/powerpc/copyloops/asm/kmsan.h
new file mode 100644
index ..e69de29bb2d1
diff --git a/tools/testing/selftests/powerpc/copyloops/linux/export.h 
b/tools/testing/selftests/powerpc/copyloops/linux/export.h
index e6b80d5fbd14..6379624bbf9b 100644
--- a/tools/testing/selftests/powerpc/copyloops/linux/export.h
+++ b/tools/testing/selftests/powerpc/copyloops/linux/export.h
@@ -2,3 +2,4 @@
 #define EXPORT_SYMBOL(x)
 #define EXPORT_SYMBOL_GPL(x)
 #define EXPORT_SYMBOL_KASAN(x)
+#define EXPORT_SYMBOL_KMSAN(x)
-- 
2.40.1



[PATCH 13/13] powerpc: Enable KMSAN on powerpc

2023-12-13 Thread Nicholas Miehlbradt
Enable KMSAN in the Kconfig.

Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index e33e3250c478..71cc7d2a0a72 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -217,6 +217,7 @@ config PPC
select HAVE_ARCH_KASAN_VMALLOC  if HAVE_ARCH_KASAN
select HAVE_ARCH_KCSAN
select HAVE_ARCH_KFENCE if ARCH_SUPPORTS_DEBUG_PAGEALLOC
+select HAVE_ARCH_KMSAN  if PPC64
select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
select HAVE_ARCH_WITHIN_STACK_FRAMES
select HAVE_ARCH_KGDB
-- 
2.40.1



[PATCH 08/13] powerpc: Unpoison pt_regs

2023-12-13 Thread Nicholas Miehlbradt
pt_regs is initialized by ppc_save_regs, which is implemented in assembly
and therefore does not mark the struct as initialized. Unpoison it so
that it will not generate false positives.

Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/include/asm/interrupt.h | 2 ++
 arch/powerpc/kernel/irq_64.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/arch/powerpc/include/asm/interrupt.h 
b/arch/powerpc/include/asm/interrupt.h
index a4196ab1d016..a9bb09633689 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -68,6 +68,7 @@
 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -170,6 +171,7 @@ static inline void interrupt_enter_prepare(struct pt_regs 
*regs)
__hard_RI_enable();
}
/* Enable MSR[RI] early, to support kernel SLB and hash faults */
+   kmsan_unpoison_entry_regs(regs);
 #endif
 
if (!arch_irq_disabled_regs(regs))
diff --git a/arch/powerpc/kernel/irq_64.c b/arch/powerpc/kernel/irq_64.c
index 938e66829eae..3d441f1b8c49 100644
--- a/arch/powerpc/kernel/irq_64.c
+++ b/arch/powerpc/kernel/irq_64.c
@@ -45,6 +45,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -117,6 +118,7 @@ static __no_kcsan void __replay_soft_interrupts(void)
local_paca->irq_happened |= PACA_IRQ_REPLAYING;
 
ppc_save_regs(®s);
+   kmsan_unpoison_entry_regs(®s);
regs.softe = IRQS_ENABLED;
regs.msr |= MSR_EE;
 
-- 
2.40.1



[PATCH 09/13] powerpc: Disable KMSAN checks on functions which walk the stack

2023-12-13 Thread Nicholas Miehlbradt
Functions which walk the stack read parts of the stack which cannot be
instrumented by KMSAN e.g. the backchain. Disable KMSAN sanitization of
these functions to prevent false positives.

Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/kernel/process.c|  6 +++---
 arch/powerpc/kernel/stacktrace.c | 10 ++
 arch/powerpc/perf/callchain.c|  2 +-
 3 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 392404688cec..3dc88143c3b2 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -2276,9 +2276,9 @@ static bool empty_user_regs(struct pt_regs *regs, struct 
task_struct *tsk)
 
 static int kstack_depth_to_print = CONFIG_PRINT_STACK_DEPTH;
 
-void __no_sanitize_address show_stack(struct task_struct *tsk,
- unsigned long *stack,
- const char *loglvl)
+void __no_sanitize_address __no_kmsan_checks show_stack(struct task_struct 
*tsk,
+   unsigned long *stack,
+   const char *loglvl)
 {
unsigned long sp, ip, lr, newsp;
int count = 0;
diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
index e6a958a5da27..369b8b2a1bcd 100644
--- a/arch/powerpc/kernel/stacktrace.c
+++ b/arch/powerpc/kernel/stacktrace.c
@@ -24,8 +24,9 @@
 
 #include 
 
-void __no_sanitize_address arch_stack_walk(stack_trace_consume_fn 
consume_entry, void *cookie,
-  struct task_struct *task, struct 
pt_regs *regs)
+void __no_sanitize_address __no_kmsan_checks
+   arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
+   struct task_struct *task, struct pt_regs *regs)
 {
unsigned long sp;
 
@@ -62,8 +63,9 @@ void __no_sanitize_address 
arch_stack_walk(stack_trace_consume_fn consume_entry,
  *
  * If the task is not 'current', the caller *must* ensure the task is inactive.
  */
-int __no_sanitize_address arch_stack_walk_reliable(stack_trace_consume_fn 
consume_entry,
-  void *cookie, struct 
task_struct *task)
+int __no_sanitize_address __no_kmsan_checks
+   arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void 
*cookie,
+struct task_struct *task)
 {
unsigned long sp;
unsigned long newsp;
diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index 6b4434dd0ff3..c7610b38e9b8 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -40,7 +40,7 @@ static int valid_next_sp(unsigned long sp, unsigned long 
prev_sp)
return 0;
 }
 
-void __no_sanitize_address
+void __no_sanitize_address __no_kmsan_checks
 perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs 
*regs)
 {
unsigned long sp, next_sp;
-- 
2.40.1



[PATCH 00/13] kmsan: Enable on powerpc

2023-12-13 Thread Nicholas Miehlbradt
This series provides the minimal support for Kernel Memory Sanitizer on
powerpc pseries le guests. Kernel Memory Sanitizer is a tool which detects
uses of uninitialized memory. Currently KMSAN is clang only.
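
As a small illustration of the class of bug KMSAN reports (example only,
not taken from this series):

	int foo(void)
	{
		int x;		/* never initialized */

		if (x > 0)	/* KMSAN: use of uninitialized value in a branch */
			return 1;
		return 0;
	}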

The clang support for powerpc has not yet been merged; the pull request can
be found here [1].

In addition to this series, there are a number of changes required in
generic kmsan code. These changes are already on mailing lists as part of
the series implementing KMSAN for s390 [2]. This series is intended to be
rebased on top of the s390 series.

In addition, I found a bug in the rtc driver used on powerpc. I have sent
a fix for this in a separate series [3].

With this series and the two series mentioned above, I can successfully
boot pseries le defconfig without KMSAN warnings. I have not tested other
powerpc platforms.

[1] https://github.com/llvm/llvm-project/pull/73611
[2] https://lore.kernel.org/linux-mm/20231121220155.1217090-1-...@linux.ibm.com/
[3] 
https://lore.kernel.org/linux-rtc/20231129073647.2624497-1-nicho...@linux.ibm.com/

Nicholas Miehlbradt (13):
  kmsan: Export kmsan_handle_dma
  hvc: Fix use of uninitialized array in udbg_hvc_putc
  powerpc: Disable KMSAN sanitization for prom_init, vdso and purgatory
  powerpc: Disable CONFIG_DCACHE_WORD_ACCESS when KMSAN is enabled
  powerpc: Unpoison buffers populated by hcalls
  powerpc/pseries/nvram: Unpoison buffer populated by rtas_call
  powerpc/kprobes: Unpoison instruction in kprobe struct
  powerpc: Unpoison pt_regs
  powerpc: Disable KMSAN checks on functions which walk the stack
  powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap
  powerpc: Implement architecture specific KMSAN interface
  powerpc/string: Add KMSAN support
  powerpc: Enable KMSAN on powerpc

 arch/powerpc/Kconfig  |  3 +-
 arch/powerpc/include/asm/book3s/64/pgtable.h  | 42 +++
 arch/powerpc/include/asm/interrupt.h  |  2 +
 arch/powerpc/include/asm/kmsan.h  | 51 +++
 arch/powerpc/include/asm/string.h | 18 ++-
 arch/powerpc/kernel/Makefile  |  2 +
 arch/powerpc/kernel/irq_64.c  |  2 +
 arch/powerpc/kernel/kprobes.c |  2 +
 arch/powerpc/kernel/module.c  |  2 +-
 arch/powerpc/kernel/process.c |  6 +--
 arch/powerpc/kernel/stacktrace.c  | 10 ++--
 arch/powerpc/kernel/vdso/Makefile |  1 +
 arch/powerpc/lib/Makefile |  2 +
 arch/powerpc/lib/mem_64.S |  5 +-
 arch/powerpc/lib/memcpy_64.S  |  2 +
 arch/powerpc/perf/callchain.c |  2 +-
 arch/powerpc/platforms/pseries/hvconsole.c|  2 +
 arch/powerpc/platforms/pseries/nvram.c|  4 ++
 arch/powerpc/purgatory/Makefile   |  1 +
 arch/powerpc/sysdev/xive/spapr.c  |  3 ++
 drivers/tty/hvc/hvc_vio.c |  2 +-
 mm/kmsan/hooks.c  |  1 +
 .../selftests/powerpc/copyloops/asm/kmsan.h   |  0
 .../powerpc/copyloops/linux/export.h  |  1 +
 24 files changed, 152 insertions(+), 14 deletions(-)
 create mode 100644 arch/powerpc/include/asm/kmsan.h
 create mode 100644 tools/testing/selftests/powerpc/copyloops/asm/kmsan.h

-- 
2.40.1



[PATCH 02/13] hvc: Fix use of uninitialized array in udbg_hvc_putc

2023-12-13 Thread Nicholas Miehlbradt
All elements of bounce_buffer are eventually read and passed to the
hypervisor so it should probably be fully initialized.

Signed-off-by: Nicholas Miehlbradt 
---
 drivers/tty/hvc/hvc_vio.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/tty/hvc/hvc_vio.c b/drivers/tty/hvc/hvc_vio.c
index 736b230f5ec0..1e88bfcdde20 100644
--- a/drivers/tty/hvc/hvc_vio.c
+++ b/drivers/tty/hvc/hvc_vio.c
@@ -227,7 +227,7 @@ static const struct hv_ops hvterm_hvsi_ops = {
 static void udbg_hvc_putc(char c)
 {
int count = -1;
-   unsigned char bounce_buffer[16];
+   unsigned char bounce_buffer[16] = { 0 };
 
if (!hvterm_privs[0])
return;
-- 
2.40.1



[PATCH 03/13] powerpc: Disable KMSAN sanitization for prom_init, vdso and purgatory

2023-12-13 Thread Nicholas Miehlbradt
Other sanitizers are disabled for these; disable KMSAN too.

prom_init.o can only reference a limited set of external symbols. KMSAN
adds additional references which are not permitted, so disable
sanitization.

Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/kernel/Makefile  | 2 ++
 arch/powerpc/kernel/vdso/Makefile | 1 +
 arch/powerpc/purgatory/Makefile   | 1 +
 3 files changed, 4 insertions(+)

diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 2919433be355..78ea441f7e18 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -61,6 +61,8 @@ KCSAN_SANITIZE_btext.o := n
 KCSAN_SANITIZE_paca.o := n
 KCSAN_SANITIZE_setup_64.o := n
 
+KMSAN_SANITIZE_prom_init.o := n
+
 #ifdef CONFIG_RANDOMIZE_KSTACK_OFFSET
 # Remove stack protector to avoid triggering unneeded stack canary
 # checks due to randomize_kstack_offset.
diff --git a/arch/powerpc/kernel/vdso/Makefile 
b/arch/powerpc/kernel/vdso/Makefile
index 0c7d82c270c3..86fa6ff1ee51 100644
--- a/arch/powerpc/kernel/vdso/Makefile
+++ b/arch/powerpc/kernel/vdso/Makefile
@@ -52,6 +52,7 @@ KCOV_INSTRUMENT := n
 UBSAN_SANITIZE := n
 KASAN_SANITIZE := n
 KCSAN_SANITIZE := n
+KMSAN_SANITIZE := n
 
 ccflags-y := -fno-common -fno-builtin
 ldflags-y := -Wl,--hash-style=both -nostdlib -shared -z noexecstack 
$(CLANG_FLAGS)
diff --git a/arch/powerpc/purgatory/Makefile b/arch/powerpc/purgatory/Makefile
index 78473d69cd2b..4b267061bf84 100644
--- a/arch/powerpc/purgatory/Makefile
+++ b/arch/powerpc/purgatory/Makefile
@@ -2,6 +2,7 @@
 
 KASAN_SANITIZE := n
 KCSAN_SANITIZE := n
+KMSAN_SANITIZE := n
 
 targets += trampoline_$(BITS).o purgatory.ro
 
-- 
2.40.1



[PATCH 04/13] powerpc: Disable CONFIG_DCACHE_WORD_ACCESS when KMSAN is enabled

2023-12-13 Thread Nicholas Miehlbradt
Word-sized accesses may read uninitialized data when optimizing loads.
Disable this optimization when KMSAN is enabled to prevent false
positives.

Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 6f105ee4f3cf..e33e3250c478 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -182,7 +182,7 @@ config PPC
select BUILDTIME_TABLE_SORT
select CLONE_BACKWARDS
select CPUMASK_OFFSTACK if NR_CPUS >= 8192
-   select DCACHE_WORD_ACCESS   if PPC64 && CPU_LITTLE_ENDIAN
+   select DCACHE_WORD_ACCESS   if PPC64 && CPU_LITTLE_ENDIAN 
&& !KMSAN
select DMA_OPS_BYPASS   if PPC64
select DMA_OPS  if PPC64
select DYNAMIC_FTRACE   if FUNCTION_TRACER
-- 
2.40.1



[PATCH v3] powerpc: Implement arch_within_stack_frames

2023-02-27 Thread Nicholas Miehlbradt
Walks the stack when the copy_{to,from}_user address is in the stack to
ensure that the object being copied lies entirely within a single stack
frame and does not contain stack metadata.

Substantially similar to the x86 implementation. The back chain is used
to traverse the stack and identify stack frame boundaries.

Signed-off-by: Nicholas Miehlbradt 
---
v3: Move STACK_FRAME_PARAMS macros to ppc_asm.h
Add diagram in comments to show position of pointers into the stack during 
execution

v2: Rename PARAMETER_SAVE_OFFSET to STACK_FRAME_PARAMS
Add definitions of STACK_FRAME_PARAMS for PPC32 and remove dependency on
PPC64
Ignore the current stack frame and start with its parent, similar to x86

v1: 
https://lore.kernel.org/linuxppc-dev/20221214044252.1910657-1-nicho...@linux.ibm.com/
v2: 
https://lore.kernel.org/linuxppc-dev/20230119053127.17782-1-nicho...@linux.ibm.com/
---
 arch/powerpc/Kconfig   |  1 +
 arch/powerpc/include/asm/ppc_asm.h |  8 ++
 arch/powerpc/include/asm/thread_info.h | 38 ++
 3 files changed, 47 insertions(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 2c9cdf1d8761..3ce17f9aa90a 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -200,6 +200,7 @@ config PPC
select HAVE_ARCH_KCSAN  if PPC_BOOK3S_64
select HAVE_ARCH_KFENCE if ARCH_SUPPORTS_DEBUG_PAGEALLOC
select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
+   select HAVE_ARCH_WITHIN_STACK_FRAMES
select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS
select HAVE_ARCH_MMAP_RND_COMPAT_BITS   if COMPAT
diff --git a/arch/powerpc/include/asm/ppc_asm.h 
b/arch/powerpc/include/asm/ppc_asm.h
index d2f44612f4b0..1f1a64b780e3 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -837,4 +837,12 @@ END_FTR_SECTION_NESTED(CPU_FTR_CELL_TB_BUG, 
CPU_FTR_CELL_TB_BUG, 96)
 #define BTB_FLUSH(reg)
 #endif /* CONFIG_PPC_E500 */
 
+#if defined(CONFIG_PPC64_ELF_ABI_V1)
+#define STACK_FRAME_PARAMS 48
+#elif defined(CONFIG_PPC64_ELF_ABI_V2)
+#define STACK_FRAME_PARAMS 32
+#elif defined(CONFIG_PPC32)
+#define STACK_FRAME_PARAMS 8
+#endif
+
 #endif /* _ASM_POWERPC_PPC_ASM_H */
diff --git a/arch/powerpc/include/asm/thread_info.h 
b/arch/powerpc/include/asm/thread_info.h
index af58f1ed3952..07f1901da8fd 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -45,6 +45,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #define SLB_PRELOAD_NR 16U
 /*
@@ -186,6 +187,43 @@ static inline bool test_thread_local_flags(unsigned int 
flags)
 #define is_elf2_task() (0)
 #endif
 
+/*
+ * Walks up the stack frames to make sure that the specified object is
+ * entirely contained by a single stack frame.
+ *
+ * Returns:
+ * GOOD_FRAME  if within a frame
+ * BAD_STACK   if placed across a frame boundary (or outside stack)
+ */
+static inline int arch_within_stack_frames(const void * const stack,
+  const void * const stackend,
+  const void *obj, unsigned long len)
+{
+   const void *params;
+   const void *frame;
+
+   params = *(const void * const *)current_stack_pointer + 
STACK_FRAME_PARAMS;
+   frame = **(const void * const * const *)current_stack_pointer;
+
+/*
+ * low ---> high
+ * [backchain][metadata][params][local vars][saved registers][backchain]
+ *  ^^
+ *  |  allows copies only in this region |
+ *  ||
+ *params   frame
+ * The metadata region contains the saved LR, CR etc.
+ */
+   while (stack <= frame && frame < stackend) {
+   if (obj + len <= frame)
+   return obj >= params ? GOOD_FRAME : BAD_STACK;
+   params = frame + STACK_FRAME_PARAMS;
+   frame = *(const void * const *)frame;
+   }
+
+   return BAD_STACK;
+}
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __KERNEL__ */
-- 
2.34.1



[PATCH v2] powerpc: Implement arch_within_stack_frames

2023-01-18 Thread Nicholas Miehlbradt
Walks the stack when the copy_{to,from}_user address is in the stack to
ensure that the object being copied is entirely within a single stack
frame.

Substantially similar to the x86 implementation except using the back
chain to traverse the stack and identify stack frame boundaries.

Signed-off-by: Nicholas Miehlbradt 
---
v2: Rename PARAMETER_SAVE_OFFSET to STACK_FRAME_PARAMS
Add definitions of STACK_FRAME_PARAMS for PPC32 and remove dependency on
PPC64
Ignore the current stack frame and start with its parent, similar to x86

v1: 
https://lore.kernel.org/linuxppc-dev/20221214044252.1910657-1-nicho...@linux.ibm.com/
---
 arch/powerpc/Kconfig   |  1 +
 arch/powerpc/include/asm/thread_info.h | 36 ++
 2 files changed, 37 insertions(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 2ca5418457ed..97ca54773521 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -198,6 +198,7 @@ config PPC
select HAVE_ARCH_KASAN_VMALLOC  if HAVE_ARCH_KASAN
select HAVE_ARCH_KFENCE if ARCH_SUPPORTS_DEBUG_PAGEALLOC
select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
+   select HAVE_ARCH_WITHIN_STACK_FRAMES
select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS
select HAVE_ARCH_MMAP_RND_COMPAT_BITS   if COMPAT
diff --git a/arch/powerpc/include/asm/thread_info.h 
b/arch/powerpc/include/asm/thread_info.h
index af58f1ed3952..c5dce5f239c1 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -186,6 +186,42 @@ static inline bool test_thread_local_flags(unsigned int 
flags)
 #define is_elf2_task() (0)
 #endif
 
+#if defined(CONFIG_PPC64_ELF_ABI_V1)
+#define STACK_FRAME_PARAMS 48
+#elif defined(CONFIG_PPC64_ELF_ABI_V2)
+#define STACK_FRAME_PARAMS 32
+#elif defined(CONFIG_PPC32)
+#define STACK_FRAME_PARAMS 8
+#endif
+
+/*
+ * Walks up the stack frames to make sure that the specified object is
+ * entirely contained by a single stack frame.
+ *
+ * Returns:
+ * GOOD_FRAME  if within a frame
+ * BAD_STACK   if placed across a frame boundary (or outside stack)
+ */
+static inline int arch_within_stack_frames(const void * const stack,
+  const void * const stackend,
+  const void *obj, unsigned long len)
+{
+   const void *params;
+   const void *frame;
+
+   params = *(const void * const *)current_stack_pointer + 
STACK_FRAME_PARAMS;
+   frame = **(const void * const * const *)current_stack_pointer;
+
+   while (stack <= frame && frame < stackend) {
+   if (obj + len <= frame)
+   return obj >= params ? GOOD_FRAME : BAD_STACK;
+   params = frame + STACK_FRAME_PARAMS;
+   frame = *(const void * const *)frame;
+   }
+
+   return BAD_STACK;
+}
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __KERNEL__ */
-- 
2.34.1



Re: [PATCH] powerpc/64: Implement arch_within_stack_frames

2022-12-18 Thread Nicholas Miehlbradt




On 14/12/2022 10:39 pm, Nicholas Piggin wrote:

On Wed Dec 14, 2022 at 6:39 PM AEST, Christophe Leroy wrote:



Le 14/12/2022 à 05:42, Nicholas Miehlbradt a écrit :

Walks the stack when copy_{to,from}_user address is in the stack to
ensure that the object being copied is entirely within a single stack
frame.

Substantially similar to the x86 implementation except using the back
chain to traverse the stack and identify stack frame boundaries.

Signed-off-by: Nicholas Miehlbradt 
---
   arch/powerpc/Kconfig   |  1 +
   arch/powerpc/include/asm/thread_info.h | 38 ++
   2 files changed, 39 insertions(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 2ca5418457ed..4c59d139ea83 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -198,6 +198,7 @@ config PPC
select HAVE_ARCH_KASAN_VMALLOC  if HAVE_ARCH_KASAN
select HAVE_ARCH_KFENCE if ARCH_SUPPORTS_DEBUG_PAGEALLOC
select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
+   select HAVE_ARCH_WITHIN_STACK_FRAMESif PPC64


Why don't you do something that works for both PPC32 and PPC64 ?


+1


I'm not familiar with the 32-bit ABI, but from a quick glance through it
seems like the only thing that would need to change is to set
PARAMETER_SAVE_OFFSET (to be renamed in the next version as per
suggestions) to 8 bytes; the layout of the stack and the back chain
remains the same. Is there something else that I am missing or is that it?





select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS
select HAVE_ARCH_MMAP_RND_COMPAT_BITS   if COMPAT
diff --git a/arch/powerpc/include/asm/thread_info.h 
b/arch/powerpc/include/asm/thread_info.h
index af58f1ed3952..efdf39e07884 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -186,6 +186,44 @@ static inline bool test_thread_local_flags(unsigned int 
flags)
   #define is_elf2_task() (0)
   #endif
   
+#ifdef CONFIG_PPC64

+
+#ifdef CONFIG_PPC64_ELF_ABI_V1
+#define PARAMETER_SAVE_OFFSET 48
+#else
+#define PARAMETER_SAVE_OFFSET 32
+#endif


Why not use STACK_INT_FRAME_REGS, defined in asm/ptrace.h ?


I think use a STACK_FRAME prefixed define in asm/ptrace.h, but maybe
avoid overloading the STACK_INT_ stuff for this.




+
+/*
+ * Walks up the stack frames to make sure that the specified object is
+ * entirely contained by a single stack frame.
+ *
+ * Returns:
+ * GOOD_FRAME  if within a frame
+ * BAD_STACK   if placed across a frame boundary (or outside stack)
+ */
+static inline int arch_within_stack_frames(const void * const stack,
+  const void * const stackend,
+  const void *obj, unsigned long len)
+{
+   const void *frame;
+   const void *oldframe;
+
+   oldframe = (const void *)current_stack_pointer;
+   frame = *(const void * const *)oldframe;


This is not the same as x86, they start with the parent of the current
frame. I assume because the way the caller is set up (with a noinline
function from an out of line call), then there must be at least one
stack frame that does not have to be checked, but if I'm wrong about
that and there is some reason we need to be different it should be
commented..



Yes, this is something that I overlooked; the current frame is created
as a result of the call to copy_{to,from}_user and should therefore not
contain any data being copied.



+
+   while (stack <= frame && frame < stackend) {
+   if (obj + len <= frame)
+   return obj >= oldframe + PARAMETER_SAVE_OFFSET ?
+   GOOD_FRAME : BAD_STACK;
+   oldframe = frame;
+   frame = *(const void * const *)oldframe;
+   }
+
+   return BAD_STACK;
+}


What about:

+   const void *frame;
+   const void *params;
+
+   params = (const void *)current_stack_pointer + STACK_INT_FRAME_REGS;
+   frame = *(const void * const *)current_stack_pointer;
+
+   while (stack <= frame && frame < stackend) {
+   if (obj + len <= frame)
+   return obj >= params ? GOOD_FRAME : BAD_STACK;
+   params = frame + STACK_INT_FRAME_REGS;
+   frame = *(const void * const *)frame;
+   }
+
+   return BAD_STACK;


What about just copying x86's implementation, including using
__builtin_frame_address(1/2)? Are those builtins reliable for all
our targets and compiler versions?

From what I found they have undefined behavior. Since x86 has its use
guarded behind CONFIG_FRAME_POINTER, which I couldn't find used in the
ppc code, I decided it was best to avoid them. Could be wrong though.
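
For reference, a sketch of the x86-style approach being discussed
(reconstructed for illustration, not the exact upstream code; the function
name is made up to avoid confusion with the real arch_within_stack_frames):

#ifdef CONFIG_FRAME_POINTER
static inline int x86_style_within_stack_frames(const void * const stack,
						const void * const stackend,
						const void *obj,
						unsigned long len)
{
	/* Frame of our caller and of its caller, via the frame pointer chain */
	const void *oldframe = __builtin_frame_address(1);
	const void *frame = oldframe ? __builtin_frame_address(2) : NULL;

	while (stack <= frame && frame < stackend) {
		/*
		 * The object must end before the next saved frame pointer
		 * and start past the previous frame's saved fp/return
		 * address, i.e. fit entirely inside one frame's locals.
		 */
		if (obj + len <= frame)
			return obj >= oldframe + 2 * sizeof(void *) ?
				GOOD_FRAME : BAD_STACK;
		oldframe = frame;
		frame = *(const void * const *)frame;
	}
	return BAD_STACK;
}
#endif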



For bonus points, extract the x86 code out into asm-generic and
make it usable by both -

static inline int generic_within_stack_frames(unsigned int 

[PATCH] powerpc/64: Implement arch_within_stack_frames

2022-12-13 Thread Nicholas Miehlbradt
Walks the stack when copy_{to,from}_user address is in the stack to
ensure that the object being copied is entirely within a single stack
frame.

Substantially similar to the x86 implementation except using the back
chain to traverse the stack and identify stack frame boundaries.

Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/Kconfig   |  1 +
 arch/powerpc/include/asm/thread_info.h | 38 ++
 2 files changed, 39 insertions(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 2ca5418457ed..4c59d139ea83 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -198,6 +198,7 @@ config PPC
select HAVE_ARCH_KASAN_VMALLOC  if HAVE_ARCH_KASAN
select HAVE_ARCH_KFENCE if ARCH_SUPPORTS_DEBUG_PAGEALLOC
select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
+   select HAVE_ARCH_WITHIN_STACK_FRAMESif PPC64
select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS
select HAVE_ARCH_MMAP_RND_COMPAT_BITS   if COMPAT
diff --git a/arch/powerpc/include/asm/thread_info.h 
b/arch/powerpc/include/asm/thread_info.h
index af58f1ed3952..efdf39e07884 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -186,6 +186,44 @@ static inline bool test_thread_local_flags(unsigned int 
flags)
 #define is_elf2_task() (0)
 #endif
 
+#ifdef CONFIG_PPC64
+
+#ifdef CONFIG_PPC64_ELF_ABI_V1
+#define PARAMETER_SAVE_OFFSET 48
+#else
+#define PARAMETER_SAVE_OFFSET 32
+#endif
+
+/*
+ * Walks up the stack frames to make sure that the specified object is
+ * entirely contained by a single stack frame.
+ *
+ * Returns:
+ * GOOD_FRAME  if within a frame
+ * BAD_STACK   if placed across a frame boundary (or outside stack)
+ */
+static inline int arch_within_stack_frames(const void * const stack,
+  const void * const stackend,
+  const void *obj, unsigned long len)
+{
+   const void *frame;
+   const void *oldframe;
+
+   oldframe = (const void *)current_stack_pointer;
+   frame = *(const void * const *)oldframe;
+
+   while (stack <= frame && frame < stackend) {
+   if (obj + len <= frame)
+   return obj >= oldframe + PARAMETER_SAVE_OFFSET ?
+   GOOD_FRAME : BAD_STACK;
+   oldframe = frame;
+   frame = *(const void * const *)oldframe;
+   }
+
+   return BAD_STACK;
+}
+#endif /* CONFIG_PPC64 */
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __KERNEL__ */
-- 
2.34.1



[PATCH v3 2/4] powerpc/64s: Remove unneeded #ifdef CONFIG_DEBUG_PAGEALLOC in hash_utils

2022-09-26 Thread Nicholas Miehlbradt
From: Christophe Leroy 

debug_pagealloc_enabled() is always defined and constant folds to
'false' when CONFIG_DEBUG_PAGEALLOC is not enabled.

Remove the #ifdefs, the code and associated static variables will
be optimised out by the compiler when CONFIG_DEBUG_PAGEALLOC is
not defined.
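
A minimal sketch of the pattern being relied on (hypothetical config
option and helper names, not the actual mm.h definitions):

static inline bool feature_enabled(void)
{
	return IS_ENABLED(CONFIG_MY_FEATURE);	/* constant 0 or 1 */
}

static unsigned long my_feature_count;	/* only used in the guarded code */

static void example(unsigned long idx)
{
	/*
	 * With CONFIG_MY_FEATURE=n the condition is constant false, so the
	 * branch and, in turn, the otherwise-unreferenced static above are
	 * discarded by the compiler.
	 */
	if (feature_enabled() && idx < my_feature_count)
		my_feature_count++;
}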

Signed-off-by: Christophe Leroy 
Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/mm/book3s64/hash_utils.c | 9 ++---
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/hash_utils.c 
b/arch/powerpc/mm/book3s64/hash_utils.c
index fc92613dc2bf..e63ff401a6ea 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -123,11 +123,8 @@ EXPORT_SYMBOL_GPL(mmu_slb_size);
 #ifdef CONFIG_PPC_64K_PAGES
 int mmu_ci_restrictions;
 #endif
-#ifdef CONFIG_DEBUG_PAGEALLOC
 static u8 *linear_map_hash_slots;
 static unsigned long linear_map_hash_count;
-static DEFINE_SPINLOCK(linear_map_hash_lock);
-#endif /* CONFIG_DEBUG_PAGEALLOC */
 struct mmu_hash_ops mmu_hash_ops;
 EXPORT_SYMBOL(mmu_hash_ops);
 
@@ -427,11 +424,9 @@ int htab_bolt_mapping(unsigned long vstart, unsigned long 
vend,
break;
 
cond_resched();
-#ifdef CONFIG_DEBUG_PAGEALLOC
if (debug_pagealloc_enabled() &&
(paddr >> PAGE_SHIFT) < linear_map_hash_count)
linear_map_hash_slots[paddr >> PAGE_SHIFT] = ret | 0x80;
-#endif /* CONFIG_DEBUG_PAGEALLOC */
}
return ret < 0 ? ret : 0;
 }
@@ -1066,7 +1061,6 @@ static void __init htab_initialize(void)
 
prot = pgprot_val(PAGE_KERNEL);
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
if (debug_pagealloc_enabled()) {
linear_map_hash_count = memblock_end_of_DRAM() >> PAGE_SHIFT;
linear_map_hash_slots = memblock_alloc_try_nid(
@@ -1076,7 +1070,6 @@ static void __init htab_initialize(void)
panic("%s: Failed to allocate %lu bytes max_addr=%pa\n",
  __func__, linear_map_hash_count, &ppc64_rma_size);
}
-#endif /* CONFIG_DEBUG_PAGEALLOC */
 
/* create bolted the linear mapping in the hash table */
for_each_mem_range(i, &base, &end) {
@@ -1991,6 +1984,8 @@ long hpte_insert_repeating(unsigned long hash, unsigned 
long vpn,
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
+static DEFINE_SPINLOCK(linear_map_hash_lock);
+
 static void kernel_map_linear_page(unsigned long vaddr, unsigned long lmi)
 {
unsigned long hash;
-- 
2.34.1



[PATCH v3 4/4] powerpc/64s: Enable KFENCE on book3s64

2022-09-26 Thread Nicholas Miehlbradt
KFENCE support was added for ppc32 in commit 90cbac0e995d
("powerpc: Enable KFENCE for PPC32").
Enable KFENCE on ppc64 architecture with hash and radix MMUs.
It uses the same mechanism as debug pagealloc to
protect/unprotect pages. All KFENCE kunit tests pass on both
MMUs.

KFENCE memory is initially allocated using memblock but is
later marked as SLAB allocated. This necessitates the change
to __pud_free to ensure that the KFENCE pages are freed
appropriately.

Based on previous work by Christophe Leroy and Jordan Niethe.

Signed-off-by: Nicholas Miehlbradt 
---
v2: Refactor
v3: Simplified ABI version check
---
 arch/powerpc/Kconfig |  2 +-
 arch/powerpc/include/asm/book3s/64/pgalloc.h |  6 --
 arch/powerpc/include/asm/book3s/64/pgtable.h |  2 +-
 arch/powerpc/include/asm/kfence.h| 15 +++
 arch/powerpc/mm/book3s64/hash_utils.c| 10 +-
 arch/powerpc/mm/book3s64/radix_pgtable.c |  6 --
 6 files changed, 30 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index a4f8a5276e5c..f7dd0f49510d 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -194,7 +194,7 @@ config PPC
select HAVE_ARCH_KASAN  if PPC32 && PPC_PAGE_SHIFT <= 14
select HAVE_ARCH_KASAN  if PPC_RADIX_MMU
select HAVE_ARCH_KASAN_VMALLOC  if HAVE_ARCH_KASAN
-   select HAVE_ARCH_KFENCE if PPC_BOOK3S_32 || PPC_8xx || 
40x
+   select HAVE_ARCH_KFENCE if ARCH_SUPPORTS_DEBUG_PAGEALLOC
select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS
select HAVE_ARCH_MMAP_RND_COMPAT_BITS   if COMPAT
diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h 
b/arch/powerpc/include/asm/book3s/64/pgalloc.h
index e1af0b394ceb..dd2cff53a111 100644
--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
@@ -113,9 +113,11 @@ static inline void __pud_free(pud_t *pud)
 
/*
 * Early pud pages allocated via memblock allocator
-* can't be directly freed to slab
+* can't be directly freed to slab. KFENCE pages have
+* both reserved and slab flags set, so they need to be freed
+* with kmem_cache_free().
 */
-   if (PageReserved(page))
+   if (PageReserved(page) && !PageSlab(page))
free_reserved_page(page);
else
kmem_cache_free(PGT_CACHE(PUD_CACHE_INDEX), pud);
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h 
b/arch/powerpc/include/asm/book3s/64/pgtable.h
index cb9d5fd39d7f..fd5d800f2836 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -1123,7 +1123,7 @@ static inline void vmemmap_remove_mapping(unsigned long 
start,
 }
 #endif
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
+#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
 static inline void __kernel_map_pages(struct page *page, int numpages, int 
enable)
 {
if (radix_enabled())
diff --git a/arch/powerpc/include/asm/kfence.h 
b/arch/powerpc/include/asm/kfence.h
index a9846b68c6b9..6fd2b4d486c5 100644
--- a/arch/powerpc/include/asm/kfence.h
+++ b/arch/powerpc/include/asm/kfence.h
@@ -11,11 +11,25 @@
 #include 
 #include 
 
+#ifdef CONFIG_PPC64_ELF_ABI_V1
+#define ARCH_FUNC_PREFIX "."
+#endif
+
 static inline bool arch_kfence_init_pool(void)
 {
return true;
 }
 
+#ifdef CONFIG_PPC64
+static inline bool kfence_protect_page(unsigned long addr, bool protect)
+{
+   struct page *page = virt_to_page(addr);
+
+   __kernel_map_pages(page, 1, !protect);
+
+   return true;
+}
+#else
 static inline bool kfence_protect_page(unsigned long addr, bool protect)
 {
pte_t *kpte = virt_to_kpte(addr);
@@ -29,5 +43,6 @@ static inline bool kfence_protect_page(unsigned long addr, 
bool protect)
 
return true;
 }
+#endif
 
 #endif /* __ASM_POWERPC_KFENCE_H */
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c 
b/arch/powerpc/mm/book3s64/hash_utils.c
index b37412fe5930..9cceaa5998a3 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -424,7 +424,7 @@ int htab_bolt_mapping(unsigned long vstart, unsigned long 
vend,
break;
 
cond_resched();
-   if (debug_pagealloc_enabled() &&
+   if (debug_pagealloc_enabled_or_kfence() &&
(paddr >> PAGE_SHIFT) < linear_map_hash_count)
linear_map_hash_slots[paddr >> PAGE_SHIFT] = ret | 0x80;
}
@@ -773,7 +773,7 @@ static void __init htab_init_page_sizes(void)
bool aligned = true;
init_hpte_page_sizes();
 
-   if (!debug_pagealloc_enabled()) {
+   if (!debug_pagealloc_enabled_or_kfence()) {
/*
 * Pick a size for the linear mapping. C

[PATCH v3 3/4] powerpc/64s: Allow double call of kernel_[un]map_linear_page()

2022-09-26 Thread Nicholas Miehlbradt
From: Christophe Leroy 

If the page is already mapped resp. already unmapped, bail out.

Signed-off-by: Christophe Leroy 
Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/mm/book3s64/hash_utils.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/book3s64/hash_utils.c 
b/arch/powerpc/mm/book3s64/hash_utils.c
index e63ff401a6ea..b37412fe5930 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -2000,6 +2000,9 @@ static void kernel_map_linear_page(unsigned long vaddr, 
unsigned long lmi)
if (!vsid)
return;
 
+   if (linear_map_hash_slots[lmi] & 0x80)
+   return;
+
ret = hpte_insert_repeating(hash, vpn, __pa(vaddr), mode,
HPTE_V_BOLTED,
mmu_linear_psize, mmu_kernel_ssize);
@@ -2019,7 +2022,10 @@ static void kernel_unmap_linear_page(unsigned long 
vaddr, unsigned long lmi)
 
hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize);
spin_lock(&linear_map_hash_lock);
-   BUG_ON(!(linear_map_hash_slots[lmi] & 0x80));
+   if (!(linear_map_hash_slots[lmi] & 0x80)) {
+   spin_unlock(&linear_map_hash_lock);
+   return;
+   }
hidx = linear_map_hash_slots[lmi] & 0x7f;
linear_map_hash_slots[lmi] = 0;
spin_unlock(&linear_map_hash_lock);
-- 
2.34.1



[PATCH v3 1/4] powerpc/64s: Add DEBUG_PAGEALLOC for radix

2022-09-26 Thread Nicholas Miehlbradt
There is support for DEBUG_PAGEALLOC on hash but not on radix.
Add support on radix.

Signed-off-by: Nicholas Miehlbradt 
---
v2: Revert change to radix_memory_block_size, instead set the size
in radix_init_pgtable and radix__create_section_mapping directly.
v3: Remove max_mapping_size argument of create_physical_mapping
as the value is the same at all call sites.
---
 arch/powerpc/mm/book3s64/radix_pgtable.c | 18 ++
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c 
b/arch/powerpc/mm/book3s64/radix_pgtable.c
index db2f3d193448..daa40e3b74dd 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -30,6 +30,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 
@@ -267,13 +268,16 @@ static unsigned long next_boundary(unsigned long addr, 
unsigned long end)
 
 static int __meminit create_physical_mapping(unsigned long start,
 unsigned long end,
-unsigned long max_mapping_size,
 int nid, pgprot_t _prot)
 {
unsigned long vaddr, addr, mapping_size = 0;
bool prev_exec, exec = false;
pgprot_t prot;
int psize;
+   unsigned long max_mapping_size = radix_mem_block_size;
+
+   if (debug_pagealloc_enabled())
+   max_mapping_size = PAGE_SIZE;
 
start = ALIGN(start, PAGE_SIZE);
end   = ALIGN_DOWN(end, PAGE_SIZE);
@@ -352,7 +356,6 @@ static void __init radix_init_pgtable(void)
}
 
WARN_ON(create_physical_mapping(start, end,
-   radix_mem_block_size,
-1, PAGE_KERNEL));
}
 
@@ -850,7 +853,7 @@ int __meminit radix__create_section_mapping(unsigned long 
start,
}
 
return create_physical_mapping(__pa(start), __pa(end),
-  radix_mem_block_size, nid, prot);
+  nid, prot);
 }
 
 int __meminit radix__remove_section_mapping(unsigned long start, unsigned long 
end)
@@ -899,7 +902,14 @@ void __meminit radix__vmemmap_remove_mapping(unsigned long 
start, unsigned long
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void radix__kernel_map_pages(struct page *page, int numpages, int enable)
 {
-   pr_warn_once("DEBUG_PAGEALLOC not supported in radix mode\n");
+   unsigned long addr;
+
+   addr = (unsigned long)page_address(page);
+
+   if (enable)
+   set_memory_p(addr, numpages);
+   else
+   set_memory_np(addr, numpages);
 }
 #endif
 
-- 
2.34.1



[PATCH v2 3/4] powerpc/64s: Allow double call of kernel_[un]map_linear_page()

2022-09-20 Thread Nicholas Miehlbradt
From: Christophe Leroy 

If the page is already mapped resp. already unmapped, bail out.

Signed-off-by: Christophe Leroy 
Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/mm/book3s64/hash_utils.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/book3s64/hash_utils.c 
b/arch/powerpc/mm/book3s64/hash_utils.c
index e63ff401a6ea..b37412fe5930 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -2000,6 +2000,9 @@ static void kernel_map_linear_page(unsigned long vaddr, 
unsigned long lmi)
if (!vsid)
return;
 
+   if (linear_map_hash_slots[lmi] & 0x80)
+   return;
+
ret = hpte_insert_repeating(hash, vpn, __pa(vaddr), mode,
HPTE_V_BOLTED,
mmu_linear_psize, mmu_kernel_ssize);
@@ -2019,7 +2022,10 @@ static void kernel_unmap_linear_page(unsigned long 
vaddr, unsigned long lmi)
 
hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize);
spin_lock(&linear_map_hash_lock);
-   BUG_ON(!(linear_map_hash_slots[lmi] & 0x80));
+   if (!(linear_map_hash_slots[lmi] & 0x80)) {
+   spin_unlock(&linear_map_hash_lock);
+   return;
+   }
hidx = linear_map_hash_slots[lmi] & 0x7f;
linear_map_hash_slots[lmi] = 0;
spin_unlock(&linear_map_hash_lock);
-- 
2.34.1



[PATCH v2 4/4] powerpc/64s: Enable KFENCE on book3s64

2022-09-20 Thread Nicholas Miehlbradt
KFENCE support was added for ppc32 in commit 90cbac0e995d
("powerpc: Enable KFENCE for PPC32").
Enable KFENCE on ppc64 architecture with hash and radix MMUs.
It uses the same mechanism as debug pagealloc to
protect/unprotect pages. All KFENCE kunit tests pass on both
MMUs.

KFENCE memory is initially allocated using memblock but is
later marked as SLAB allocated. This necessitates the change
to __pud_free to ensure that the KFENCE pages are freed
appropriately.

Based on previous work by Christophe Leroy and Jordan Niethe.

Signed-off-by: Nicholas Miehlbradt 
---
v2: Refactor
---
 arch/powerpc/Kconfig |  2 +-
 arch/powerpc/include/asm/book3s/64/pgalloc.h |  6 --
 arch/powerpc/include/asm/book3s/64/pgtable.h |  2 +-
 arch/powerpc/include/asm/kfence.h| 15 +++
 arch/powerpc/mm/book3s64/hash_utils.c| 10 +-
 arch/powerpc/mm/book3s64/radix_pgtable.c |  8 +---
 6 files changed, 31 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index a4f8a5276e5c..f7dd0f49510d 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -194,7 +194,7 @@ config PPC
select HAVE_ARCH_KASAN  if PPC32 && PPC_PAGE_SHIFT <= 14
select HAVE_ARCH_KASAN  if PPC_RADIX_MMU
select HAVE_ARCH_KASAN_VMALLOC  if HAVE_ARCH_KASAN
-   select HAVE_ARCH_KFENCE if PPC_BOOK3S_32 || PPC_8xx || 
40x
+   select HAVE_ARCH_KFENCE if ARCH_SUPPORTS_DEBUG_PAGEALLOC
select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS
select HAVE_ARCH_MMAP_RND_COMPAT_BITS   if COMPAT
diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h 
b/arch/powerpc/include/asm/book3s/64/pgalloc.h
index e1af0b394ceb..dd2cff53a111 100644
--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
@@ -113,9 +113,11 @@ static inline void __pud_free(pud_t *pud)
 
/*
 * Early pud pages allocated via memblock allocator
-* can't be directly freed to slab
+* can't be directly freed to slab. KFENCE pages have
+* both reserved and slab flags set, so they need to be freed
+* with kmem_cache_free().
 */
-   if (PageReserved(page))
+   if (PageReserved(page) && !PageSlab(page))
free_reserved_page(page);
else
kmem_cache_free(PGT_CACHE(PUD_CACHE_INDEX), pud);
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h 
b/arch/powerpc/include/asm/book3s/64/pgtable.h
index cb9d5fd39d7f..fd5d800f2836 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -1123,7 +1123,7 @@ static inline void vmemmap_remove_mapping(unsigned long 
start,
 }
 #endif
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
+#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
 static inline void __kernel_map_pages(struct page *page, int numpages, int 
enable)
 {
if (radix_enabled())
diff --git a/arch/powerpc/include/asm/kfence.h 
b/arch/powerpc/include/asm/kfence.h
index a9846b68c6b9..cff60983e88d 100644
--- a/arch/powerpc/include/asm/kfence.h
+++ b/arch/powerpc/include/asm/kfence.h
@@ -11,11 +11,25 @@
 #include 
 #include 
 
+#if defined(CONFIG_PPC64) && !defined(CONFIG_PPC64_ELF_ABI_V2)
+#define ARCH_FUNC_PREFIX "."
+#endif
+
 static inline bool arch_kfence_init_pool(void)
 {
return true;
 }
 
+#ifdef CONFIG_PPC64
+static inline bool kfence_protect_page(unsigned long addr, bool protect)
+{
+   struct page *page = virt_to_page(addr);
+
+   __kernel_map_pages(page, 1, !protect);
+
+   return true;
+}
+#else
 static inline bool kfence_protect_page(unsigned long addr, bool protect)
 {
pte_t *kpte = virt_to_kpte(addr);
@@ -29,5 +43,6 @@ static inline bool kfence_protect_page(unsigned long addr, 
bool protect)
 
return true;
 }
+#endif
 
 #endif /* __ASM_POWERPC_KFENCE_H */
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c 
b/arch/powerpc/mm/book3s64/hash_utils.c
index b37412fe5930..9cceaa5998a3 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -424,7 +424,7 @@ int htab_bolt_mapping(unsigned long vstart, unsigned long 
vend,
break;
 
cond_resched();
-   if (debug_pagealloc_enabled() &&
+   if (debug_pagealloc_enabled_or_kfence() &&
(paddr >> PAGE_SHIFT) < linear_map_hash_count)
linear_map_hash_slots[paddr >> PAGE_SHIFT] = ret | 0x80;
}
@@ -773,7 +773,7 @@ static void __init htab_init_page_sizes(void)
bool aligned = true;
init_hpte_page_sizes();
 
-   if (!debug_pagealloc_enabled()) {
+   if (!debug_pagealloc_enabled_or_kfence()) {
/*
 * Pick a size for the linear ma

[PATCH v2 2/4] powerpc/64s: Remove unneeded #ifdef CONFIG_DEBUG_PAGEALLOC in hash_utils

2022-09-20 Thread Nicholas Miehlbradt
From: Christophe Leroy 

debug_pagealloc_enabled() is always defined and constant folds to
'false' when CONFIG_DEBUG_PAGEALLOC is not enabled.

Remove the #ifdefs, the code and associated static variables will
be optimised out by the compiler when CONFIG_DEBUG_PAGEALLOC is
not defined.

Signed-off-by: Christophe Leroy 
Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/mm/book3s64/hash_utils.c | 9 ++---
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/hash_utils.c 
b/arch/powerpc/mm/book3s64/hash_utils.c
index fc92613dc2bf..e63ff401a6ea 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -123,11 +123,8 @@ EXPORT_SYMBOL_GPL(mmu_slb_size);
 #ifdef CONFIG_PPC_64K_PAGES
 int mmu_ci_restrictions;
 #endif
-#ifdef CONFIG_DEBUG_PAGEALLOC
 static u8 *linear_map_hash_slots;
 static unsigned long linear_map_hash_count;
-static DEFINE_SPINLOCK(linear_map_hash_lock);
-#endif /* CONFIG_DEBUG_PAGEALLOC */
 struct mmu_hash_ops mmu_hash_ops;
 EXPORT_SYMBOL(mmu_hash_ops);
 
@@ -427,11 +424,9 @@ int htab_bolt_mapping(unsigned long vstart, unsigned long 
vend,
break;
 
cond_resched();
-#ifdef CONFIG_DEBUG_PAGEALLOC
if (debug_pagealloc_enabled() &&
(paddr >> PAGE_SHIFT) < linear_map_hash_count)
linear_map_hash_slots[paddr >> PAGE_SHIFT] = ret | 0x80;
-#endif /* CONFIG_DEBUG_PAGEALLOC */
}
return ret < 0 ? ret : 0;
 }
@@ -1066,7 +1061,6 @@ static void __init htab_initialize(void)
 
prot = pgprot_val(PAGE_KERNEL);
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
if (debug_pagealloc_enabled()) {
linear_map_hash_count = memblock_end_of_DRAM() >> PAGE_SHIFT;
linear_map_hash_slots = memblock_alloc_try_nid(
@@ -1076,7 +1070,6 @@ static void __init htab_initialize(void)
panic("%s: Failed to allocate %lu bytes max_addr=%pa\n",
  __func__, linear_map_hash_count, &ppc64_rma_size);
}
-#endif /* CONFIG_DEBUG_PAGEALLOC */
 
/* create bolted the linear mapping in the hash table */
for_each_mem_range(i, &base, &end) {
@@ -1991,6 +1984,8 @@ long hpte_insert_repeating(unsigned long hash, unsigned 
long vpn,
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
+static DEFINE_SPINLOCK(linear_map_hash_lock);
+
 static void kernel_map_linear_page(unsigned long vaddr, unsigned long lmi)
 {
unsigned long hash;
-- 
2.34.1



[PATCH v2 1/4] powerpc/64s: Add DEBUG_PAGEALLOC for radix

2022-09-20 Thread Nicholas Miehlbradt
There is support for DEBUG_PAGEALLOC on hash but not on radix.
Add support on radix.

Signed-off-by: Nicholas Miehlbradt 
---
v2: Revert change to radix_memory_block_size, instead set the size
in radix_init_pgtable and radix__create_section_mapping directly.
---
 arch/powerpc/mm/book3s64/radix_pgtable.c | 23 ---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c 
b/arch/powerpc/mm/book3s64/radix_pgtable.c
index db2f3d193448..623455c195d8 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -30,6 +30,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 
@@ -332,6 +333,10 @@ static void __init radix_init_pgtable(void)
unsigned long rts_field;
phys_addr_t start, end;
u64 i;
+   unsigned long size = radix_mem_block_size;
+
+   if (debug_pagealloc_enabled())
+   size = PAGE_SIZE;
 
/* We don't support slb for radix */
slb_set_size(0);
@@ -352,7 +357,7 @@ static void __init radix_init_pgtable(void)
}
 
WARN_ON(create_physical_mapping(start, end,
-   radix_mem_block_size,
+   size,
-1, PAGE_KERNEL));
}
 
@@ -844,13 +849,18 @@ int __meminit radix__create_section_mapping(unsigned long 
start,
unsigned long end, int nid,
pgprot_t prot)
 {
+   unsigned long size = radix_mem_block_size;
+
+   if (debug_pagealloc_enabled())
+   size = PAGE_SIZE;
+
if (end >= RADIX_VMALLOC_START) {
pr_warn("Outside the supported range\n");
return -1;
}
 
return create_physical_mapping(__pa(start), __pa(end),
-  radix_mem_block_size, nid, prot);
+  size, nid, prot);
 }
 
 int __meminit radix__remove_section_mapping(unsigned long start, unsigned long 
end)
@@ -899,7 +909,14 @@ void __meminit radix__vmemmap_remove_mapping(unsigned long 
start, unsigned long
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void radix__kernel_map_pages(struct page *page, int numpages, int enable)
 {
-   pr_warn_once("DEBUG_PAGEALLOC not supported in radix mode\n");
+   unsigned long addr;
+
+   addr = (unsigned long)page_address(page);
+
+   if (enable)
+   set_memory_p(addr, numpages);
+   else
+   set_memory_np(addr, numpages);
 }
 #endif
 
-- 
2.34.1



[PATCH 3/4] powerpc/64s: Allow double call of kernel_[un]map_linear_page()

2022-09-18 Thread Nicholas Miehlbradt
From: Christophe Leroy 

If the page is already mapped resp. already unmapped, bail out.

Signed-off-by: Christophe Leroy 
Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/mm/book3s64/hash_utils.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/book3s64/hash_utils.c 
b/arch/powerpc/mm/book3s64/hash_utils.c
index e63ff401a6ea..b37412fe5930 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -2000,6 +2000,9 @@ static void kernel_map_linear_page(unsigned long vaddr, 
unsigned long lmi)
if (!vsid)
return;
 
+   if (linear_map_hash_slots[lmi] & 0x80)
+   return;
+
ret = hpte_insert_repeating(hash, vpn, __pa(vaddr), mode,
HPTE_V_BOLTED,
mmu_linear_psize, mmu_kernel_ssize);
@@ -2019,7 +2022,10 @@ static void kernel_unmap_linear_page(unsigned long 
vaddr, unsigned long lmi)
 
hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize);
spin_lock(&linear_map_hash_lock);
-   BUG_ON(!(linear_map_hash_slots[lmi] & 0x80));
+   if (!(linear_map_hash_slots[lmi] & 0x80)) {
+   spin_unlock(&linear_map_hash_lock);
+   return;
+   }
hidx = linear_map_hash_slots[lmi] & 0x7f;
linear_map_hash_slots[lmi] = 0;
spin_unlock(&linear_map_hash_lock);
-- 
2.34.1



[PATCH 2/4] powerpc/64s: Remove unneeded #ifdef CONFIG_DEBUG_PAGEALLOC in hash_utils

2022-09-18 Thread Nicholas Miehlbradt
From: Christophe Leroy 

debug_pagealloc_enabled() is always defined and constant folds to
'false' when CONFIG_DEBUG_PAGEALLOC is not enabled.

Remove the #ifdefs, the code and associated static variables will
be optimised out by the compiler when CONFIG_DEBUG_PAGEALLOC is
not defined.

Signed-off-by: Christophe Leroy 
Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/mm/book3s64/hash_utils.c | 9 ++---
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/hash_utils.c 
b/arch/powerpc/mm/book3s64/hash_utils.c
index fc92613dc2bf..e63ff401a6ea 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -123,11 +123,8 @@ EXPORT_SYMBOL_GPL(mmu_slb_size);
 #ifdef CONFIG_PPC_64K_PAGES
 int mmu_ci_restrictions;
 #endif
-#ifdef CONFIG_DEBUG_PAGEALLOC
 static u8 *linear_map_hash_slots;
 static unsigned long linear_map_hash_count;
-static DEFINE_SPINLOCK(linear_map_hash_lock);
-#endif /* CONFIG_DEBUG_PAGEALLOC */
 struct mmu_hash_ops mmu_hash_ops;
 EXPORT_SYMBOL(mmu_hash_ops);
 
@@ -427,11 +424,9 @@ int htab_bolt_mapping(unsigned long vstart, unsigned long 
vend,
break;
 
cond_resched();
-#ifdef CONFIG_DEBUG_PAGEALLOC
if (debug_pagealloc_enabled() &&
(paddr >> PAGE_SHIFT) < linear_map_hash_count)
linear_map_hash_slots[paddr >> PAGE_SHIFT] = ret | 0x80;
-#endif /* CONFIG_DEBUG_PAGEALLOC */
}
return ret < 0 ? ret : 0;
 }
@@ -1066,7 +1061,6 @@ static void __init htab_initialize(void)
 
prot = pgprot_val(PAGE_KERNEL);
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
if (debug_pagealloc_enabled()) {
linear_map_hash_count = memblock_end_of_DRAM() >> PAGE_SHIFT;
linear_map_hash_slots = memblock_alloc_try_nid(
@@ -1076,7 +1070,6 @@ static void __init htab_initialize(void)
panic("%s: Failed to allocate %lu bytes max_addr=%pa\n",
  __func__, linear_map_hash_count, &ppc64_rma_size);
}
-#endif /* CONFIG_DEBUG_PAGEALLOC */
 
/* create bolted the linear mapping in the hash table */
for_each_mem_range(i, &base, &end) {
@@ -1991,6 +1984,8 @@ long hpte_insert_repeating(unsigned long hash, unsigned 
long vpn,
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
+static DEFINE_SPINLOCK(linear_map_hash_lock);
+
 static void kernel_map_linear_page(unsigned long vaddr, unsigned long lmi)
 {
unsigned long hash;
-- 
2.34.1



[PATCH 4/4] powerpc/64s: Enable KFENCE on book3s64

2022-09-18 Thread Nicholas Miehlbradt
KFENCE support was added for ppc32 in commit 90cbac0e995d
("powerpc: Enable KFENCE for PPC32").
Enable KFENCE on ppc64 architecture with hash and radix MMUs.
It uses the same mechanism as debug pagealloc to
protect/unprotect pages. All KFENCE kunit tests pass on both
MMUs.

KFENCE memory is initially allocated using memblock but is
later marked as SLAB allocated. This necessitates the change
to __pud_free to ensure that the KFENCE pages are freed
appropriately.

Based on previous work by Christophe Leroy and Jordan Niethe.

Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/Kconfig |  2 +-
 arch/powerpc/include/asm/book3s/64/pgalloc.h |  6 --
 arch/powerpc/include/asm/book3s/64/pgtable.h |  2 +-
 arch/powerpc/include/asm/kfence.h| 18 ++
 arch/powerpc/mm/book3s64/hash_utils.c| 10 +-
 arch/powerpc/mm/book3s64/radix_pgtable.c |  8 +---
 6 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index a4f8a5276e5c..f7dd0f49510d 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -194,7 +194,7 @@ config PPC
select HAVE_ARCH_KASAN  if PPC32 && PPC_PAGE_SHIFT <= 14
select HAVE_ARCH_KASAN  if PPC_RADIX_MMU
select HAVE_ARCH_KASAN_VMALLOC  if HAVE_ARCH_KASAN
-   select HAVE_ARCH_KFENCE if PPC_BOOK3S_32 || PPC_8xx || 
40x
+   select HAVE_ARCH_KFENCE if ARCH_SUPPORTS_DEBUG_PAGEALLOC
select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS
select HAVE_ARCH_MMAP_RND_COMPAT_BITS   if COMPAT
diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h 
b/arch/powerpc/include/asm/book3s/64/pgalloc.h
index e1af0b394ceb..dd2cff53a111 100644
--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
@@ -113,9 +113,11 @@ static inline void __pud_free(pud_t *pud)
 
/*
 * Early pud pages allocated via memblock allocator
-* can't be directly freed to slab
+* can't be directly freed to slab. KFENCE pages have
+* both reserved and slab flags set, so they need to be freed
+* with kmem_cache_free().
 */
-   if (PageReserved(page))
+   if (PageReserved(page) && !PageSlab(page))
free_reserved_page(page);
else
kmem_cache_free(PGT_CACHE(PUD_CACHE_INDEX), pud);
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h 
b/arch/powerpc/include/asm/book3s/64/pgtable.h
index cb9d5fd39d7f..fd5d800f2836 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -1123,7 +1123,7 @@ static inline void vmemmap_remove_mapping(unsigned long 
start,
 }
 #endif
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
+#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
 static inline void __kernel_map_pages(struct page *page, int numpages, int 
enable)
 {
if (radix_enabled())
diff --git a/arch/powerpc/include/asm/kfence.h 
b/arch/powerpc/include/asm/kfence.h
index a9846b68c6b9..33edbc312a51 100644
--- a/arch/powerpc/include/asm/kfence.h
+++ b/arch/powerpc/include/asm/kfence.h
@@ -11,11 +11,28 @@
 #include 
 #include 
 
+#if defined(CONFIG_PPC64) && !defined(CONFIG_PPC64_ELF_ABI_V2)
+#define ARCH_FUNC_PREFIX "."
+#endif
+
 static inline bool arch_kfence_init_pool(void)
 {
return true;
 }
 
+#ifdef CONFIG_PPC64
+static inline bool kfence_protect_page(unsigned long addr, bool protect)
+{
+   struct page *page = virt_to_page(addr);
+
+   if (protect)
+   __kernel_map_pages(page, 1, 0);
+   else
+   __kernel_map_pages(page, 1, 1);
+
+   return true;
+}
+#else
 static inline bool kfence_protect_page(unsigned long addr, bool protect)
 {
pte_t *kpte = virt_to_kpte(addr);
@@ -29,5 +46,6 @@ static inline bool kfence_protect_page(unsigned long addr, 
bool protect)
 
return true;
 }
+#endif
 
 #endif /* __ASM_POWERPC_KFENCE_H */
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c 
b/arch/powerpc/mm/book3s64/hash_utils.c
index b37412fe5930..9cceaa5998a3 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -424,7 +424,7 @@ int htab_bolt_mapping(unsigned long vstart, unsigned long 
vend,
break;
 
cond_resched();
-   if (debug_pagealloc_enabled() &&
+   if (debug_pagealloc_enabled_or_kfence() &&
(paddr >> PAGE_SHIFT) < linear_map_hash_count)
linear_map_hash_slots[paddr >> PAGE_SHIFT] = ret | 0x80;
}
@@ -773,7 +773,7 @@ static void __init htab_init_page_sizes(void)
bool aligned = true;
init_hpte_page_sizes();
 
-   if (!debug_pagealloc_enabled()) {
+   if (!debug_pagealloc_enabled_or_kfence()) {

[PATCH 1/4] powerpc/64s: Add DEBUG_PAGEALLOC for radix

2022-09-18 Thread Nicholas Miehlbradt
There is support for DEBUG_PAGEALLOC on hash but not on radix.
Add support on radix.

Signed-off-by: Nicholas Miehlbradt 
---
 arch/powerpc/mm/book3s64/radix_pgtable.c | 16 +++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c 
b/arch/powerpc/mm/book3s64/radix_pgtable.c
index db2f3d193448..483c99bfbde5 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -30,6 +30,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 
@@ -503,6 +504,9 @@ static unsigned long __init radix_memory_block_size(void)
 {
unsigned long mem_block_size = MIN_MEMORY_BLOCK_SIZE;
 
+   if (debug_pagealloc_enabled())
+   return PAGE_SIZE;
+
/*
 * OPAL firmware feature is set by now. Hence we are ok
 * to test OPAL feature.
@@ -519,6 +523,9 @@ static unsigned long __init radix_memory_block_size(void)
 
 static unsigned long __init radix_memory_block_size(void)
 {
+   if (debug_pagealloc_enabled())
+   return PAGE_SIZE;
+
return 1UL * 1024 * 1024 * 1024;
 }
 
@@ -899,7 +906,14 @@ void __meminit radix__vmemmap_remove_mapping(unsigned long 
start, unsigned long
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void radix__kernel_map_pages(struct page *page, int numpages, int enable)
 {
-   pr_warn_once("DEBUG_PAGEALLOC not supported in radix mode\n");
+   unsigned long addr;
+
+   addr = (unsigned long)page_address(page);
+
+   if (enable)
+   set_memory_p(addr, numpages);
+   else
+   set_memory_np(addr, numpages);
 }
 #endif
 
-- 
2.34.1



Re: [PATCH v4 2/2] selftests/powerpc: Add a test for execute-only memory

2022-08-18 Thread Nicholas Miehlbradt

On 17/8/2022 4:15 pm, Jordan Niethe wrote:

On Wed, 2022-08-17 at 15:06 +1000, Russell Currey wrote:

From: Nicholas Miehlbradt 

This selftest is designed to cover execute-only protections
on the Radix MMU but will also work with Hash.

The tests are based on those found in pkey_exec_test with modifications
to use the generic mprotect() instead of the pkey variants.


Would it make sense to rename pkey_exec_test to exec_test and have this test be
a part of that?

I think that might make it unnecessarily complex. The checks needed when
testing with pkeys would mean that it would be necessary to check whether
pkeys are enabled and choose which set of tests to run depending on the
result. The differences are substantial enough that it would be
challenging to combine them into a single set of tests.




Signed-off-by: Nicholas Miehlbradt 
Signed-off-by: Russell Currey 
---
v4: new

  tools/testing/selftests/powerpc/mm/Makefile   |   3 +-
  .../testing/selftests/powerpc/mm/exec_prot.c  | 231 ++
  2 files changed, 233 insertions(+), 1 deletion(-)
  create mode 100644 tools/testing/selftests/powerpc/mm/exec_prot.c

diff --git a/tools/testing/selftests/powerpc/mm/Makefile 
b/tools/testing/selftests/powerpc/mm/Makefile
index 27dc09d0bfee..19dd0b2ea397 100644
--- a/tools/testing/selftests/powerpc/mm/Makefile
+++ b/tools/testing/selftests/powerpc/mm/Makefile
@@ -3,7 +3,7 @@ noarg:
$(MAKE) -C ../
  
  TEST_GEN_PROGS := hugetlb_vs_thp_test subpage_prot prot_sao segv_errors wild_bctr \

- large_vm_fork_separation bad_accesses pkey_exec_prot \
+ large_vm_fork_separation bad_accesses exec_prot 
pkey_exec_prot \
  pkey_siginfo stack_expansion_signal stack_expansion_ldst \
  large_vm_gpr_corruption
  TEST_PROGS := stress_code_patching.sh
@@ -22,6 +22,7 @@ $(OUTPUT)/wild_bctr: CFLAGS += -m64
  $(OUTPUT)/large_vm_fork_separation: CFLAGS += -m64
  $(OUTPUT)/large_vm_gpr_corruption: CFLAGS += -m64
  $(OUTPUT)/bad_accesses: CFLAGS += -m64
+$(OUTPUT)/exec_prot: CFLAGS += -m64
  $(OUTPUT)/pkey_exec_prot: CFLAGS += -m64
  $(OUTPUT)/pkey_siginfo: CFLAGS += -m64
  
diff --git a/tools/testing/selftests/powerpc/mm/exec_prot.c b/tools/testing/selftests/powerpc/mm/exec_prot.c

new file mode 100644
index ..db75b2225de1
--- /dev/null
+++ b/tools/testing/selftests/powerpc/mm/exec_prot.c
@@ -0,0 +1,231 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright 2022, Nicholas Miehlbradt, IBM Corporation
+ * based on pkey_exec_prot.c
+ *
+ * Test if applying execute protection on pages works as expected.
+ */
+
+#define _GNU_SOURCE
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "pkeys.h"
+
+
+#define PPC_INST_NOP   0x60000000
+#define PPC_INST_TRAP  0x7fe00008
+#define PPC_INST_BLR   0x4e800020
+
+static volatile sig_atomic_t fault_code;
+static volatile sig_atomic_t remaining_faults;
+static volatile unsigned int *fault_addr;
+static unsigned long pgsize, numinsns;
+static unsigned int *insns;
+static bool pkeys_supported;
+
+static bool is_fault_expected(int fault_code)
+{
+   if (fault_code == SEGV_ACCERR)
+   return true;
+
+   /* Assume any pkey error is fine since pkey_exec_prot test covers them 
*/
+   if (fault_code == SEGV_PKUERR && pkeys_supported)
+   return true;
+
+   return false;
+}
+
+static void trap_handler(int signum, siginfo_t *sinfo, void *ctx)
+{
+   /* Check if this fault originated from the expected address */
+   if (sinfo->si_addr != (void *)fault_addr)
+   sigsafe_err("got a fault for an unexpected address\n");
+
+   _exit(1);
+}
+
+static void segv_handler(int signum, siginfo_t *sinfo, void *ctx)
+{
+   fault_code = sinfo->si_code;
+
+   /* Check if this fault originated from the expected address */
+   if (sinfo->si_addr != (void *)fault_addr) {
+   sigsafe_err("got a fault for an unexpected address\n");
+   _exit(1);
+   }
+
+   /* Check if too many faults have occurred for a single test case */
+   if (!remaining_faults) {
+   sigsafe_err("got too many faults for the same address\n");
+   _exit(1);
+   }
+
+
+   /* Restore permissions in order to continue */
+   if (is_fault_expected(fault_code)) {
+   if (mprotect(insns, pgsize, PROT_READ | PROT_WRITE | 
PROT_EXEC)) {
+   sigsafe_err("failed to set access permissions\n");
+   _exit(1);
+   }
+   } else {
+   sigsafe_err("got a fault with an unexpected code\n");
+   _exit(1);
+   }
+
+   remaining_faults--;
+}
+
+static int check_exec_fault(int rights)
+{
+   /*
+* Jump to the executable region.
+*
+* The first iteration also checks if the overwrite of the
+* first instructi

[PATCH] docs: powerpc: add POWER9 and POWER10 to CPU families

2022-08-09 Thread Nicholas Miehlbradt
Add POWER9 and POWER10 to CPU families and list Radix MMU.

Signed-off-by: Nicholas Miehlbradt 
---
 Documentation/powerpc/cpu_families.rst | 13 +
 1 file changed, 13 insertions(+)

diff --git a/Documentation/powerpc/cpu_families.rst 
b/Documentation/powerpc/cpu_families.rst
index 9b84e045e713..eb7e60649b43 100644
--- a/Documentation/powerpc/cpu_families.rst
+++ b/Documentation/powerpc/cpu_families.rst
@@ -10,6 +10,7 @@ Book3S (aka sPAPR)
 --
 
 - Hash MMU (except 603 and e300)
+- Radix MMU (POWER9 and later)
 - Software loaded TLB (603 and e300)
 - Selectable Software loaded TLB in addition to hash MMU (755, 7450, e600)
 - Mix of 32 & 64 bit::
@@ -100,6 +101,18 @@ Book3S (aka sPAPR)
   v
+--+
|POWER8|
+   +--+
+  |
+  |
+  v
+   +--+
+   |POWER9|
+   +--+
+  |
+  |
+  v
+   +--+
+   |   POWER10|
+--+
 
 
-- 
2.34.1