The kexec sequence invokes enter_vmx_ops() and exit_vmx_ops() with the
MMU disabled. In this context, code must not rely on normal virtual
address translations or trigger page faults.

With KASAN enabled, these functions get instrumented and may access
shadow memory using regular address translation. When executed with the
MMU off, this can lead to page faults (bad_page_fault) from which the
kernel cannot recover in the kexec path, resulting in a hang.

Mark enter_vmx_ops() and exit_vmx_ops() with __no_sanitize_address to
avoid KASAN instrumentation and ensure kexec boots fine with KASAN
enabled.

Cc: Aditya Gupta <[email protected]>
Cc: Daniel Axtens <[email protected]>
Cc: Hari Bathini <[email protected]>
Cc: Madhavan Srinivasan <[email protected]>
Cc: Mahesh Salgaonkar <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Ritesh Harjani (IBM) <[email protected]>
Cc: Shivang Upadhyay <[email protected]>
Cc: Venkat Rao Bagalkote <[email protected]>
Reported-by: Aboorva Devarajan <[email protected]>
Signed-off-by: Sourabh Jain <[email protected]>
---
 arch/powerpc/lib/vmx-helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
index 554b248002b4..c01b2d856650 100644
--- a/arch/powerpc/lib/vmx-helper.c
+++ b/arch/powerpc/lib/vmx-helper.c
@@ -52,7 +52,7 @@ int exit_vmx_usercopy(void)
 }
 EXPORT_SYMBOL(exit_vmx_usercopy);
 
-int enter_vmx_ops(void)
+int __no_sanitize_address enter_vmx_ops(void)
 {
 	if (in_interrupt())
 		return 0;
@@ -69,7 +69,7 @@ int enter_vmx_ops(void)
  * passed a pointer to the destination which we return as required by a
  * memcpy implementation.
  */
-void *exit_vmx_ops(void *dest)
+void __no_sanitize_address *exit_vmx_ops(void *dest)
 {
 	disable_kernel_altivec();
 	preempt_enable();
-- 
2.52.0
