Both of these functions end up calling ftrace_modify_code(), which is expensive because it changes the page tables and flushes caches. The microseconds add up because ftrace_modify_code() is called in a loop for each dyn_ftrace record, and that loop triggers the softlockup watchdog unless we let the task sleep occasionally. Rework so that we call cond_resched() before entering ftrace_modify_code().
Co-developed-by: Arnd Bergmann <[email protected]>
Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Anders Roxell <[email protected]>
---
 arch/arm64/kernel/ftrace.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
index de1a397d2d3f..9da38da58df7 100644
--- a/arch/arm64/kernel/ftrace.c
+++ b/arch/arm64/kernel/ftrace.c
@@ -130,6 +130,11 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 	old = aarch64_insn_gen_nop();
 	new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
 
+	/* This function can take a long time when sanitizers are enabled, so
+	 * let's make sure we allow RCU processing.
+	 */
+	cond_resched();
+
 	return ftrace_modify_code(pc, old, new, true);
 }
 
@@ -188,6 +193,11 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
 
 	new = aarch64_insn_gen_nop();
 
+	/* This function can take a long time when sanitizers are enabled, so
+	 * let's make sure we allow RCU processing.
+	 */
+	cond_resched();
+
 	return ftrace_modify_code(pc, old, new, validate);
 }
-- 
2.19.2

