From: "Steven Rostedt (VMware)" <[email protected]>

If an ftrace callback does not supply its own recursion protection and
does not set the RECURSION_SAFE flag in its ftrace_ops, then ftrace will
use a helper trampoline to perform the recursion protection before
calling the callback, instead of calling the callback directly.

The default for ftrace_ops is going to change: ftrace will expect handlers
to provide their own recursion protection, unless their ftrace_ops states
otherwise. Add the recursion protection to the livepatch ftrace handler now,
using ftrace_test_recursion_trylock()/unlock(), so that it keeps working
once that default changes.
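
For reference, the pattern applied to klp_ftrace_handler() below is the
same one any callback without its own protection will need. A minimal
sketch (my_callback is a made-up name, and the pt_regs parameter reflects
the callback prototype of this kernel era; the header providing the
recursion helpers may differ between kernel versions):

  #include <linux/ftrace.h>

  static void notrace my_callback(unsigned long ip, unsigned long parent_ip,
                                  struct ftrace_ops *op, struct pt_regs *regs)
  {
          int bit;

          /* Returns a negative value if this context is already in the callback */
          bit = ftrace_test_recursion_trylock();
          if (bit < 0)
                  return;

          /* ... the actual work of the callback goes here ... */

          ftrace_test_recursion_unlock(bit);
  }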

Link: https://lkml.kernel.org/r/[email protected]

Cc: Masami Hiramatsu <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Jiri Kosina <[email protected]>
Cc: Miroslav Benes <[email protected]>
Cc: Joe Lawrence <[email protected]>
Cc: [email protected]
Reviewed-by: Petr Mladek <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
---
 kernel/livepatch/patch.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index b552cf2d85f8..6c0164d24bbd 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -45,9 +45,13 @@ static void notrace klp_ftrace_handler(unsigned long ip,
        struct klp_ops *ops;
        struct klp_func *func;
        int patch_state;
+       int bit;
 
        ops = container_of(fops, struct klp_ops, fops);
 
+       bit = ftrace_test_recursion_trylock();
+       if (bit < 0)
+               return;
        /*
         * A variant of synchronize_rcu() is used to allow patching functions
         * where RCU is not watching, see klp_synchronize_transition().
@@ -117,6 +121,7 @@ static void notrace klp_ftrace_handler(unsigned long ip,
 
 unlock:
        preempt_enable_notrace();
+       ftrace_test_recursion_unlock(bit);
 }
 
 /*
-- 
2.28.0
