On Tue, 21 May 2024 18:38:43 -0700
Andrii Nakryiko <[email protected]> wrote:

> When kernel has pending uretprobes installed, it hijacks original user
> function return address on the stack with a uretprobe trampoline
> address. There could be multiple such pending uretprobes (either on
> different user functions or on the same recursive one) at any given
> time within the same task.
> 
> This approach interferes with the user stack trace capture logic, which
> would report surprising addresses (like 0x7fffffffe000) that correspond
> to the special "[uprobes]" section the kernel installs in the target
> process address space for uretprobe trampoline code, while logically
> each such entry should be an address somewhere within the caller of the
> traced user function.
> 
> This is easy to correct for, though. The uprobes subsystem keeps track
> of pending uretprobes and records the original return addresses. This
> patch uses that information in a post-processing step to restore each
> trampoline address entry to the correct original return address. This
> is done only if there are pending uretprobes for the current task.
> 
> This is a similar approach to what the fprobe/kretprobe infrastructure
> does when capturing kernel stack traces in the presence of pending
> return probes.
> 

This looks good to me, because this trampoline information is only
managed in uprobes, and it should be provided when unwinding the user
stack.

Reviewed-by: Masami Hiramatsu (Google) <[email protected]>

Thank you!

> Reported-by: Riham Selim <[email protected]>
> Signed-off-by: Andrii Nakryiko <[email protected]>
> ---
>  kernel/events/callchain.c | 43 ++++++++++++++++++++++++++++++++++++++-
>  kernel/events/uprobes.c   |  9 ++++++++
>  2 files changed, 51 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/events/callchain.c b/kernel/events/callchain.c
> index 1273be84392c..b17e3323f7f6 100644
> --- a/kernel/events/callchain.c
> +++ b/kernel/events/callchain.c
> @@ -11,6 +11,7 @@
>  #include <linux/perf_event.h>
>  #include <linux/slab.h>
>  #include <linux/sched/task_stack.h>
> +#include <linux/uprobes.h>
>  
>  #include "internal.h"
>  
> @@ -176,13 +177,51 @@ put_callchain_entry(int rctx)
>       put_recursion_context(this_cpu_ptr(callchain_recursion), rctx);
>  }
>  
> +static void fixup_uretprobe_trampoline_entries(struct perf_callchain_entry *entry,
> +                                            int start_entry_idx)
> +{
> +#ifdef CONFIG_UPROBES
> +     struct uprobe_task *utask = current->utask;
> +     struct return_instance *ri;
> +     __u64 *cur_ip, *last_ip, tramp_addr;
> +
> +     if (likely(!utask || !utask->return_instances))
> +             return;
> +
> +     cur_ip = &entry->ip[start_entry_idx];
> +     last_ip = &entry->ip[entry->nr - 1];
> +     ri = utask->return_instances;
> +     tramp_addr = uprobe_get_trampoline_vaddr();
> +
> +     /*
> +      * If there are pending uretprobes for the current thread, they are
> +      * recorded in a list inside utask->return_instances; each such
> +      * pending uretprobe replaces traced user function's return address on
> +      * the stack, so when stack trace is captured, instead of seeing
> +      * actual function's return address, we'll have one or many uretprobe
> +      * trampoline addresses in the stack trace, which are not helpful and
> +      * misleading to users.
> +      * So here we go over the pending list of uretprobes, and each
> +      * encountered trampoline address is replaced with actual return
> +      * address.
> +      */
> +     while (ri && cur_ip <= last_ip) {
> +             if (*cur_ip == tramp_addr) {
> +                     *cur_ip = ri->orig_ret_vaddr;
> +                     ri = ri->next;
> +             }
> +             cur_ip++;
> +     }
> +#endif
> +}
> +
>  struct perf_callchain_entry *
>  get_perf_callchain(struct pt_regs *regs, u32 init_nr, bool kernel, bool user,
>                  u32 max_stack, bool crosstask, bool add_mark)
>  {
>       struct perf_callchain_entry *entry;
>       struct perf_callchain_entry_ctx ctx;
> -     int rctx;
> +     int rctx, start_entry_idx;
>  
>       entry = get_callchain_entry(&rctx);
>       if (!entry)
> @@ -215,7 +254,9 @@ get_perf_callchain(struct pt_regs *regs, u32 init_nr, bool kernel, bool user,
>                       if (add_mark)
>                               perf_callchain_store_context(&ctx, PERF_CONTEXT_USER);
>  
> +                     start_entry_idx = entry->nr;
>                       perf_callchain_user(&ctx, regs);
> +                     fixup_uretprobe_trampoline_entries(entry, start_entry_idx);
>               }
>       }
>  
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index d60d24f0f2f4..1c99380dc89d 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
> @@ -2149,6 +2149,15 @@ static void handle_trampoline(struct pt_regs *regs)
>  
>               instruction_pointer_set(regs, ri->orig_ret_vaddr);
>               do {
> +                     /* pop current instance from the stack of pending return instances,
> +                      * as it's not pending anymore: we just fixed up original
> +                      * instruction pointer in regs and are about to call handlers;
> +                      * this allows fixup_uretprobe_trampoline_entries() to properly fix up
> +                      * captured stack traces from uretprobe handlers, in which pending
> +                      * trampoline addresses on the stack are replaced with correct
> +                      * original return addresses
> +                      */
> +                     utask->return_instances = ri->next;
>                       if (valid)
>                               handle_uretprobe_chain(ri, regs);
>                       ri = free_ret_instance(ri);
> -- 
> 2.43.0
> 


-- 
Masami Hiramatsu (Google) <[email protected]>
