On Mon, Mar 16, 2026 at 08:35:15AM +0000, [email protected] wrote:
> > The detach works as follows:
> > - we lock all the needed trampolines
> > - remove the program from each (__bpf_trampoline_unlink_prog)
> > - the trampoline_multi_ops passed in __bpf_trampoline_link_prog gathers
> >   ftrace_hash (ip -> trampoline) objects
> 
> The detach description references __bpf_trampoline_link_prog, but the
> detach path calls __bpf_trampoline_unlink_prog.

ack

> 
> > diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> > index d55651b13511..9331cca8c0b4 100644
> > --- a/kernel/bpf/trampoline.c
> > +++ b/kernel/bpf/trampoline.c
> 
> [ ... ]
> 
> > +static int modify_fentry_multi(struct bpf_trampoline *tr, u32 orig_flags, void *old_addr,
> > +			       void *new_addr, bool lock_direct_mutex, void *ptr)
> > +{
> > +   unsigned long addr = (unsigned long) new_addr;
> > +   unsigned long ip = ftrace_location(tr->ip);
> > +   struct fentry_multi_data *data = ptr;
> > +
> > +   if (bpf_trampoline_use_jmp(tr->flags))
> > +           addr = ftrace_jmp_set(addr);
> > +	return add_ftrace_hash_entry_direct(data->modify, ip, addr) ? 0 : -ENOMEM;
> > +}
> 
> Unlike the standard modify_fentry() which calls direct_ops_mod() ->
> update_ftrace_direct_mod() to update ftrace immediately,
> modify_fentry_multi() only adds a hash entry.  The actual ftrace
> update is deferred until after the link loop.
> 
> This matters because bpf_trampoline_update() unconditionally calls
> bpf_tramp_image_put() on the old cur_image after a successful
> modify_fentry callback:
> 
>     bpf_trampoline_update() {
>         ...
>         err = ops->modify_fentry(tr, ...);
>         ...
>         if (tr->cur_image)
>             bpf_tramp_image_put(tr->cur_image);
>         tr->cur_image = im;
>     }
> 
> With the standard ops, the old image is already unreachable when put.
> With multi ops, ftrace still points to the old image when it is put.
> bpf_tramp_image_put() text-pokes the old image to skip fexit programs
> and starts deferred freeing.
> 
> In the success path, the window between image_put and the batched
> update_ftrace_direct_mod is brief (bounded by trampoline_lock_all).
> 
> However, in the rollback path below, ftrace is never updated at all:
> 
> [ ... ]
> 
> > +   trampoline_lock_all();
> > +
> > +   for (i = 0; i < cnt; i++) {
> > +           mnode = &link->nodes[i];
> > +		err = __bpf_trampoline_link_prog(&mnode->node, mnode->trampoline, NULL,
> > +						 &trampoline_multi_ops, &data);
> > +           if (err)
> > +                   goto rollback_unlink;
> > +   }
> 
> [ ... ]
> 
> > +rollback_unlink:
> > +   for (j = 0; j < i; j++) {
> > +           mnode = &link->nodes[j];
> > +		WARN_ON_ONCE(__bpf_trampoline_unlink_prog(&mnode->node, mnode->trampoline,
> > +					NULL, &trampoline_multi_ops, &data));
> > +   }
> > +   trampoline_unlock_all();
> > +
> > +   i = cnt;
> 
> If a trampoline at index 0 already had programs attached through the
> standard single-trampoline path, and __bpf_trampoline_link_prog
> succeeds for it but fails for a later trampoline, the rollback path
> restores tr->cur_image to a correct new image but never calls
> update_ftrace_direct_mod.
> 
> At that point, ftrace still routes calls through the old image that
> was put during the link phase.  bpf_tramp_image_put() has already
> patched that old image to skip fexit programs and started the deferred
> freeing sequence (percpu_ref_kill).
> 
> Could this leave existing fexit/fmod_ret programs on that trampoline
> non-functional?  And once the percpu_ref drains during a quiet period,
> would the old image be freed while ftrace still references it?

good catch, we need the update_ftrace_direct_mod call after the unlink loop,
I changed the rollback test to hit the described issue, will fix
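
something along these lines (untested sketch, the update_ftrace_direct_mod
arguments are a guess here, they depend on how the direct ops object is
plumbed through in the series):

	rollback_unlink:
		for (j = 0; j < i; j++) {
			mnode = &link->nodes[j];
			WARN_ON_ONCE(__bpf_trampoline_unlink_prog(&mnode->node, mnode->trampoline,
						NULL, &trampoline_multi_ops, &data));
		}
		/* the link loop already put the old images via bpf_tramp_image_put,
		 * so flush the gathered ip -> trampoline entries before dropping
		 * the locks, otherwise ftrace keeps routing calls into images
		 * whose deferred freeing has already started */
		if (data.modify)
			WARN_ON_ONCE(update_ftrace_direct_mod(data.modify));
		trampoline_unlock_all();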

> 
> [ ... ]
> 
> > +int bpf_trampoline_multi_detach(struct bpf_prog *prog, struct bpf_tracing_multi_link *link)
> > +{
> > +   struct bpf_tracing_multi_node *mnode;
> > +   struct fentry_multi_data data = {};
> > +   int i, cnt = link->nodes_cnt;
> > +
> > +   data.unreg  = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
> > +   data.modify = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
> > +
> > +   if (!data.unreg || !data.modify) {
> > +           free_fentry_multi_data(&data);
> > +           return -ENOMEM;
> > +   }
> 
> bpf_trampoline_multi_detach can return -ENOMEM here without detaching
> anything.  In e80a839e3d2c ("bpf: Add support for tracing multi
> link"), this is called from bpf_tracing_multi_link_release():
> 
>     WARN_ON_ONCE(bpf_trampoline_multi_detach(link->prog, tr_link));
> 
> If detach fails, the programs remain linked to the trampolines via
> mnode->node.tramp_hlist, but the link (and its embedded nodes array)
> is freed by bpf_tracing_multi_link_dealloc().  Would this leave
> dangling hlist entries in each trampoline's progs_hlist?

not sure there's anything useful we could do if allocation fails

jirka
