On Thu, Jan 15, 2026 at 10:50:13AM -0800, Andrii Nakryiko wrote:

SNIP

> > > diff --git a/tools/testing/selftests/bpf/benchs/bench_trigger.c b/tools/testing/selftests/bpf/benchs/bench_trigger.c
> > > index 34018fc3927f..aeec9edd3851 100644
> > > --- a/tools/testing/selftests/bpf/benchs/bench_trigger.c
> > > +++ b/tools/testing/selftests/bpf/benchs/bench_trigger.c
> > > @@ -146,6 +146,7 @@ static void setup_ctx(void)
> > >         bpf_program__set_autoload(ctx.skel->progs.trigger_driver, true);
> > >
> > >         ctx.skel->rodata->batch_iters = args.batch_iters;
> > > +       ctx.skel->rodata->stacktrace = env.stacktrace;
> > >  }
> > >
> > >  static void load_ctx(void)
> > > diff --git a/tools/testing/selftests/bpf/progs/trigger_bench.c b/tools/testing/selftests/bpf/progs/trigger_bench.c
> > > index 2898b3749d07..479400d96fa4 100644
> > > --- a/tools/testing/selftests/bpf/progs/trigger_bench.c
> > > +++ b/tools/testing/selftests/bpf/progs/trigger_bench.c
> > > @@ -25,6 +25,23 @@ static __always_inline void inc_counter(void)
> > >         __sync_add_and_fetch(&hits[cpu & CPU_MASK].value, 1);
> > >  }
> > >
> > > +volatile const int stacktrace;
> > > +
> > > +typedef __u64 stack_trace_t[128];
> > > +
> > > +struct {
> > > +       __uint(type, BPF_MAP_TYPE_STACK_TRACE);
> > > +       __uint(max_entries, 16384);
> > > +       __type(key, __u32);
> > > +       __type(value, stack_trace_t);
> > > +} stackmap SEC(".maps");
> 
> oh, and why bother with a STACK_TRACE map? Just call the bpf_get_stack()
> API and use a per-CPU scratch array for the stack trace (per-CPU so that
> in multi-CPU benchmarks they don't just contend on the same cache lines)

ok, will change
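
Something along these lines, maybe (untested sketch on top of
trigger_bench.c, assuming its existing includes and the stacktrace rodata
knob from this patch; the map and function names are just placeholders):

typedef __u64 stack_trace_t[128];

/* per-CPU scratch buffer instead of a STACK_TRACE map, so CPUs running
 * the benchmark in parallel don't contend on the same cache lines
 */
struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, stack_trace_t);
} stack_scratch SEC(".maps");

static __always_inline void collect_stack(void *ctx)
{
	__u32 zero = 0;
	void *buf;

	if (!stacktrace)
		return;

	/* look up this CPU's slot of the scratch array */
	buf = bpf_map_lookup_elem(&stack_scratch, &zero);
	if (!buf)
		return;

	/* capture the kernel stack into the per-CPU buffer */
	bpf_get_stack(ctx, buf, sizeof(stack_trace_t), 0);
}

and then call collect_stack(ctx) from the benchmark programs instead of
going through the stackmap.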

thanks,
jirka
