On Sat, Jan 24, 2026 at 5:27 AM Changwoo Min <[email protected]> wrote:
>
> Add a new selftest suite `exe_ctx` to verify the accuracy of the
> bpf_in_task(), bpf_in_hardirq(), and bpf_in_serving_softirq() helpers
> introduced in bpf_experimental.h.
>
> Testing these execution contexts deterministically requires crossing
> context boundaries within a single CPU. To achieve this, the test
> implements a "Trigger-Observer" pattern using bpf_testmod:
>
> 1. Trigger: A BPF syscall program calls a new bpf_testmod kfunc
>    bpf_kfunc_trigger_ctx_check().
> 2. Task to HardIRQ: The kfunc uses irq_work_queue() to trigger a
>    self-IPI on the local CPU.
> 3. HardIRQ to SoftIRQ: The irq_work handler calls a dummy function
>    (observed by BPF fentry) and then schedules a tasklet to
>    transition into SoftIRQ context.
>
> The user-space runner ensures determinism by pinning itself to CPU 0
> before execution, forcing the entire interrupt chain to remain on a
> single core. Dummy noinline functions with compiler barriers are
> added to bpf_testmod.c to serve as stable attachment points for
> fentry programs. A retry loop is used in user-space to wait for the
> asynchronous SoftIRQ to complete.
>
> Signed-off-by: Changwoo Min <[email protected]>

...
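(The user-space runner is trimmed from the quote. Going by the commit message -- pin to CPU 0, run the syscall prog, then poll for the asynchronous softirq -- a rough sketch of that pattern could look like the following. This is illustrative only, not the actual patch: the skeleton name, subtest names and retry bounds are assumptions, based on the exe_ctx BPF object quoted below.)

/* Sketch only (not the trimmed patch): assumes an exe_ctx skeleton
 * generated from the BPF object quoted below and the usual test_progs
 * ASSERT_* helpers.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>
#include <test_progs.h>
#include "exe_ctx.skel.h"

void test_exe_ctx(void)
{
	LIBBPF_OPTS(bpf_test_run_opts, opts);
	struct exe_ctx *skel;
	cpu_set_t cpus;
	int i, err;

	/* Pin to CPU 0 so the self-IPI and the tasklet stay on one core. */
	CPU_ZERO(&cpus);
	CPU_SET(0, &cpus);
	if (!ASSERT_OK(sched_setaffinity(0, sizeof(cpus), &cpus), "pin_cpu0"))
		return;

	skel = exe_ctx__open_and_load();
	if (!ASSERT_OK_PTR(skel, "skel_open_load"))
		return;
	if (!ASSERT_OK(exe_ctx__attach(skel), "skel_attach"))
		goto out;

	/* Run the SEC("syscall") trigger prog on this (pinned) CPU. */
	err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.trigger_all_contexts),
				     &opts);
	ASSERT_OK(err, "prog_run");

	/* The softirq observer fires asynchronously; retry for a while. */
	for (i = 0; i < 100 && !skel->bss->count_softirq; i++)
		usleep(1000);

	ASSERT_GT(skel->bss->count_task, 0, "task_ok");
	ASSERT_GT(skel->bss->count_hardirq, 0, "hardirq_ok");
	ASSERT_GT(skel->bss->count_softirq, 0, "softirq_ok");
out:
	exe_ctx__destroy(skel);
}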

> +#include "vmlinux.h"
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_tracing.h>
> +#include "bpf_experimental.h"
> +
> +char _license[] SEC("license") = "GPL";
> +
> +extern void bpf_kfunc_trigger_ctx_check(void) __ksym;
> +
> +int count_hardirq;
> +int count_softirq;
> +int count_task;
> +
> +/* Triggered via bpf_prog_test_run from user-space */
> +SEC("syscall")
> +int trigger_all_contexts(void *ctx)
> +{
> +       if (bpf_in_task())
> +               __sync_fetch_and_add(&count_task, 1);
> +
> +       /* Trigger the firing of a hardirq and softirq for test. */
> +       bpf_kfunc_trigger_ctx_check();
> +       return 0;
> +}
> +
> +/* Observer for HardIRQ */
> +SEC("fentry/bpf_testmod_test_hardirq_fn")
> +int BPF_PROG(on_hardirq)
> +{
> +       if (bpf_in_hardirq())
> +               __sync_fetch_and_add(&count_hardirq, 1);
> +       return 0;
> +}
> +
> +/* Observer for SoftIRQ */
> +SEC("fentry/bpf_testmod_test_softirq_fn")
> +int BPF_PROG(on_softirq)
> +{
> +       if (bpf_in_serving_softirq())
> +               __sync_fetch_and_add(&count_softirq, 1);
> +       return 0;
> +}
> diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
> index d425034b72d3..1b04022859b7 100644
> --- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
> +++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
> @@ -1164,6 +1164,33 @@ __bpf_kfunc int bpf_kfunc_implicit_arg(int a, struct bpf_prog_aux *aux);
>  __bpf_kfunc int bpf_kfunc_implicit_arg_legacy(int a, int b, struct bpf_prog_aux *aux);
>  __bpf_kfunc int bpf_kfunc_implicit_arg_legacy_impl(int a, int b, struct bpf_prog_aux *aux);
>
> +/* hook targets */
> +noinline void bpf_testmod_test_hardirq_fn(void) { barrier(); }
> +noinline void bpf_testmod_test_softirq_fn(void) { barrier(); }
> +
> +/* Tasklet for SoftIRQ context */
> +static void ctx_check_tasklet_fn(struct tasklet_struct *t)
> +{
> +       bpf_testmod_test_softirq_fn();
> +}
> +
> +DECLARE_TASKLET(ctx_check_tasklet, ctx_check_tasklet_fn);
> +
> +/* IRQ Work for HardIRQ context */
> +static void ctx_check_irq_fn(struct irq_work *work)
> +{
> +       bpf_testmod_test_hardirq_fn();
> +       tasklet_schedule(&ctx_check_tasklet);
> +}
> +
> +static struct irq_work ctx_check_irq = IRQ_WORK_INIT_HARD(ctx_check_irq_fn);
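(The kfunc body itself is trimmed from the quote above; per the commit message it queues this irq_work to raise a self-IPI on the local CPU, so presumably something along these lines:)

/* Inferred from the commit message, not taken from the quoted diff:
 * queue the hard irq_work on the local CPU; its handler then schedules
 * the tasklet, completing the task -> hardirq -> softirq chain.
 */
__bpf_kfunc void bpf_kfunc_trigger_ctx_check(void)
{
	irq_work_queue(&ctx_check_irq);
}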

Nicely done! The selftests should work on PREEMPT_RT too,
though we don't enable it in bpf CI.

I was about to apply it, but the new test fails on s390:

test_exe_ctx:FAIL:hardirq_ok unexpected hardirq_ok: actual 0 <= expected 0
test_exe_ctx:FAIL:softirq_ok unexpected softirq_ok: actual 0 <= expected 0

The existing bpf_in_interrupt() also works on x86 and arm64 only.
When it was introduced it came with a pretty weak selftest,
commit 31329b6 ("selftests/bpf: Introduce experimental bpf_in_interrupt()"),
so it's not really testing anything on s390 and other non-x86/arm64 architectures.

So just add your strong selftests to DENYLIST.s390x.
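The denylist is just one test name per line with an optional comment,
so something like (exact name/wording up to you):

exe_ctx		# needs arch-specific preempt_count access, x86/arm64 only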
get_preempt_count() on s390 looks like this:
static __always_inline int preempt_count(void)
{
	return READ_ONCE(get_lowcore()->preempt_count) & ~PREEMPT_NEED_RESCHED;
}
but get_lowcore() needs asm.
So it's not going to be easy to make it work purely in bpf.
Let's punt it to people that care about s390.

pw-bot: cr
