> diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
> index 8334cd667bba..9f6ec00bd02e 100644
> --- a/arch/powerpc/net/bpf_jit.h
> +++ b/arch/powerpc/net/bpf_jit.h
> @@ -24,6 +24,7 @@
>
> #define SZL sizeof(unsigned long)
> #define BPF_INSN_SAFETY 64
> +#define BPF_PPC_TAILCALL 8
[ ... ]
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 5e976730b2f5..d5757577f933 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -604,8 +604,8 @@ static void bpf_trampoline_setup_tail_call_cnt(u32 *image, struct codegen_contex
> 						       int func_frame_offset, int r4_off)
> {
> if (IS_ENABLED(CONFIG_PPC64)) {
> - /* See bpf_jit_stack_tailcallcnt() */
> - int tailcallcnt_offset = 7 * 8;
> + /* See Generated stack layout */
> + int tailcallcnt_offset = BPF_PPC_TAILCALL;
Should bpf_trampoline_restore_tail_call_cnt() also be updated to use
BPF_PPC_TAILCALL? That function still has:
    int tailcallcnt_offset = 7 * 8;
The setup function stores tail_call_cnt at offset -8 (BPF_PPC_TAILCALL),
but the restore function still reads from offset -56 (7 * 8). Because the
two paths disagree on the slot, the restore would read from and write back
to the wrong stack location whenever BPF_TRAMP_F_TAIL_CALL_CTX is set.
I note that commit b18ae1c3dadf ("powerpc64/bpf: Avoid tailcall restore
from trampoline") later in this series removes the PPC64 path from
bpf_trampoline_restore_tail_call_cnt(), which would resolve this. However,
the intermediate commits between this patch and that fix would have broken
tail call counting in trampolines if a bisection landed on them.
[ ... ]
---
AI reviewed your patch. Please fix the bug or reply by email explaining why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21257691486