On 17/11/2025 at 07:52, Saket Kumar Bhaskar wrote:
Inline the calls to bpf_get_smp_processor_id()/bpf_get_current_task()
in the powerpc bpf jit.

powerpc saves the logical processor number (paca_index) and the pointer
to the current task (__current) in the paca.
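
For reference, an abridged sketch of the two fields in question (struct
paca_struct is defined in arch/powerpc/include/asm/paca.h; surrounding
members and exact offsets are elided here):

    struct paca_struct {
            /* ... */
            u16 paca_index;                 /* logical processor number */
            /* ... */
            struct task_struct *__current;  /* pointer to current */
            /* ... */
    };

    /* On ppc64, r13 always points to the paca of the current CPU. */
    register struct paca_struct *local_paca asm("r13");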

Here is how the powerpc JITed assembly changes after this commit:

Before:

cpu = bpf_get_smp_processor_id();

addis 12, 2, -517
addi 12, 12, -29456
mtctr 12
bctrl
mr      8, 3

After:

cpu = bpf_get_smp_processor_id();

lhz 8, 8(13)
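
In C terms, the single load simply reads the field off the paca that r13
points to (a sketch; r8 is the register this JIT maps BPF R0 to, as the
"mr 8, 3" above also shows):

    /* lhz r8, 8(r13): zero-extending halfword load of paca->paca_index */
    cpu = local_paca->paca_index;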

To evaluate the performance improvements introduced by this change,
the benchmark described in [1] was employed.

+---------------+-------------------+-------------------+--------------+
|      Name     |      Before       |        After      |   % change   |
|---------------+-------------------+-------------------+--------------|
| glob-arr-inc  | 40.701 ± 0.008M/s | 55.207 ± 0.021M/s |   + 35.64%   |
| arr-inc       | 39.401 ± 0.007M/s | 56.275 ± 0.023M/s |   + 42.42%   |
| hash-inc      | 24.944 ± 0.004M/s | 26.212 ± 0.003M/s |   +  5.08%   |
+---------------+-------------------+-------------------+--------------+

[1] https://github.com/anakryiko/linux/commit/8dec900975ef

Signed-off-by: Saket Kumar Bhaskar <[email protected]>
---
  arch/powerpc/net/bpf_jit_comp.c   | 11 +++++++++++
  arch/powerpc/net/bpf_jit_comp64.c | 10 ++++++++++
  2 files changed, 21 insertions(+)

diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 2f2230ae2145..c88dfa1418ec 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -471,6 +471,17 @@ bool bpf_jit_supports_percpu_insn(void)
        return IS_ENABLED(CONFIG_PPC64);
  }
+bool bpf_jit_inlines_helper_call(s32 imm)
+{
+       switch (imm) {
+       case BPF_FUNC_get_smp_processor_id:
+       case BPF_FUNC_get_current_task:

What about BPF_FUNC_get_current_task_btf? (A sketch with that case added follows this hunk.)

+               return true;
+       default:
+               return false;
+       }
+}
+
  void *arch_alloc_bpf_trampoline(unsigned int size)
  {
        return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
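
For illustration, the same switch with the case above added (a sketch only,
not part of the posted patch, assuming bpf_get_current_task_btf() should be
inlined the same way since it returns the same task pointer):

    bool bpf_jit_inlines_helper_call(s32 imm)
    {
            switch (imm) {
            case BPF_FUNC_get_smp_processor_id:
            case BPF_FUNC_get_current_task:
            case BPF_FUNC_get_current_task_btf:
                    return true;
            default:
                    return false;
            }
    }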
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 21486706b5ea..4e1643422370 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -1399,6 +1399,16 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
                case BPF_JMP | BPF_CALL:
                        ctx->seen |= SEEN_FUNC;
+                       if (insn[i].src_reg == BPF_REG_0) {

Are you sure you want to use BPF_REG_0 here? Is it the correct meaning? I see RISCV and ARM64 use 0 instead.

If you keep BPF_REG_0, I would have a preference for

                if (src_reg == bpf_to_ppc(BPF_REG_0))

+                               if (imm == BPF_FUNC_get_smp_processor_id) {
+                                       EMIT(PPC_RAW_LHZ(insn[i].src_reg, _R13, offsetof(struct paca_struct, paca_index)));

This looks wrong: you can't use insn[i].src_reg to emit powerpc instructions; you must use the local src_reg, which converts the register ID with bpf_to_ppc(). (A combined sketch follows the quoted hunk below.)

+                                       break;
+                               } else if (imm == BPF_FUNC_get_current_task) {
+                                       EMIT(PPC_RAW_LD(insn[i].src_reg, _R13, offsetof(struct paca_struct, __current)));

Same here.

+                                       break;
+                               }
+                       }
+
                        ret = bpf_jit_get_func_addr(fp, &insn[i], extra_pass,
                                                    &func_addr, &func_addr_fixed);
                        if (ret < 0)
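
Putting the two remarks together, one possible shape for the inlined
emission, as a sketch only: detect a plain helper call with src_reg == 0
(as RISCV and ARM64 do) and write the result into bpf_to_ppc(BPF_REG_0),
which is where the helper's return value must land:

    case BPF_JMP | BPF_CALL:
            ctx->seen |= SEEN_FUNC;

            /* src_reg == 0 marks a plain helper call (as opposed to
             * BPF_PSEUDO_CALL / BPF_PSEUDO_KFUNC_CALL); the result of
             * the helper goes into BPF R0, i.e. bpf_to_ppc(BPF_REG_0).
             */
            if (insn[i].src_reg == 0) {
                    if (imm == BPF_FUNC_get_smp_processor_id) {
                            EMIT(PPC_RAW_LHZ(bpf_to_ppc(BPF_REG_0), _R13,
                                             offsetof(struct paca_struct, paca_index)));
                            break;
                    } else if (imm == BPF_FUNC_get_current_task) {
                            EMIT(PPC_RAW_LD(bpf_to_ppc(BPF_REG_0), _R13,
                                            offsetof(struct paca_struct, __current)));
                            break;
                    }
            }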

