Sandipan Das writes:
> Hi Jiong,
>
> On 05/12/18 2:25 AM, Jiong Wang wrote:
>> This patch implements code-gen for BPF_ALU | BPF_ARSH | BPF_*.
>>
>> Cc: Naveen N. Rao <naveen.n....@linux.ibm.com>
>> Cc: Sandipan Das <sandi...@linux.ibm.com>
>> Signed-off-by: Jiong Wang <jiong.w...@netronome.com>
>> ---
> [...]
>> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
>> index 17482f5..c685b4f 100644
>> --- a/arch/powerpc/net/bpf_jit_comp64.c
>> +++ b/arch/powerpc/net/bpf_jit_comp64.c
>> @@ -529,9 +529,15 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
>>  			if (imm != 0)
>>  				PPC_SRDI(dst_reg, dst_reg, imm);
>>  			break;
>> +		case BPF_ALU | BPF_ARSH | BPF_X: /* (s32) dst >>= src */
>> +			PPC_SRAW(dst_reg, dst_reg, src_reg);
>> +			break;
>
> On ppc64, the sraw and srawi instructions also use sign extension. So, you
> will have to ensure that the upper 32 bits are cleared. We already have a
> label in our JIT code called bpf_alu32_trunc that takes care of this.
> Replacing the break statement with a goto bpf_alu32_trunc will fix this.

(Resending this reply; I got a delivery failure notification from
netdev@vger.kernel.org.)

Indeed. I double-checked the ISA doc: "Bit 32 of RS is replicated to fill
RA0:31." Will fix both places in v2.

Thanks.

Regards,
Jiong

>> 		case BPF_ALU64 | BPF_ARSH | BPF_X: /* (s64) dst >>= src */
>> 			PPC_SRAD(dst_reg, dst_reg, src_reg);
>> 			break;
>> +		case BPF_ALU | BPF_ARSH | BPF_K: /* (s32) dst >>= imm */
>> +			PPC_SRAWI(dst_reg, dst_reg, imm);
>> +			break;
>
> Same here.
>
>> 		case BPF_ALU64 | BPF_ARSH | BPF_K: /* (s64) dst >>= imm */
>> 			if (imm != 0)
>> 				PPC_SRADI(dst_reg, dst_reg, imm);
>>
>
> With Regards,
> Sandipan