On Tue, Jul 10, 2018 at 12:43:22AM +0200, Daniel Borkmann wrote:
> Mark reported that syzkaller triggered a KASAN-detected slab-out-of-bounds
> bug in ___bpf_prog_run() with a BPF_LD | BPF_ABS word load at offset 0x8001.
> After further investigation it became clear that the issue was with
> BPF_LDX_MEM(), which takes offset as an argument but cannot encode offsets
> larger than S16_MAX. For this synthetic case we need to move the full
> address into the tmp register instead and do the LDX with an immediate
> offset of 0.
> 
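As an aside on the encoding limit itself: the insn off field is a signed
16-bit quantity, so 0x8001 cannot be carried in BPF_LDX_MEM() directly.
A minimal userspace sketch (mirroring the field layout documented in
include/uapi/linux/bpf.h rather than including the kernel header):

    /* sketch only, not kernel code */
    #include <stdint.h>
    #include <stdio.h>

    struct insn_sketch {
            uint8_t code;           /* opcode */
            uint8_t dst_reg:4;      /* destination register */
            uint8_t src_reg:4;      /* source register */
            int16_t off;            /* signed 16-bit offset */
            int32_t imm;            /* signed 32-bit immediate */
    };

    int main(void)
    {
            int offset = 0x8001;    /* offset from the syzkaller repro */
            struct insn_sketch insn = { .off = (int16_t)offset };

            printf("requested %#x, encoded off %d\n", offset, insn.off);
            return 0;
    }

This prints a negative off for 0x8001, hence the full-width add through
the tmp register in the patch below.
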
> Fixes: e0cea7ce988c ("bpf: implement ld_abs/ld_ind in native bpf")
> Reported-by: syzbot <syzkal...@googlegroups.com>
> Reported-by: Mark Rutland <mark.rutl...@arm.com>
> Signed-off-by: Daniel Borkmann <dan...@iogearbox.net>
> ---
>  net/core/filter.c | 16 +++++++++++++---
>  1 file changed, 13 insertions(+), 3 deletions(-)
> 
> diff --git a/net/core/filter.c b/net/core/filter.c
> index 5fa66a3..a13f5b1 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -459,11 +459,21 @@ static bool convert_bpf_ld_abs(struct sock_filter *fp, struct bpf_insn **insnp)
>            (!unaligned_ok && offset >= 0 &&
>             offset + ip_align >= 0 &&
>             offset + ip_align % size == 0))) {
> +             bool ldx_off_ok = offset <= S16_MAX;
> +

Given offset is a (signed) int, is it possible for that to be a negative
value less than S16_MIN? ... or is that ruled out elsewhere?

Thanks,
Mark.

>               *insn++ = BPF_MOV64_REG(BPF_REG_TMP, BPF_REG_H);
>               *insn++ = BPF_ALU64_IMM(BPF_SUB, BPF_REG_TMP, offset);
> -             *insn++ = BPF_JMP_IMM(BPF_JSLT, BPF_REG_TMP, size, 2 + endian);
> -             *insn++ = BPF_LDX_MEM(BPF_SIZE(fp->code), BPF_REG_A, BPF_REG_D,
> -                                   offset);
> +             *insn++ = BPF_JMP_IMM(BPF_JSLT, BPF_REG_TMP,
> +                                   size, 2 + endian + (!ldx_off_ok * 2));
> +             if (ldx_off_ok) {
> +                     *insn++ = BPF_LDX_MEM(BPF_SIZE(fp->code), BPF_REG_A,
> +                                           BPF_REG_D, offset);
> +             } else {
> +                     *insn++ = BPF_MOV64_REG(BPF_REG_TMP, BPF_REG_D);
> +                     *insn++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_TMP, offset);
> +                     *insn++ = BPF_LDX_MEM(BPF_SIZE(fp->code), BPF_REG_A,
> +                                           BPF_REG_TMP, 0);
> +             }
>               if (endian)
>                       *insn++ = BPF_ENDIAN(BPF_FROM_BE, BPF_REG_A, size * 8);
>               *insn++ = BPF_JMP_A(8);
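
For the jump target arithmetic, my reading of the emitted fast path in the
!ldx_off_ok case (a sketch, assuming endian == 0; as I read it, BPF_REG_H
holds the header length and BPF_REG_D the data pointer):

    BPF_MOV64_REG(TMP, H)              /* TMP = hlen                 */
    BPF_ALU64_IMM(SUB, TMP, offset)    /* TMP = hlen - offset        */
    BPF_JMP_IMM(JSLT, TMP, size, 4)    /* too short? skip 4 insns    */
    BPF_MOV64_REG(TMP, D)              /* 1: TMP = data              */
    BPF_ALU64_IMM(ADD, TMP, offset)    /* 2: TMP = data + offset     */
    BPF_LDX_MEM(size, A, TMP, 0)       /* 3: load with a zero offset */
    BPF_JMP_A(8)                       /* 4: jump over the slow path */

so the extra MOV/ADD pair is exactly the (!ldx_off_ok * 2) added to the
original 2 + endian target, which looks right to me.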
> -- 
> 2.9.5
> 
