Re: [PATCH net-next] bpf: Detect identical PTR_TO_MAP_VALUE_OR_NULL registers

2017-03-16 Thread Daniel Borkmann

On 03/16/2017 07:23 PM, David Miller wrote:

From: Daniel Borkmann 
Date: Thu, 16 Mar 2017 11:32:31 +0100


Do you have a chance to queue this one and its follow-up fixes
for -stable? I checked 4.10, and it seems to have them already, but
the 4.9 kernels do not.

Commits:

   6760bf2 bpf: fix mark_reg_unknown_value for spilled regs on map value marking
   a08dd0d bpf: fix regression on verifier pruning wrt map lookups
   d2a4dd3 bpf: fix state equivalence
   57a09bf bpf: Detect identical PTR_TO_MAP_VALUE_OR_NULL registers


Ok, queued up.


Great, thanks!


Re: [PATCH net-next] bpf: Detect identical PTR_TO_MAP_VALUE_OR_NULL registers

2017-03-16 Thread David Miller
From: Daniel Borkmann 
Date: Thu, 16 Mar 2017 11:32:31 +0100

> Do you have a chance to queue this one and its follow-up fixes
> for -stable? I checked 4.10, and it seems to have them already, but
> the 4.9 kernels do not.
> 
> Commits:
> 
>   6760bf2 bpf: fix mark_reg_unknown_value for spilled regs on map value marking
>   a08dd0d bpf: fix regression on verifier pruning wrt map lookups
>   d2a4dd3 bpf: fix state equivalence
>   57a09bf bpf: Detect identical PTR_TO_MAP_VALUE_OR_NULL registers

Ok, queued up.


Re: [PATCH net-next] bpf: Detect identical PTR_TO_MAP_VALUE_OR_NULL registers

2017-03-16 Thread Daniel Borkmann

Hi Dave,

On 10/19/2016 05:09 PM, David Miller wrote:

From: Thomas Graf 
Date: Tue, 18 Oct 2016 19:51:19 +0200


A BPF program is required to check the return register of a
map_elem_lookup() call before accessing memory. The verifier keeps
track of this by converting the type of the result register from
PTR_TO_MAP_VALUE_OR_NULL to PTR_TO_MAP_VALUE after a conditional
jump ensures safety. This check is currently exclusively performed
for the result register 0.

In the event the compiler reorders instructions, BPF_MOV64_REG
instructions may be moved before the conditional jump which causes
them to keep their type PTR_TO_MAP_VALUE_OR_NULL to which the
verifier objects when the register is accessed:

0: (b7) r1 = 10
1: (7b) *(u64 *)(r10 -8) = r1
2: (bf) r2 = r10
3: (07) r2 += -8
4: (18) r1 = 0x59c0
6: (85) call 1
7: (bf) r4 = r0
8: (15) if r0 == 0x0 goto pc+1
  R0=map_value(ks=8,vs=8) R4=map_value_or_null(ks=8,vs=8) R10=fp
9: (7a) *(u64 *)(r4 +0) = 0
R4 invalid mem access 'map_value_or_null'

This commit extends the verifier to keep track of all identical
PTR_TO_MAP_VALUE_OR_NULL registers after a map_elem_lookup() by
assigning them an ID and then marking them all when the conditional
jump is observed.

Signed-off-by: Thomas Graf 
Reviewed-by: Josef Bacik 
Acked-by: Daniel Borkmann 
Acked-by: Alexei Starovoitov 


Applied, thanks Thomas.


Do you have a chance to queue this one and its follow-up fixes
for -stable? I checked 4.10, and it seems to have them already, but
the 4.9 kernels do not.

Commits:

  6760bf2 bpf: fix mark_reg_unknown_value for spilled regs on map value marking
  a08dd0d bpf: fix regression on verifier pruning wrt map lookups
  d2a4dd3 bpf: fix state equivalence
  57a09bf bpf: Detect identical PTR_TO_MAP_VALUE_OR_NULL registers

The reason is that newer llvm/clang versions (4.0+) apparently tend
to generate such patterns more often, which the older verifiers don't
understand and therefore, unfortunately, reject.

We ran into a slightly different variation of the above ...

  [...]
  1817: (85) call 1   // bpf_map_lookup_elem
  1818: (18) r8 = 0xff68
  1820: (7b) *(u64 *)(r10 -152) = r0
  1821: (15) if r0 == 0x0 goto pc-1718
   R0=map_value(ks=4,vs=104) R6=ctx R7=imm2 R8=inv R9=inv R10=fp fp-152=map_value_or_null
  1822: (79) r2 = *(u64 *)(r10 -152)
  1823: (79) r1 = *(u64 *)(r2 +8)
   R2 invalid mem access 'map_value_or_null'
  Error fetching program/map!

... where r0 is spilled to the stack right before the NULL test, and
the verifier does not migrate the fp-152=map_value_or_null slot to a
map_value type as it should (and as newer kernels do, thanks to the
matching id markings there). Hence, the later access through r2 gets
rejected because the slot still has type map_value_or_null.
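
For reference, newer kernels also retype spilled copies of the lookup
result when the NULL test is seen, not just the registers. Below is a
minimal sketch of that step, not the exact upstream code; the field
names (stack_slot_type, spilled_regs) follow the 4.9-era verifier
layout and are assumptions here:

  /* Sketch only: once the NULL test on a lookup result is observed, walk
   * the stack slots of the verifier state and retype any spilled copy that
   * carries the same id, mirroring what mark_map_reg() does for registers.
   */
  static void mark_spilled_map_regs(struct bpf_verifier_state *state,
                                    u32 id, enum bpf_reg_type type)
  {
          int i;

          for (i = 0; i < MAX_BPF_STACK; i += BPF_REG_SIZE) {
                  struct bpf_reg_state *spilled;

                  if (state->stack_slot_type[i] != STACK_SPILL)
                          continue;
                  spilled = &state->spilled_regs[i / BPF_REG_SIZE];
                  if (spilled->type == PTR_TO_MAP_VALUE_OR_NULL &&
                      spilled->id == id)
                          spilled->type = type; /* PTR_TO_MAP_VALUE or UNKNOWN_VALUE */
          }
  }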

Thanks,
Daniel


Re: [PATCH net-next] bpf: Detect identical PTR_TO_MAP_VALUE_OR_NULL registers

2016-10-19 Thread David Miller
From: Thomas Graf 
Date: Tue, 18 Oct 2016 19:51:19 +0200

> A BPF program is required to check the return register of a
> map_elem_lookup() call before accessing memory. The verifier keeps
> track of this by converting the type of the result register from
> PTR_TO_MAP_VALUE_OR_NULL to PTR_TO_MAP_VALUE after a conditional
> jump ensures safety. This check is currently exclusively performed
> for the result register 0.
> 
> In the event the compiler reorders instructions, BPF_MOV64_REG
> instructions may be moved before the conditional jump which causes
> them to keep their type PTR_TO_MAP_VALUE_OR_NULL to which the
> verifier objects when the register is accessed:
> 
> 0: (b7) r1 = 10
> 1: (7b) *(u64 *)(r10 -8) = r1
> 2: (bf) r2 = r10
> 3: (07) r2 += -8
> 4: (18) r1 = 0x59c0
> 6: (85) call 1
> 7: (bf) r4 = r0
> 8: (15) if r0 == 0x0 goto pc+1
>  R0=map_value(ks=8,vs=8) R4=map_value_or_null(ks=8,vs=8) R10=fp
> 9: (7a) *(u64 *)(r4 +0) = 0
> R4 invalid mem access 'map_value_or_null'
> 
> This commit extends the verifier to keep track of all identical
> PTR_TO_MAP_VALUE_OR_NULL registers after a map_elem_lookup() by
> assigning them an ID and then marking them all when the conditional
> jump is observed.
> 
> Signed-off-by: Thomas Graf 
> Reviewed-by: Josef Bacik 
> Acked-by: Daniel Borkmann 
> Acked-by: Alexei Starovoitov 

Applied, thanks Thomas.


[PATCH net-next] bpf: Detect identical PTR_TO_MAP_VALUE_OR_NULL registers

2016-10-18 Thread Thomas Graf
A BPF program is required to check the return register of a
map_elem_lookup() call before accessing memory. The verifier keeps
track of this by converting the type of the result register from
PTR_TO_MAP_VALUE_OR_NULL to PTR_TO_MAP_VALUE after a conditional
jump ensures safety. This check is currently exclusively performed
for the result register 0.

In the event the compiler reorders instructions, BPF_MOV64_REG
instructions may be moved before the conditional jump which causes
them to keep their type PTR_TO_MAP_VALUE_OR_NULL to which the
verifier objects when the register is accessed:

0: (b7) r1 = 10
1: (7b) *(u64 *)(r10 -8) = r1
2: (bf) r2 = r10
3: (07) r2 += -8
4: (18) r1 = 0x59c0
6: (85) call 1
7: (bf) r4 = r0
8: (15) if r0 == 0x0 goto pc+1
 R0=map_value(ks=8,vs=8) R4=map_value_or_null(ks=8,vs=8) R10=fp
9: (7a) *(u64 *)(r4 +0) = 0
R4 invalid mem access 'map_value_or_null'

This commit extends the verifier to keep track of all identical
PTR_TO_MAP_VALUE_OR_NULL registers after a map_elem_lookup() by
assigning them an ID and then marking them all when the conditional
jump is observed.

Signed-off-by: Thomas Graf 
Reviewed-by: Josef Bacik 
Acked-by: Daniel Borkmann 
Acked-by: Alexei Starovoitov 
---
 include/linux/bpf_verifier.h                |  2 +-
 kernel/bpf/verifier.c                       | 61 +---
 tools/testing/selftests/bpf/test_verifier.c | 72 +
 3 files changed, 118 insertions(+), 17 deletions(-)
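
(Not part of the patch, for illustration only: a minimal BPF C program
in the spirit of the dump in the commit message above, where the
compiler is free to copy the lookup result into another register before
the NULL test. The samples/bpf-style map definition, section name and
header are assumptions, not taken from this submission.)

  #include <linux/bpf.h>
  #include "bpf_helpers.h"      /* samples/bpf helper macros; assumed build environment */

  struct bpf_map_def SEC("maps") test_map = {
          .type        = BPF_MAP_TYPE_HASH,
          .key_size    = sizeof(__u64),   /* ks=8 as in the dump */
          .value_size  = sizeof(__u64),   /* vs=8 as in the dump */
          .max_entries = 1,
  };

  SEC("socket")
  int lookup_example(struct __sk_buff *skb)
  {
          __u64 key = 10, *value;

          /* The result lands in r0; the compiler may emit a BPF_MOV64_REG
           * copy of it (r4 in the dump) before the NULL check below.
           */
          value = bpf_map_lookup_elem(&test_map, &key);
          if (!value)
                  return 0;
          *value = 0;     /* only safe once the verifier has seen the check */
          return 0;
  }

  char _license[] SEC("license") = "GPL";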

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 7035b99..ac5b393 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -23,13 +23,13 @@ struct bpf_reg_state {
 * result in a bad access.
 */
u64 min_value, max_value;
+   u32 id;
union {
		/* valid when type == CONST_IMM | PTR_TO_STACK | UNKNOWN_VALUE */
s64 imm;
 
/* valid when type == PTR_TO_PACKET* */
struct {
-   u32 id;
u16 off;
u16 range;
};
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 99a7e5b..846d7ce 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -212,9 +212,10 @@ static void print_verifier_state(struct bpf_verifier_state *state)
else if (t == CONST_PTR_TO_MAP || t == PTR_TO_MAP_VALUE ||
 t == PTR_TO_MAP_VALUE_OR_NULL ||
 t == PTR_TO_MAP_VALUE_ADJ)
-   verbose("(ks=%d,vs=%d)",
+   verbose("(ks=%d,vs=%d,id=%u)",
reg->map_ptr->key_size,
-   reg->map_ptr->value_size);
+   reg->map_ptr->value_size,
+   reg->id);
if (reg->min_value != BPF_REGISTER_MIN_RANGE)
verbose(",min_value=%llu",
(unsigned long long)reg->min_value);
@@ -447,6 +448,7 @@ static void mark_reg_unknown_value(struct bpf_reg_state *regs, u32 regno)
 {
BUG_ON(regno >= MAX_BPF_REG);
regs[regno].type = UNKNOWN_VALUE;
+   regs[regno].id = 0;
regs[regno].imm = 0;
 }
 
@@ -1252,6 +1254,7 @@ static int check_call(struct bpf_verifier_env *env, int func_id)
return -EINVAL;
}
regs[BPF_REG_0].map_ptr = meta.map_ptr;
+   regs[BPF_REG_0].id = ++env->id_gen;
} else {
verbose("unknown return type %d of func %d\n",
fn->ret_type, func_id);
@@ -1644,8 +1647,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
insn->src_reg);
return -EACCES;
}
-   regs[insn->dst_reg].type = UNKNOWN_VALUE;
-   regs[insn->dst_reg].map_ptr = NULL;
+   mark_reg_unknown_value(regs, insn->dst_reg);
}
} else {
/* case: R = imm
@@ -1907,6 +1909,38 @@ static void reg_set_min_max_inv(struct bpf_reg_state *true_reg,
check_reg_overflow(true_reg);
 }
 
+static void mark_map_reg(struct bpf_reg_state *regs, u32 regno, u32 id,
+enum bpf_reg_type type)
+{
+   struct bpf_reg_state *reg = &regs[regno];
+
+   if (reg->type == PTR_TO_MAP_VALUE_OR_NULL && reg->id == id) {
+   reg->type = type;
+   if (type == UNKNOWN_VALUE)
+   mark_reg_unknown_value(regs, regno);
+   }
+}
+
+/* The logic is similar to find_good_pkt_pointers(), both could eventually
+ * be folded together at some point.
+ */
+static void mark_map_regs(struct bpf_verifier_state *state, u32 regno,
+  enum bpf_reg_type type)