Re: [PATCH bpf-next v2 4/9] bpf/verifier: improve register value range tracking with ARSH

2018-04-19 Thread Yonghong Song



On 4/18/18 9:35 PM, Alexei Starovoitov wrote:

> On Wed, Apr 18, 2018 at 09:54:39AM -0700, Yonghong Song wrote:
> > When a helper like bpf_get_stack returns an int value that is later
> > used for arithmetic computation, LSH and ARSH operations are often
> > required to get proper sign extension into 64 bits. For example,
> > without this patch:
> >  54: R0=inv(id=0,umax_value=800)
> >  54: (bf) r8 = r0
> >  55: R0=inv(id=0,umax_value=800) R8_w=inv(id=0,umax_value=800)
> >  55: (67) r8 <<= 32
> >  56: R8_w=inv(id=0,umax_value=3435973836800,var_off=(0x0; 0x3ff00000000))
> >  56: (c7) r8 s>>= 32
> >  57: R8=inv(id=0)
> > With this patch:
> >  54: R0=inv(id=0,umax_value=800)
> >  54: (bf) r8 = r0
> >  55: R0=inv(id=0,umax_value=800) R8_w=inv(id=0,umax_value=800)
> >  55: (67) r8 <<= 32
> >  56: R8_w=inv(id=0,umax_value=3435973836800,var_off=(0x0; 0x3ff00000000))
> >  56: (c7) r8 s>>= 32
> >  57: R8=inv(id=0, umax_value=800,var_off=(0x0; 0x3ff))
> > With a better range for "R8", when "R8" is later added to another
> > register, e.g., a map pointer or scalar-value register, a better range
> > can be derived for the result and a verifier failure may be avoided.

> > In our later example,
> >  ..
> >  usize = bpf_get_stack(ctx, raw_data, max_len, BPF_F_USER_STACK);
> >  if (usize < 0)
> >      return 0;
> >  ksize = bpf_get_stack(ctx, raw_data + usize, max_len - usize, 0);
> >  ..
> > Without improving ARSH value range tracking, the register representing
> > "max_len - usize" will have smin_value equal to S64_MIN and will be
> > rejected by the verifier.

> > Signed-off-by: Yonghong Song 
> > ---
> >   kernel/bpf/verifier.c | 1 +
> >   1 file changed, 1 insertion(+)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index a8302c3..6148d31 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -2944,6 +2944,7 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
> > __update_reg_bounds(dst_reg);
> > break;
> > case BPF_RSH:
> > +   case BPF_ARSH:


> I don't think that's correct.
> The code further down is very RSH specific.


Okay, I may need to introduce tnum_arshift then.
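
For reference, such a helper could mirror the existing tnum_rshift() in
kernel/bpf/tnum.c, but shift as signed 64-bit values: shifting .value as
s64 replicates a known sign bit, and shifting .mask as s64 marks the
newly filled high bits as unknown whenever the sign bit itself is
unknown. A minimal sketch, using the TNUM() constructor from that file
(this is only a sketch, not the actual follow-up patch):

/* Arithmetic shift (arsh) a tnum right by a constant shift amount. */
struct tnum tnum_arshift(struct tnum a, u8 min_shift)
{
        return TNUM((s64)a.value >> min_shift, (s64)a.mask >> min_shift);
}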




> > if (umax_val >= insn_bitness) {
> > /* Shifts greater than 31 or 63 are undefined.
> >  * This includes shifts by a negative number.
> > --
> > 2.9.5



Re: [PATCH bpf-next v2 4/9] bpf/verifier: improve register value range tracking with ARSH

2018-04-18 Thread Alexei Starovoitov
On Wed, Apr 18, 2018 at 09:54:39AM -0700, Yonghong Song wrote:
> When a helper like bpf_get_stack returns an int value that is later
> used for arithmetic computation, LSH and ARSH operations are often
> required to get proper sign extension into 64 bits. For example,
> without this patch:
> 54: R0=inv(id=0,umax_value=800)
> 54: (bf) r8 = r0
> 55: R0=inv(id=0,umax_value=800) R8_w=inv(id=0,umax_value=800)
> 55: (67) r8 <<= 32
> 56: R8_w=inv(id=0,umax_value=3435973836800,var_off=(0x0; 0x3ff00000000))
> 56: (c7) r8 s>>= 32
> 57: R8=inv(id=0)
> With this patch:
> 54: R0=inv(id=0,umax_value=800)
> 54: (bf) r8 = r0
> 55: R0=inv(id=0,umax_value=800) R8_w=inv(id=0,umax_value=800)
> 55: (67) r8 <<= 32
> 56: R8_w=inv(id=0,umax_value=3435973836800,var_off=(0x0; 0x3ff00000000))
> 56: (c7) r8 s>>= 32
> 57: R8=inv(id=0, umax_value=800,var_off=(0x0; 0x3ff))
> With a better range for "R8", when "R8" is later added to another
> register, e.g., a map pointer or scalar-value register, a better range
> can be derived for the result and a verifier failure may be avoided.
> 
> In our later example,
> ..
> usize = bpf_get_stack(ctx, raw_data, max_len, BPF_F_USER_STACK);
> if (usize < 0)
>     return 0;
> ksize = bpf_get_stack(ctx, raw_data + usize, max_len - usize, 0);
> ..
> Without improving ARSH value range tracking, the register representing
> "max_len - usize" will have smin_value equal to S64_MIN and will be
> rejected by the verifier.
> 
> Signed-off-by: Yonghong Song 
> ---
>  kernel/bpf/verifier.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index a8302c3..6148d31 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -2944,6 +2944,7 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
>   __update_reg_bounds(dst_reg);
>   break;
>   case BPF_RSH:
> + case BPF_ARSH:

I don't think that's correct.
The code further down is very RSH specific.
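
For illustration, a dedicated BPF_ARSH case would likely need its own
bounds math rather than reusing the RSH logic, roughly along the lines
of the sketch below (assuming a tnum_arshift() helper of the kind
Yonghong mentions in his reply above; this is a sketch of the shape,
not the eventual fix):

        case BPF_ARSH:
                if (umax_val >= insn_bitness) {
                        /* Shifts >= the register width are undefined,
                         * including shifts by a negative number.
                         */
                        mark_reg_unknown(env, regs, insn->dst_reg);
                        break;
                }
                /* src_reg is a known constant here, so umin_val == umax_val.
                 * An arithmetic right shift preserves the sign, so the
                 * signed bounds can be shifted directly.
                 */
                dst_reg->smin_value >>= umin_val;
                dst_reg->smax_value >>= umin_val;
                dst_reg->var_off = tnum_arshift(dst_reg->var_off, umin_val);
                /* Blow away the unsigned bounds and let var_off refine
                 * them, since a negative result wraps to a huge unsigned
                 * value.
                 */
                dst_reg->umin_value = 0;
                dst_reg->umax_value = U64_MAX;
                __update_reg_bounds(dst_reg);
                break;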

>   if (umax_val >= insn_bitness) {
>   /* Shifts greater than 31 or 63 are undefined.
>* This includes shifts by a negative number.
> -- 
> 2.9.5
> 


[PATCH bpf-next v2 4/9] bpf/verifier: improve register value range tracking with ARSH

2018-04-18 Thread Yonghong Song
When a helper like bpf_get_stack returns an int value that is later
used for arithmetic computation, LSH and ARSH operations are often
required to get proper sign extension into 64 bits. For example,
without this patch:
54: R0=inv(id=0,umax_value=800)
54: (bf) r8 = r0
55: R0=inv(id=0,umax_value=800) R8_w=inv(id=0,umax_value=800)
55: (67) r8 <<= 32
56: R8_w=inv(id=0,umax_value=3435973836800,var_off=(0x0; 0x3ff00000000))
56: (c7) r8 s>>= 32
57: R8=inv(id=0)
With this patch:
54: R0=inv(id=0,umax_value=800)
54: (bf) r8 = r0
55: R0=inv(id=0,umax_value=800) R8_w=inv(id=0,umax_value=800)
55: (67) r8 <<= 32
56: R8_w=inv(id=0,umax_value=3435973836800,var_off=(0x0; 0x3ff00000000))
56: (c7) r8 s>>= 32
57: R8=inv(id=0, umax_value=800,var_off=(0x0; 0x3ff))
With a better range for "R8", when "R8" is later added to another
register, e.g., a map pointer or scalar-value register, a better range
can be derived for the result and a verifier failure may be avoided.

In our later example,
..
usize = bpf_get_stack(ctx, raw_data, max_len, BPF_F_USER_STACK);
if (usize < 0)
    return 0;
ksize = bpf_get_stack(ctx, raw_data + usize, max_len - usize, 0);
..
Without improving ARSH value range tracking, the register representing
"max_len - usize" will have smin_value equal to S64_MIN and will be
rejected by the verifier.
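
For context, a minimal sketch (not part of this patch) of the kind of
BPF C program behind the example above; the buffer size, section name
and file layout are made up, and "bpf_helpers.h" stands in for the
selftests' helper header that declares bpf_get_stack():

#include <linux/bpf.h>
#include "bpf_helpers.h"

SEC("raw_tracepoint/sys_enter")
int get_stack_example(void *ctx)
{
        char raw_data[256];
        int max_len = sizeof(raw_data);
        int usize, ksize;

        /* bpf_get_stack() returns an int (bytes copied or a negative
         * error).  Using the 32-bit usize in the 64-bit pointer
         * arithmetic below makes LLVM emit the "r8 <<= 32; r8 s>>= 32"
         * sign-extension pair shown in the verifier log above.
         */
        usize = bpf_get_stack(ctx, raw_data, max_len, BPF_F_USER_STACK);
        if (usize < 0)
                return 0;

        /* Without better ARSH range tracking, "max_len - usize" is seen
         * with smin_value == S64_MIN and this second call is rejected.
         */
        ksize = bpf_get_stack(ctx, raw_data + usize, max_len - usize, 0);
        if (ksize < 0)
                return 0;

        return 0;
}

char _license[] SEC("license") = "GPL";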

Signed-off-by: Yonghong Song 
---
 kernel/bpf/verifier.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a8302c3..6148d31 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2944,6 +2944,7 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
__update_reg_bounds(dst_reg);
break;
case BPF_RSH:
+   case BPF_ARSH:
if (umax_val >= insn_bitness) {
/* Shifts greater than 31 or 63 are undefined.
 * This includes shifts by a negative number.
-- 
2.9.5