Re: [PATCH v2 bpf-next 2/2] bpf: sockmap: initialize sg table entries properly

2018-03-29 Thread John Fastabend
On 03/29/2018 05:21 PM, Prashant Bhole wrote:
> When CONFIG_DEBUG_SG is set, sg->sg_magic is initialized in
> sg_init_table() and verified by the sg API while navigating the
> scatterlist. We hit a BUG_ON when the magic check fails.
> 
> In bpf_tcp_sendpage and bpf_tcp_sendmsg, the struct containing the
> scatterlist is already zeroed out, so to avoid an extra memset we use
> sg_init_marker() to initialize sg_magic.
> 
> Fixed the following:
> - In bpf_tcp_sendpage: initialize sg using sg_init_marker
> - In bpf_tcp_sendmsg: replace sg_init_table with sg_init_marker
> - In bpf_tcp_push: replace memset with sg_init_table where a consumed
>   sg entry needs to be re-initialized.
> 
> Signed-off-by: Prashant Bhole 
> ---
 kernel/bpf/sockmap.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 

Acked-by: John Fastabend 



[PATCH v2 bpf-next 2/2] bpf: sockmap: initialize sg table entries properly

2018-03-29 Thread Prashant Bhole
When CONFIG_DEBUG_SG is set, sg->sg_magic is initialized in
sg_init_table() and verified by the sg API while navigating the
scatterlist. We hit a BUG_ON when the magic check fails.

In bpf_tcp_sendpage and bpf_tcp_sendmsg, the struct containing the
scatterlist is already zeroed out, so to avoid an extra memset we use
sg_init_marker() to initialize sg_magic.

Fixed the following:
- In bpf_tcp_sendpage: initialize sg using sg_init_marker
- In bpf_tcp_sendmsg: replace sg_init_table with sg_init_marker
- In bpf_tcp_push: replace memset with sg_init_table where a consumed
  sg entry needs to be re-initialized.

Signed-off-by: Prashant Bhole 
---
 kernel/bpf/sockmap.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
index 69c5bccabd22..b4f01656c452 100644
--- a/kernel/bpf/sockmap.c
+++ b/kernel/bpf/sockmap.c
@@ -312,7 +312,7 @@ static int bpf_tcp_push(struct sock *sk, int apply_bytes,
 			md->sg_start++;
 			if (md->sg_start == MAX_SKB_FRAGS)
 				md->sg_start = 0;
-			memset(sg, 0, sizeof(*sg));
+			sg_init_table(sg, 1);
 
 			if (md->sg_start == md->sg_end)
 				break;
@@ -656,7 +656,7 @@ static int bpf_tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 	}
 
 	sg = md.sg_data;
-	sg_init_table(sg, MAX_SKB_FRAGS);
+	sg_init_marker(sg, MAX_SKB_FRAGS);
 	rcu_read_unlock();
 
 	lock_sock(sk);
@@ -763,10 +763,14 @@ static int bpf_tcp_sendpage(struct sock *sk, struct page *page,
 
 	lock_sock(sk);
 
-	if (psock->cork_bytes)
+	if (psock->cork_bytes) {
 		m = psock->cork;
-	else
+		sg = &m->sg_data[m->sg_end];
+	} else {
 		m = &md;
+		sg = m->sg_data;
+		sg_init_marker(sg, MAX_SKB_FRAGS);
+	}
 
 	/* Catch case where ring is full and sendpage is stalled. */
 	if (unlikely(m->sg_end == m->sg_start &&
@@ -774,7 +778,6 @@ static int bpf_tcp_sendpage(struct sock *sk, struct page *page,
 		goto out_err;
 
 	psock->sg_size += size;
-	sg = &m->sg_data[m->sg_end];
 	sg_set_page(sg, page, size, offset);
 	get_page(page);
 	m->sg_copy[m->sg_end] = true;
-- 
2.14.3