Re: [PATCH net-next] net/tls: Always get number of sg entries for skb to be decrypted

2018-08-02 Thread Doron Roberts-Kedes
On Thu, Aug 02, 2018 at 09:50:58AM -0700, Dave Watson wrote:
> On 08/02/18 09:50 PM, Vakul Garg wrote:
> > Function decrypt_skb() made a bad assumption that the number of sg
> > entries required for mapping an skb to be decrypted would always be
> > less than MAX_SKB_FRAGS. The required count of sg entries for the skb
> > should always be calculated. If they cannot fit in the local array
> > sgin_arr[], allocate them from the heap irrespective of whether it is
> > the zero-copy case or otherwise. The change also benefits the
> > non-zero-copy case, as we can use sgin_arr[] instead of always
> > allocating sg entries from the heap.
> > 
> > Signed-off-by: Vakul Garg 
> 
> Looks great, thanks.
> 
> Reviewed-by: Dave Watson 

I agree that this is a problem, but I'm not sure this is the best way to
fix it. Calling skb_cow_data unconditionally is expensive. In my
benchmarks this patch caused a 5% CPU regression, all from the memcpy in
skb_cow_data, and a 15% regression in encrypted NBD IOPS. It is possible
to calculate the number of scatterlist elements required to map the skb
without invoking skb_cow_data. I'll have a patch up shortly.


Re: [PATCH net-next] net/tls: Always get number of sg entries for skb to be decrypted

2018-08-02 Thread Dave Watson
On 08/02/18 09:50 PM, Vakul Garg wrote:
> Function decrypt_skb() made a bad assumption that the number of sg
> entries required for mapping an skb to be decrypted would always be
> less than MAX_SKB_FRAGS. The required count of sg entries for the skb
> should always be calculated. If they cannot fit in the local array
> sgin_arr[], allocate them from the heap irrespective of whether it is
> the zero-copy case or otherwise. The change also benefits the
> non-zero-copy case, as we can use sgin_arr[] instead of always
> allocating sg entries from the heap.
> 
> Signed-off-by: Vakul Garg 

Looks great, thanks.

Reviewed-by: Dave Watson 


[PATCH net-next] net/tls: Always get number of sg entries for skb to be decrypted

2018-08-02 Thread Vakul Garg
Function decrypt_skb() made a bad assumption that the number of sg
entries required for mapping an skb to be decrypted would always be less
than MAX_SKB_FRAGS. The required count of sg entries for the skb should
always be calculated. If they cannot fit in the local array sgin_arr[],
allocate them from the heap irrespective of whether it is the zero-copy
case or otherwise. The change also benefits the non-zero-copy case, as
we can use sgin_arr[] instead of always allocating sg entries from the
heap.

Signed-off-by: Vakul Garg 
---

This problem was discussed with Dave Watson on the mailing list.

 net/tls/tls_sw.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index ff3a6904a722..e2cf7aebb877 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -693,7 +693,7 @@ int decrypt_skb(struct sock *sk, struct sk_buff *skb,
struct scatterlist sgin_arr[MAX_SKB_FRAGS + 2];
	struct scatterlist *sgin = &sgin_arr[0];
struct strp_msg *rxm = strp_msg(skb);
-   int ret, nsg = ARRAY_SIZE(sgin_arr);
+   int ret, nsg;
struct sk_buff *unused;
 
ret = skb_copy_bits(skb, rxm->offset + TLS_HEADER_SIZE,
@@ -703,12 +703,20 @@ int decrypt_skb(struct sock *sk, struct sk_buff *skb,
return ret;
 
memcpy(iv, tls_ctx->rx.iv, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
-   if (!sgout) {
-   nsg = skb_cow_data(skb, 0, &unused) + 1;
+
+   /* If the number of sg entries required for the skb exceeds the
+    * size of sgin_arr, dynamically allocate an sg table.
+    */
+   nsg = skb_cow_data(skb, 0, &unused) + 1;
+   if (nsg > ARRAY_SIZE(sgin_arr)) {
sgin = kmalloc_array(nsg, sizeof(*sgin), sk->sk_allocation);
-   sgout = sgin;
+   if (!sgin)
+   return -ENOMEM;
}
 
+   if (!sgout)
+   sgout = sgin;
+
sg_init_table(sgin, nsg);
	sg_set_buf(&sgin[0], ctx->rx_aad_ciphertext, TLS_AAD_SPACE_SIZE);
 
-- 
2.13.6