Function decrypt_skb() wrongly assumed that the number of sg entries required for mapping the skb to be decrypted would always be less than MAX_SKB_FRAGS. The required count of sg entries for the skb should always be calculated; if the entries cannot fit in the local array sgin_arr[], allocate them from the heap, irrespective of whether it is the zero-copy case or not. The change also benefits the non-zero-copy case, as we can now use sgin_arr[] instead of always allocating sg entries from the heap.
Signed-off-by: Vakul Garg <vakul.g...@nxp.com>
---
The said problem has been discussed with Dave Watson on the mailing list.

 net/tls/tls_sw.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index ff3a6904a722..e2cf7aebb877 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -693,7 +693,7 @@ int decrypt_skb(struct sock *sk, struct sk_buff *skb,
 	struct scatterlist sgin_arr[MAX_SKB_FRAGS + 2];
 	struct scatterlist *sgin = &sgin_arr[0];
 	struct strp_msg *rxm = strp_msg(skb);
-	int ret, nsg = ARRAY_SIZE(sgin_arr);
+	int ret, nsg;
 	struct sk_buff *unused;
 
 	ret = skb_copy_bits(skb, rxm->offset + TLS_HEADER_SIZE,
@@ -703,12 +703,20 @@ int decrypt_skb(struct sock *sk, struct sk_buff *skb,
 		return ret;
 
 	memcpy(iv, tls_ctx->rx.iv, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
-	if (!sgout) {
-		nsg = skb_cow_data(skb, 0, &unused) + 1;
+
+	/* If required number of SG entries for skb are more than
+	 * sgin_arr elements, then dynamically allocate sg table.
+	 */
+	nsg = skb_cow_data(skb, 0, &unused) + 1;
+	if (nsg > ARRAY_SIZE(sgin_arr)) {
 		sgin = kmalloc_array(nsg, sizeof(*sgin), sk->sk_allocation);
-		sgout = sgin;
+		if (!sgin)
+			return -ENOMEM;
 	}
+
+	if (!sgout)
+		sgout = sgin;
+
 	sg_init_table(sgin, nsg);
 	sg_set_buf(&sgin[0], ctx->rx_aad_ciphertext, TLS_AAD_SPACE_SIZE);
-- 
2.13.6