When zerocopy_from_iter() failed, zero-copy mode was left enabled even though control fell back to the regular non-zero-copy path. With 'zc' still set, the call to the datagram copy function skb_copy_datagram_msg() was skipped, and an extra iteration of the record decryption loop was required to copy the decrypted record into the user-supplied buffer. Hence, zero-copy mode should be enabled or disabled strictly according to the success or failure of zerocopy_from_iter().
Fixes: c46234ebb4d1 ("tls: RX path for ktls")
Signed-off-by: Vakul Garg <vakul.g...@nxp.com>
---
The patch does not need to be applied to 'net' branch as it does not
fix any functional bug. The patch achieves better run-time efficiency
and code readability.

 net/tls/tls_sw.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 0c2d029c9d4c..9ae57bec4927 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -787,7 +787,7 @@ int tls_sw_recvmsg(struct sock *sk,
 	target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);
 	timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
 	do {
-		bool zc = false;
+		bool zc;
 		int chunk = 0;
 
 		skb = tls_wait_data(sk, flags, timeo, &err);
@@ -824,7 +824,6 @@ int tls_sw_recvmsg(struct sock *sk,
 			struct scatterlist sgin[MAX_SKB_FRAGS + 1];
 			int pages = 0;
 
-			zc = true;
 			sg_init_table(sgin, MAX_SKB_FRAGS + 1);
 			sg_set_buf(&sgin[0], ctx->rx_aad_plaintext,
 				   TLS_AAD_SPACE_SIZE);
@@ -836,6 +835,7 @@ int tls_sw_recvmsg(struct sock *sk,
 			if (err < 0)
 				goto fallback_to_reg_recv;
 
+			zc = true;
 			err = decrypt_skb_update(sk, skb, sgin, &zc);
 			for (; pages > 0; pages--)
 				put_page(sg_page(&sgin[pages]));
@@ -845,6 +845,7 @@ int tls_sw_recvmsg(struct sock *sk,
 			}
 		} else {
 fallback_to_reg_recv:
+			zc = false;
 			err = decrypt_skb_update(sk, skb, NULL, &zc);
 			if (err < 0) {
 				tls_err_abort(sk, EBADMSG);
-- 
2.13.6