4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Antoine Tenart <antoine.ten...@free-electrons.com>

commit 809778e02cd45d0625439fee67688f655627bb3c upstream.

This patch fixes the hash support in the SafeXcel driver when the update
size is a multiple of a block size, and when a final call is made just
after with a size of 0. In such cases the driver should cache the last
block from the update to avoid handling 0 length data on the final call
(that's a hardware limitation).

Fixes: 1b44c5a60c13 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
Signed-off-by: Antoine Tenart <antoine.ten...@free-electrons.com>
Signed-off-by: Herbert Xu <herb...@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>

---
 drivers/crypto/inside-secure/safexcel_hash.c |   34 +++++++++++++++++++--------
 1 file changed, 24 insertions(+), 10 deletions(-)

--- a/drivers/crypto/inside-secure/safexcel_hash.c
+++ b/drivers/crypto/inside-secure/safexcel_hash.c
@@ -185,17 +185,31 @@ static int safexcel_ahash_send(struct cr
        else
                cache_len = queued - areq->nbytes;
 
-       /*
-        * If this is not the last request and the queued data does not fit
-        * into full blocks, cache it for the next send() call.
-        */
-       extra = queued & (crypto_ahash_blocksize(ahash) - 1);
-       if (!req->last_req && extra) {
-               sg_pcopy_to_buffer(areq->src, sg_nents(areq->src),
-                                  req->cache_next, extra, areq->nbytes - extra);
+       if (!req->last_req) {
+               /* If this is not the last request and the queued data does not
+                * fit into full blocks, cache it for the next send() call.
+                */
+               extra = queued & (crypto_ahash_blocksize(ahash) - 1);
+               if (!extra)
+                       /* If this is not the last request and the queued data
+                        * is a multiple of a block, cache the last one for now.
+                        */
+                       extra = queued - crypto_ahash_blocksize(ahash);
 
-               queued -= extra;
-               len -= extra;
+               if (extra) {
+                       sg_pcopy_to_buffer(areq->src, sg_nents(areq->src),
+                                          req->cache_next, extra,
+                                          areq->nbytes - extra);
+
+                       queued -= extra;
+                       len -= extra;
+
+                       if (!queued) {
+                               *commands = 0;
+                               *results = 0;
+                               return 0;
+                       }
+               }
        }
 
        spin_lock_bh(&priv->ring[ring].egress_lock);
