From: Eric Biggers <ebigg...@google.com>

commit aec286cd36eacfd797e3d5dab8d5d23c15d1bb5e upstream.

If the user-provided IV needs to be aligned to the algorithm's
alignmask, then skcipher_walk_virt() copies the IV into a new aligned
buffer walk.iv.  But skcipher_walk_virt() can fail afterwards, and then
if the caller unconditionally accesses walk.iv, it's a use-after-free.

Fix this in the LRW template by checking the return value of
skcipher_walk_virt().
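
For illustration, a minimal sketch of the safe calling pattern (a
hypothetical helper, not the actual lrw.c function): the return value of
skcipher_walk_virt() must be checked before walk.iv is dereferenced,
because on failure the aligned IV buffer it would point at may already
have been freed.

	#include <crypto/internal/skcipher.h>

	static int example_read_counter(struct skcipher_request *req,
					u32 counter[4])
	{
		struct skcipher_walk w;
		const __be32 *iv;
		int err;

		/*
		 * May copy the caller's IV into an aligned buffer at
		 * w.iv, and may fail after doing so.
		 */
		err = skcipher_walk_virt(&w, req, false);
		if (err)
			return err;	/* w.iv must not be touched on failure */

		/* Valid only after a successful init, as in the fix below. */
		iv = (const __be32 *)w.iv;
		counter[0] = be32_to_cpu(iv[3]);
		counter[1] = be32_to_cpu(iv[2]);
		counter[2] = be32_to_cpu(iv[1]);
		counter[3] = be32_to_cpu(iv[0]);

		/* Finish the walk without transforming any data. */
		while (w.nbytes)
			err = skcipher_walk_done(&w, 0);

		return err;
	}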

This bug was detected by my patches that improve testmgr to fuzz
algorithms against their generic implementation.  When the extra
self-tests were run on a KASAN-enabled kernel, a KASAN use-after-free
splat occurred during lrw(aes) testing.

Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation")
Cc: <sta...@vger.kernel.org> # v4.20+
Cc: Ondrej Mosnacek <omosn...@redhat.com>
Signed-off-by: Eric Biggers <ebigg...@google.com>
Signed-off-by: Herbert Xu <herb...@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>

---
 crypto/lrw.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/crypto/lrw.c
+++ b/crypto/lrw.c
@@ -162,8 +162,10 @@ static int xor_tweak(struct skcipher_req
        }
 
        err = skcipher_walk_virt(&w, req, false);
-       iv = (__be32 *)w.iv;
+       if (err)
+               return err;
 
+       iv = (__be32 *)w.iv;
        counter[0] = be32_to_cpu(iv[3]);
        counter[1] = be32_to_cpu(iv[2]);
        counter[2] = be32_to_cpu(iv[1]);

