From: Eric Biggers <ebigg...@google.com>

The arm64 NEON bit-sliced implementation of AES-CTR fails the improved
skcipher tests because it sometimes produces the wrong ciphertext.  The
bug is that the final keystream block isn't returned from the assembly
code when the number of non-final blocks is zero.  This can happen if
the input data ends a few bytes after a page boundary.  In this case the
last bytes get "encrypted" by XOR'ing them with uninitialized memory.

Fix the assembly code to return the final keystream block when needed.
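
To make the calling convention concrete, the following is a hypothetical,
userspace-only C sketch of the pattern the glue code relies on.  The names
(toy_ctr_encrypt, ctr_encrypt) and the dummy "keystream" are illustrative
only, not the kernel's actual symbols or real AES: the point is that the
block routine must store the final keystream block into the caller-supplied
buffer whenever a partial tail remains, even when it processed zero full
blocks; otherwise the tail gets XOR'ed with whatever garbage happens to be
in that buffer.

	#include <stdint.h>
	#include <stddef.h>
	#include <stdio.h>
	#include <string.h>

	#define BLOCK_SIZE 16

	/*
	 * Toy stand-in for the NEON routine: "encrypts" 'blocks' full blocks
	 * and, when 'final' is non-NULL, also stores the next keystream block
	 * there.  The keystream block here is just the counter block, which
	 * is enough to show the calling convention.
	 */
	static void toy_ctr_encrypt(uint8_t *dst, const uint8_t *src,
				    size_t blocks, uint8_t ctr[BLOCK_SIZE],
				    uint8_t *final)
	{
		for (size_t b = 0; b < blocks; b++) {
			for (size_t i = 0; i < BLOCK_SIZE; i++)
				dst[b * BLOCK_SIZE + i] =
					src[b * BLOCK_SIZE + i] ^ ctr[i];
			ctr[BLOCK_SIZE - 1]++;	/* toy counter increment */
		}
		if (final)
			memcpy(final, ctr, BLOCK_SIZE);	/* the store the buggy
							 * asm skipped when
							 * blocks == 0 */
	}

	/*
	 * Caller-side pattern: the trailing partial block is XOR'ed with the
	 * returned final keystream block.  If 'final' were never written,
	 * 'ks' would stay uninitialized and the tail would be "encrypted"
	 * with garbage.
	 */
	static void ctr_encrypt(uint8_t *dst, const uint8_t *src, size_t len,
				uint8_t ctr[BLOCK_SIZE])
	{
		size_t blocks = len / BLOCK_SIZE;
		size_t tail = len % BLOCK_SIZE;
		uint8_t ks[BLOCK_SIZE];

		toy_ctr_encrypt(dst, src, blocks, ctr, tail ? ks : NULL);

		for (size_t i = 0; i < tail; i++)
			dst[blocks * BLOCK_SIZE + i] =
				src[blocks * BLOCK_SIZE + i] ^ ks[i];
	}

	int main(void)
	{
		uint8_t ctr[BLOCK_SIZE] = { 0 };
		uint8_t src[20] = "nineteen byte input";
		uint8_t dst[20];

		/* 20 bytes = 1 full block + 4-byte tail, exercising the path
		 * where the final keystream block must be returned. */
		ctr_encrypt(dst, src, sizeof(src), ctr);
		printf("%02x...%02x\n", dst[0], dst[19]);
		return 0;
	}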

Fixes: 88a3f582bea9 ("crypto: arm64/aes - don't use IV buffer to return final keystream block")
Cc: <sta...@vger.kernel.org> # v4.11+
Reviewed-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
Signed-off-by: Eric Biggers <ebigg...@google.com>
---
 arch/arm64/crypto/aes-neonbs-core.S | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/crypto/aes-neonbs-core.S b/arch/arm64/crypto/aes-neonbs-core.S
index e613a87f8b53f..8432c8d0dea66 100644
--- a/arch/arm64/crypto/aes-neonbs-core.S
+++ b/arch/arm64/crypto/aes-neonbs-core.S
@@ -971,18 +971,22 @@ CPU_LE(   rev             x8, x8          )
 
 8:     next_ctr        v0
        st1             {v0.16b}, [x24]
-       cbz             x23, 0f
+       cbz             x23, .Lctr_done
 
        cond_yield_neon 98b
        b               99b
 
-0:     frame_pop
+.Lctr_done:
+       frame_pop
        ret
 
        /*
         * If we are handling the tail of the input (x6 != NULL), return the
         * final keystream block back to the caller.
         */
+0:     cbz             x25, 8b
+       st1             {v0.16b}, [x25]
+       b               8b
 1:     cbz             x25, 8b
        st1             {v1.16b}, [x25]
        b               8b
-- 
2.20.1