The branch master has been updated
       via  e9f148c9356b18995298f37bafbf1836a3fce078 (commit)
      from  3e4e43e609d6e9c36e5e526246d31802102cad4a (commit)


- Log -----------------------------------------------------------------
commit e9f148c9356b18995298f37bafbf1836a3fce078
Author: Daniel Axtens <[email protected]>
Date:   Fri May 17 10:59:40 2019 +1000

    ppc assembly pack: always increment CTR IV as quadword
    
    The kernel self-tests picked up an issue with CTR mode. The issue was
    detected with a test vector with an IV of
    FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFD: after 3 increments it should wrap
    around to 0.
    
    There are two paths that increment IVs: the bulk (8 at a time) path,
    and the individual path which is used when there are fewer than 8 AES
    blocks to process.
    
    In the bulk path, the IV is incremented with vadduqm: "Vector Add
    Unsigned Quadword Modulo", which does 128-bit addition.
    
    In the individual path, however, the IV is incremented with vadduwm:
    "Vector Add Unsigned Word Modulo", which instead does 4 32-bit
    additions. Thus the IV would instead become
    FFFFFFFFFFFFFFFFFFFFFFFF00000000, throwing off the result.
    
    Use vadduqm.
    
    This was probably a typo originally, what with q and w being
    adjacent.
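    
    The difference between the two instructions can be sketched in
    Python (a model of the two vector adds, not the actual assembly;
    it assumes the increment vector holds 1 in the low word, as here):

```python
MASK32 = 0xFFFFFFFF
MASK128 = (1 << 128) - 1

def vadduqm(iv, one=1):
    """128-bit modular add, as in the bulk (8-at-a-time) path."""
    return (iv + one) & MASK128

def vadduwm(iv, one=1):
    """Four independent 32-bit modular adds, as in the buggy
    individual path: a carry never propagates past word 0."""
    result = 0
    for i in range(4):
        w = (iv >> (32 * i)) & MASK32
        o = (one >> (32 * i)) & MASK32
        result |= ((w + o) & MASK32) << (32 * i)
    return result

iv = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFD
good = bad = iv
for _ in range(3):
    good = vadduqm(good)  # wraps the full 128-bit counter to 0
    bad = vadduwm(bad)    # low word wraps to 0, upper words untouched

print(f"{good:032X}")  # 00000000000000000000000000000000
print(f"{bad:032X}")   # FFFFFFFFFFFFFFFFFFFFFFFF00000000
```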
    
    CLA: trivial
    
    Reviewed-by: Richard Levitte <[email protected]>
    Reviewed-by: Paul Dale <[email protected]>
    (Merged from https://github.com/openssl/openssl/pull/8942)

-----------------------------------------------------------------------

Summary of changes:
 crypto/aes/asm/aesp8-ppc.pl | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/crypto/aes/asm/aesp8-ppc.pl b/crypto/aes/asm/aesp8-ppc.pl
index 44056e3..30ccecf 100755
--- a/crypto/aes/asm/aesp8-ppc.pl
+++ b/crypto/aes/asm/aesp8-ppc.pl
@@ -1331,7 +1331,7 @@ Loop_ctr32_enc:
        addi            $idx,$idx,16
        bdnz            Loop_ctr32_enc
 
-       vadduwm         $ivec,$ivec,$one
+       vadduqm         $ivec,$ivec,$one
         vmr            $dat,$inptail
         lvx            $inptail,0,$inp
         addi           $inp,$inp,16
