Re: [PATCH 1/2] arm64 aes: fix encryption of unaligned data

2014-07-28 Thread Herbert Xu
On Fri, Jul 25, 2014 at 07:40:20PM -0400, Mikulas Patocka wrote:
> cryptsetup fails on arm64 when using kernel encryption via an AF_ALG socket.
> See https://bugzilla.redhat.com/show_bug.cgi?id=1122937
> 
> The bug is caused by incorrect handling of unaligned data in
> arch/arm64/crypto/aes-glue.c. Cryptsetup creates a buffer that is aligned
> on 8 bytes, but not on 16 bytes. It opens an AF_ALG socket and uses the
> socket to encrypt data in the buffer. The arm64 crypto accelerator causes
> data corruption or crashes in scatterwalk_pagedone.
> 
> This patch fixes the bug by passing the residue bytes that were not
> processed as the last parameter to blkcipher_walk_done.
> 
> Signed-off-by: Mikulas Patocka 

Both patches applied to crypto.  Thanks for catching this!
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 1/2] arm64 aes: fix encryption of unaligned data

2014-07-26 Thread Ard Biesheuvel
On 26 July 2014 17:31, Mikulas Patocka  wrote:
>
>
> On Sat, 26 Jul 2014, Ard Biesheuvel wrote:
>
>> On 26 July 2014 15:13, Ard Biesheuvel  wrote:
>> > On 26 July 2014 01:40, Mikulas Patocka  wrote:
>> >> cryptsetup fails on arm64 when using kernel encryption via an AF_ALG socket.
>> >> See https://bugzilla.redhat.com/show_bug.cgi?id=1122937
>> >>
>> >> The bug is caused by incorrect handling of unaligned data in
>> >> arch/arm64/crypto/aes-glue.c. Cryptsetup creates a buffer that is aligned
>> >> on 8 bytes, but not on 16 bytes. It opens an AF_ALG socket and uses the
>> >> socket to encrypt data in the buffer. The arm64 crypto accelerator causes
>> >> data corruption or crashes in scatterwalk_pagedone.
>> >>
>> >> This patch fixes the bug by passing the residue bytes that were not
>> >> processed as the last parameter to blkcipher_walk_done.
>> >>
>> >> Signed-off-by: Mikulas Patocka 
>> >>
>> >
>> > Acked-by: Ard Biesheuvel 
>> >
>> > Thanks for the patch. This correctly fixes a thinko on my part
>> > regarding the guarantees offered by the blkcipher API. Unfortunately,
>> > this wasn't caught by the tcrypt test suite, so I will propose some
>> > patches later to address cases like this.
>> >
>> > BTW, using kernel crypto with AF_ALG is fairly pointless when using
>> > crypto instructions instead of crypto accelerator peripherals; should
>> > we change anything on the kernel side so we don't expose these
>> > drivers?
>
> The userspace library (gcrypt in this case) doesn't support ARM64, so it's
> still better to call the kernel.
>

OK

>> > @Catalin: this is a bug fix for the code that was merged this cycle. I
>> > would recommend merging this for 3.16, but if not, could you please
>> > add a cc stable? Or ack it and perhaps Herbert can take both? (There
>> > is a similar patch for ARM as well)
>> >
>>
>> ... only this patch fails to repair the ECB case.
>> @Mikulas: could you do a v2 and include ECB encryption/decryption?
>>
>> Cheers,
>> Ard.
>
> The patch changes ecb_encrypt and ecb_decrypt just like cbc and xts
> functions. What is the problem with ECB?
>

You are right, I got the patches mixed up (this one and the ARM one).
Both look fine, and complete.

-- 
Ard.


Re: [PATCH 1/2] arm64 aes: fix encryption of unaligned data

2014-07-26 Thread Mikulas Patocka


On Sat, 26 Jul 2014, Ard Biesheuvel wrote:

> On 26 July 2014 15:13, Ard Biesheuvel  wrote:
> > On 26 July 2014 01:40, Mikulas Patocka  wrote:
> >> cryptsetup fails on arm64 when using kernel encryption via an AF_ALG socket.
> >> See https://bugzilla.redhat.com/show_bug.cgi?id=1122937
> >>
> >> The bug is caused by incorrect handling of unaligned data in
> >> arch/arm64/crypto/aes-glue.c. Cryptsetup creates a buffer that is aligned
> >> on 8 bytes, but not on 16 bytes. It opens an AF_ALG socket and uses the
> >> socket to encrypt data in the buffer. The arm64 crypto accelerator causes
> >> data corruption or crashes in scatterwalk_pagedone.
> >>
> >> This patch fixes the bug by passing the residue bytes that were not
> >> processed as the last parameter to blkcipher_walk_done.
> >>
> >> Signed-off-by: Mikulas Patocka 
> >>
> >
> > Acked-by: Ard Biesheuvel 
> >
> > Thanks for the patch. This correctly fixes a thinko on my part
> > regarding the guarantees offered by the blkcipher API. Unfortunately,
> > this wasn't caught by the tcrypt test suite, so I will propose some
> > patches later to address cases like this.
> >
> > BTW, using kernel crypto with AF_ALG is fairly pointless when using
> > crypto instructions instead of crypto accelerator peripherals; should
> > we change anything on the kernel side so we don't expose these
> > drivers?

The userspace library (gcrypt in this case) doesn't support ARM64, so it's 
still better to call the kernel.

> > @Catalin: this is a bug fix for the code that was merged this cycle. I
> > would recommend merging this for 3.16, but if not, could you please
> > add a cc stable? Or ack it and perhaps Herbert can take both? (There
> > is a similar patch for ARM as well)
> >
> 
> ... only this patch fails to repair the ECB case.
> @Mikulas: could you do a v2 and include ECB encryption/decryption?
> 
> Cheers,
> Ard.

The patch changes ecb_encrypt and ecb_decrypt just like cbc and xts 
functions. What is the problem with ECB?

Mikulas


Re: [PATCH 1/2] arm64 aes: fix encryption of unaligned data

2014-07-26 Thread Ard Biesheuvel
On 26 July 2014 15:13, Ard Biesheuvel  wrote:
> On 26 July 2014 01:40, Mikulas Patocka  wrote:
>> cryptsetup fails on arm64 when using kernel encryption via an AF_ALG socket.
>> See https://bugzilla.redhat.com/show_bug.cgi?id=1122937
>>
>> The bug is caused by incorrect handling of unaligned data in
>> arch/arm64/crypto/aes-glue.c. Cryptsetup creates a buffer that is aligned
>> on 8 bytes, but not on 16 bytes. It opens an AF_ALG socket and uses the
>> socket to encrypt data in the buffer. The arm64 crypto accelerator causes
>> data corruption or crashes in scatterwalk_pagedone.
>>
>> This patch fixes the bug by passing the residue bytes that were not
>> processed as the last parameter to blkcipher_walk_done.
>>
>> Signed-off-by: Mikulas Patocka 
>>
>
> Acked-by: Ard Biesheuvel 
>
> Thanks for the patch. This correctly fixes a thinko on my part
> regarding the guarantees offered by the blkcipher API. Unfortunately,
> this wasn't caught by the tcrypt test suite, so I will propose some
> patches later to address cases like this.
>
> BTW, using kernel crypto with AF_ALG is fairly pointless when using
> crypto instructions instead of crypto accelerator peripherals; should
> we change anything on the kernel side so we don't expose these
> drivers?
>
> @Catalin: this is a bug fix for the code that was merged this cycle. I
> would recommend merging this for 3.16, but if not, could you please
> add a cc stable? Or ack it and perhaps Herbert can take both? (There
> is a similar patch for ARM as well)
>

... only this patch fails to repair the ECB case.
@Mikulas: could you do a v2 and include ECB encryption/decryption?

Cheers,
Ard.



Re: [PATCH 1/2] arm64 aes: fix encryption of unaligned data

2014-07-26 Thread Ard Biesheuvel
On 26 July 2014 01:40, Mikulas Patocka  wrote:
> cryptsetup fails on arm64 when using kernel encryption via an AF_ALG socket.
> See https://bugzilla.redhat.com/show_bug.cgi?id=1122937
>
> The bug is caused by incorrect handling of unaligned data in
> arch/arm64/crypto/aes-glue.c. Cryptsetup creates a buffer that is aligned
> on 8 bytes, but not on 16 bytes. It opens an AF_ALG socket and uses the
> socket to encrypt data in the buffer. The arm64 crypto accelerator causes
> data corruption or crashes in scatterwalk_pagedone.
>
> This patch fixes the bug by passing the residue bytes that were not
> processed as the last parameter to blkcipher_walk_done.
>
> Signed-off-by: Mikulas Patocka 
>

Acked-by: Ard Biesheuvel 

Thanks for the patch. This correctly fixes a thinko on my part
regarding the guarantees offered by the blkcipher API. Unfortunately,
this wasn't caught by the tcrypt test suite, so I will propose some
patches later to address cases like this.

BTW, using kernel crypto with AF_ALG is fairly pointless when using
crypto instructions instead of crypto accelerator peripherals; should
we change anything on the kernel side so we don't expose these
drivers?

@Catalin: this is a bug fix for the code that was merged this cycle. I
would recommend merging this for 3.16, but if not, could you please
add a cc stable? Or ack it and perhaps Herbert can take both? (There
is a similar patch for ARM as well)

Regards,
Ard.



> Index: linux-3.16.0-0.rc6.git1.1.fc21.aarch64/arch/arm64/crypto/aes-glue.c
> ===
> --- linux-3.16.0-0.rc6.git1.1.fc21.aarch64.orig/arch/arm64/crypto/aes-glue.c
> +++ linux-3.16.0-0.rc6.git1.1.fc21.aarch64/arch/arm64/crypto/aes-glue.c
> @@ -106,7 +106,7 @@ static int ecb_encrypt(struct blkcipher_
>     for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
>         aes_ecb_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>                 (u8 *)ctx->key_enc, rounds, blocks, first);
> -       err = blkcipher_walk_done(desc, &walk, 0);
> +       err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>     }
>     kernel_neon_end();
>     return err;
> @@ -128,7 +128,7 @@ static int ecb_decrypt(struct blkcipher_
>     for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
>         aes_ecb_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
>                 (u8 *)ctx->key_dec, rounds, blocks, first);
> -       err = blkcipher_walk_done(desc, &walk, 0);
> +       err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>     }
>     kernel_neon_end();
>     return err;
> @@ -151,7 +151,7 @@ static int cbc_encrypt(struct blkcipher_
>         aes_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>                 (u8 *)ctx->key_enc, rounds, blocks, walk.iv,
>                 first);
> -       err = blkcipher_walk_done(desc, &walk, 0);
> +       err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>     }
>     kernel_neon_end();
>     return err;
> @@ -174,7 +174,7 @@ static int cbc_decrypt(struct blkcipher_
>         aes_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
>                 (u8 *)ctx->key_dec, rounds, blocks, walk.iv,
>                 first);
> -       err = blkcipher_walk_done(desc, &walk, 0);
> +       err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>     }
>     kernel_neon_end();
>     return err;
> @@ -243,7 +243,7 @@ static int xts_encrypt(struct blkcipher_
>         aes_xts_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>                 (u8 *)ctx->key1.key_enc, rounds, blocks,
>                 (u8 *)ctx->key2.key_enc, walk.iv, first);
> -       err = blkcipher_walk_done(desc, &walk, 0);
> +       err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>     }
>     kernel_neon_end();
>
> @@ -267,7 +267,7 @@ static int xts_decrypt(struct blkcipher_
>         aes_xts_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
>                 (u8 *)ctx->key1.key_dec, rounds, blocks,
>                 (u8 *)ctx->key2.key_enc, walk.iv, first);
> -       err = blkcipher_walk_done(desc, &walk, 0);
> +       err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>     }
>     kernel_neon_end();
>