Re: [PATCH 5/5] crypto: chacha20 - Fix keystream alignment for chacha20_block()

2017-11-22 Thread Ard Biesheuvel
On 22 November 2017 at 21:29, Eric Biggers wrote: > On Wed, Nov 22, 2017 at 08:51:57PM +, Ard Biesheuvel wrote: >> On 22 November 2017 at 19:51, Eric Biggers wrote: >> > From: Eric Biggers >> > >> > When chacha20_block() outputs

Re: [PATCH 5/5] crypto: chacha20 - Fix keystream alignment for chacha20_block()

2017-11-22 Thread Eric Biggers
On Wed, Nov 22, 2017 at 08:51:57PM +, Ard Biesheuvel wrote: > On 22 November 2017 at 19:51, Eric Biggers wrote: > > From: Eric Biggers > > > > When chacha20_block() outputs the keystream block, it uses 'u32' stores > > directly. However, the callers

Re: [PATCH 5/5] crypto: chacha20 - Fix keystream alignment for chacha20_block()

2017-11-22 Thread Ard Biesheuvel
On 22 November 2017 at 19:51, Eric Biggers wrote: > From: Eric Biggers > > When chacha20_block() outputs the keystream block, it uses 'u32' stores > directly. However, the callers (crypto/chacha20_generic.c and > drivers/char/random.c) declare the

Re: [PATCH 4/5] crypto: x86/chacha20 - Remove cra_alignmask

2017-11-22 Thread Ard Biesheuvel
On 22 November 2017 at 19:51, Eric Biggers wrote: > From: Eric Biggers > > Now that the generic ChaCha20 implementation no longer needs a > cra_alignmask, the x86 one doesn't either -- given that the x86 > implementation doesn't need the alignment

Re: [PATCH 3/5] crypto: chacha20 - Remove cra_alignmask

2017-11-22 Thread Ard Biesheuvel
On 22 November 2017 at 19:51, Eric Biggers wrote: > From: Eric Biggers > > Now that crypto_chacha20_setkey() and crypto_chacha20_init() use the > unaligned access macros and crypto_xor() also accepts unaligned buffers, > there is no need to have a

Re: [PATCH 2/5] crypto: chacha20 - Use unaligned access macros when loading key and IV

2017-11-22 Thread Ard Biesheuvel
On 22 November 2017 at 19:51, Eric Biggers wrote: > From: Eric Biggers > > The generic ChaCha20 implementation has a cra_alignmask of 3, which > ensures that the key passed into crypto_chacha20_setkey() and the IV > passed into crypto_chacha20_init() are

Re: [PATCH 1/5] crypto: chacha20 - Fix unaligned access when loading constants

2017-11-22 Thread Ard Biesheuvel
On 22 November 2017 at 19:51, Eric Biggers wrote: > From: Eric Biggers > > The four 32-bit constants for the initial state of ChaCha20 were loaded > from a char array which is not guaranteed to have the needed alignment. > > Fix it by just assigning the

[PATCH 3/5] crypto: chacha20 - Remove cra_alignmask

2017-11-22 Thread Eric Biggers
From: Eric Biggers Now that crypto_chacha20_setkey() and crypto_chacha20_init() use the unaligned access macros and crypto_xor() also accepts unaligned buffers, there is no need to have a cra_alignmask set for chacha20-generic. Signed-off-by: Eric Biggers

[PATCH 2/5] crypto: chacha20 - Use unaligned access macros when loading key and IV

2017-11-22 Thread Eric Biggers
From: Eric Biggers The generic ChaCha20 implementation has a cra_alignmask of 3, which ensures that the key passed into crypto_chacha20_setkey() and the IV passed into crypto_chacha20_init() are 4-byte aligned. However, these functions are also called from the ARM and ARM64

[PATCH 1/5] crypto: chacha20 - Fix unaligned access when loading constants

2017-11-22 Thread Eric Biggers
From: Eric Biggers The four 32-bit constants for the initial state of ChaCha20 were loaded from a char array which is not guaranteed to have the needed alignment. Fix it by just assigning the constants directly instead. Signed-off-by: Eric Biggers ---

[PATCH 5/5] crypto: chacha20 - Fix keystream alignment for chacha20_block()

2017-11-22 Thread Eric Biggers
From: Eric Biggers When chacha20_block() outputs the keystream block, it uses 'u32' stores directly. However, the callers (crypto/chacha20_generic.c and drivers/char/random.c) declare the keystream buffer as a 'u8' array, which is not guaranteed to have the needed

[PATCH 0/5] crypto: chacha20 - Alignment fixes

2017-11-22 Thread Eric Biggers
From: Eric Biggers This series fixes potentially unaligned memory accesses when loading the initial state, key, and IV for ChaCha20, and when outputting each keystream block. It also removes the cra_alignmask from the generic and x86 ChaCha20 implementations, once it is no

[PATCH 4/5] crypto: x86/chacha20 - Remove cra_alignmask

2017-11-22 Thread Eric Biggers
From: Eric Biggers Now that the generic ChaCha20 implementation no longer needs a cra_alignmask, the x86 one doesn't either -- given that the x86 implementation doesn't need the alignment itself. Signed-off-by: Eric Biggers ---


Re: x509 parsing bug + fuzzing crypto in the userspace

2017-11-22 Thread Stephan Mueller
On Wednesday, 22 November 2017, 18:03:14 CET, Dmitry Vyukov wrote: Hi Dmitry, > On Wed, Nov 22, 2017 at 5:54 PM, Stephan Mueller wrote: > > On Tuesday, 21 November 2017, 21:46:28 CET, Eric Biggers wrote: > > > > Hi Eric, > > > >> (There is probably more to improve

Re: x509 parsing bug + fuzzing crypto in the userspace

2017-11-22 Thread Stephan Mueller
On Wednesday, 22 November 2017, 11:44:51 CET, Dmitry Vyukov wrote: Hi Dmitry, > > Thanks! I think we can incorporate this into syzkaller. > > One question: what's the relation between alg names and type ("aead", > "hash", "rng", "skcipher")? If you refer to AF_ALG, then the following

Re: x509 parsing bug + fuzzing crypto in the userspace

2017-11-22 Thread Dmitry Vyukov
On Wed, Nov 22, 2017 at 5:54 PM, Stephan Mueller wrote: > On Tuesday, 21 November 2017, 21:46:28 CET, Eric Biggers wrote: > > Hi Eric, > >> >> (There is probably more to improve for AF_ALG besides the algorithm names; >> this is just what I happened to notice for now.) >

Re: x509 parsing bug + fuzzing crypto in the userspace

2017-11-22 Thread Stephan Mueller
On Tuesday, 21 November 2017, 21:46:28 CET, Eric Biggers wrote: Hi Eric, > > (There is probably more to improve for AF_ALG besides the algorithm names; > this is just what I happened to notice for now.) Just grepping may not cover all possibilities. Attached is a script that I use to

Re: [PATCH] crypto/arm64: aes-ce-cipher - move assembler code to .S file

2017-11-22 Thread Mark Rutland
On Wed, Nov 22, 2017 at 12:04:58PM +, Mark Rutland wrote: > On Wed, Nov 22, 2017 at 10:12:17AM +, Ard Biesheuvel wrote: > > On 22 November 2017 at 10:05, Alex Matveev wrote: > > > This is better than my simple fix, thank you. > > > > > > Out of curiosity, why doesn't

loop-AES-v3.7l fails to compile with kernel 4.14.1

2017-11-22 Thread Helmut Jarausch
/var/tmp/portage/sys-fs/loop-aes-3.7l/work/loop-AES-v3.7l/tmp-d-kbuild/patched-loop.c:530:7: error: 'struct bio' has no member named 'bi_bdev'; did you mean 'bi_iter'?

Re: [PATCH] crypto/arm64: aes-ce-cipher - move assembler code to .S file

2017-11-22 Thread Mark Rutland
On Wed, Nov 22, 2017 at 10:12:17AM +, Ard Biesheuvel wrote: > On 22 November 2017 at 10:05, Alex Matveev wrote: > > This is better than my simple fix, thank you. > > > > Out of curiosity, why doesn't NEON code use barrier() to prevent > > reordering? > > Because barrier()

Re: [PATCH] crypto/arm64: aes-ce-cipher - move assembler code to .S file

2017-11-22 Thread Ard Biesheuvel
On 22 November 2017 at 10:05, Alex Matveev wrote: > This is better than my simple fix, thank you. > > Out of curiosity, why doesn't NEON code use barrier() to prevent > reordering? > Because barrier() affects ordering of memory accesses, not register accesses. > On Tue, 21

Re: [PATCH] crypto/arm64: aes-ce-cipher - move assembler code to .S file

2017-11-22 Thread Alex Matveev
This is better than my simple fix, thank you. Out of curiosity, why doesn't NEON code use barrier() to prevent reordering? On Tue, 21 Nov 2017 13:40:17 + Ard Biesheuvel wrote: > Most crypto drivers involving kernel mode NEON take care to put the > code that

Re: [PATCH] crypto: arm64/aes - do not call crypto_unregister_skcipher twice on error

2017-11-22 Thread Ard Biesheuvel
Hello Corentin, On 22 November 2017 at 08:08, Corentin Labbe wrote: > When a cipher fails to register in aes_init(), the error path goes through > aes_exit() then crypto_unregister_skciphers(). > Since aes_exit() also calls crypto_unregister_skcipher,

[PATCH] crypto: arm64/aes - do not call crypto_unregister_skcipher twice on error

2017-11-22 Thread Corentin Labbe
When a cipher fails to register in aes_init(), the error path goes through aes_exit() then crypto_unregister_skciphers(). Since aes_exit() also calls crypto_unregister_skcipher(), this triggers a "refcount_t: underflow; use-after-free." Signed-off-by: Corentin Labbe ---