Re: [PATCH 2/3] crypto: crypto_xor - use unaligned accessors for aligned fast path

2018-10-08 Thread Eric Biggers
Hi Ard, On Mon, Oct 08, 2018 at 11:15:53PM +0200, Ard Biesheuvel wrote: > On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS > because the ordinary load/store instructions (ldr, ldrh, ldrb) can > tolerate any misalignment of the memory address. However, load/store > double and …

Re: [PATCH 1/3] crypto: memneq - use unaligned accessors for aligned fast path

2018-10-08 Thread Eric Biggers
Hi Ard, On Mon, Oct 08, 2018 at 11:15:52PM +0200, Ard Biesheuvel wrote: > On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS > because the ordinary load/store instructions (ldr, ldrh, ldrb) can > tolerate any misalignment of the memory address. However, load/store > double and …

[PATCH 0/3] crypto: use unaligned accessors in aligned fast paths

2018-10-08 Thread Ard Biesheuvel
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS behaves a bit counterintuitively on ARM: we set it for architecture revisions v6 and up, which support any alignment for load/store instructions that operate on bytes, halfwords or words. However, load/store double word and load/store multiple instructions …
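[A minimal userspace sketch of the hazard the series addresses, not the kernel code itself: get_unaligned_u64() below approximates the kernel's get_unaligned() via memcpy. Dereferencing a cast byte pointer invites the compiler to merge adjacent loads into ldrd/ldm, which still require 32-bit alignment on ARMv6+, while the memcpy-based accessor lets it pick alignment-safe instructions.]

#include <stdint.h>
#include <string.h>

/* Approximates the kernel's get_unaligned() for a 64-bit load. */
static inline uint64_t get_unaligned_u64(const void *p)
{
        uint64_t v;

        memcpy(&v, p, sizeof(v));   /* compiler emits alignment-safe loads */
        return v;
}

/* Risky pattern: on ARM this may be compiled to ldrd/ldm, which still
 * require 32-bit alignment even though ldr/ldrh/ldrb do not. */
static inline uint64_t load_u64_by_cast(const void *p)
{
        return *(const uint64_t *)p;
}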

[PATCH 1/3] crypto: memneq - use unaligned accessors for aligned fast path

2018-10-08 Thread Ard Biesheuvel
On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS because the ordinary load/store instructions (ldr, ldrh, ldrb) can tolerate any misalignment of the memory address. However, load/store double and load/store multiple instructions (ldrd, ldm) may still only be used on memory …
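[A hedged sketch of the technique as it applies to a memneq-style constant-time compare; the function names are mine, and the real crypto_memneq additionally uses OPTIMIZER_HIDE_VAR barriers to keep the comparison data-independent.]

#include <stddef.h>
#include <stdint.h>
#include <string.h>

static inline uint64_t get_unaligned_u64(const void *p)
{
        uint64_t v;

        memcpy(&v, p, sizeof(v));
        return v;
}

/* Nonzero iff a and b differ; no data-dependent branches. */
static uint64_t memneq_sketch(const void *a, const void *b, size_t n)
{
        const uint8_t *pa = a, *pb = b;
        uint64_t neq = 0;

        /* Word-at-a-time fast path, safe for any pointer alignment. */
        while (n >= sizeof(uint64_t)) {
                neq |= get_unaligned_u64(pa) ^ get_unaligned_u64(pb);
                pa += sizeof(uint64_t);
                pb += sizeof(uint64_t);
                n -= sizeof(uint64_t);
        }
        while (n--)
                neq |= *pa++ ^ *pb++;
        return neq;
}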

[PATCH 2/3] crypto: crypto_xor - use unaligned accessors for aligned fast path

2018-10-08 Thread Ard Biesheuvel
On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS because the ordinary load/store instructions (ldr, ldrh, ldrb) can tolerate any misalignment of the memory address. However, load/store double and load/store multiple instructions (ldrd, ldm) may still only be used on memory …
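[For crypto_xor the same accessor pattern covers stores as well as loads; a minimal sketch, with put_unaligned_u64() approximating the kernel's put_unaligned() and the function name mine.]

#include <stddef.h>
#include <stdint.h>
#include <string.h>

static inline uint64_t get_unaligned_u64(const void *p)
{
        uint64_t v;

        memcpy(&v, p, sizeof(v));
        return v;
}

static inline void put_unaligned_u64(uint64_t v, void *p)
{
        memcpy(p, &v, sizeof(v));
}

/* XOR src into dst a word at a time without assuming alignment. */
static void crypto_xor_sketch(uint8_t *dst, const uint8_t *src, size_t n)
{
        while (n >= sizeof(uint64_t)) {
                put_unaligned_u64(get_unaligned_u64(dst) ^
                                  get_unaligned_u64(src), dst);
                dst += sizeof(uint64_t);
                src += sizeof(uint64_t);
                n -= sizeof(uint64_t);
        }
        while (n--)
                *dst++ ^= *src++;
}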

[PATCH 3/3] crypto: siphash - drop _aligned variants

2018-10-08 Thread Ard Biesheuvel
On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS because the ordinary load/store instructions (ldr, ldrh, ldrb) can tolerate any misalignment of the memory address. However, load/store double and load/store multiple instructions (ldrd, ldm) may still only be used on memory …
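[The siphash patch follows from the same observation: the _aligned and _unaligned variants differed only in how each 64-bit message word was loaded, so once the unaligned accessor is known to be cheap there is nothing left to dispatch on. A sketch of the two loads plus a single absorption loop; names are mine, the SIPROUND mixing is elided, and a little-endian host is assumed for brevity.]

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Approximates get_unaligned_le64() on a little-endian host. */
static inline uint64_t load_le64_any(const void *p)
{
        uint64_t v;

        memcpy(&v, p, sizeof(v));
        return v;
}

/* What the _aligned variant relied on: a plain dereference that
 * obliges the caller to guarantee 8-byte alignment. */
static inline uint64_t load_le64_aligned(const void *p)
{
        return *(const uint64_t *)p;
}

/* One message-absorption loop serves all callers when the load goes
 * through the accessor; on arches with efficient unaligned access it
 * compiles to the same single load as the cast, so the duplicate
 * _aligned code path buys nothing. */
static uint64_t absorb_sketch(const void *data, size_t len)
{
        const uint8_t *p = data;
        uint64_t acc = 0;

        for (; len >= sizeof(uint64_t); p += 8, len -= 8)
                acc ^= load_le64_any(p);        /* stand-in for SIPROUNDs */
        return acc;
}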

[PATCH] crypto: arm64/aes-blk - ensure XTS mask is always loaded

2018-10-08 Thread Ard Biesheuvel
Commit 2e5d2f33d1db ("crypto: arm64/aes-blk - improve XTS mask handling") optimized away some reloads of the XTS mask vector, but failed to take into account that calls into the XTS en/decrypt routines will take a slightly different code path if a single block of input is split across different …
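[The fix itself lives in the arm64 assembly; the following is only a C analogue of the failure mode, with all names illustrative and the tweak arithmetic reduced to 64 bits for brevity. The point it sketches: once a load is hoisted and its result cached across calls, every path that can reach the code consuming it must still establish it, including the rare path taken when a block is split across calls.]

#include <stdint.h>

struct xts_sketch {
        uint64_t mask;          /* stand-in for the cached mask vector */
        int mask_loaded;        /* whether the cache is valid */
};

static void load_mask(struct xts_sketch *s)
{
        s->mask = 0x87;         /* low byte of the GF(2^128) feedback term */
        s->mask_loaded = 1;
}

/* Fast path: assumes the mask was loaded by an earlier call. */
static uint64_t next_tweak_fast(const struct xts_sketch *s, uint64_t t)
{
        return (t << 1) ^ (s->mask & -(t >> 63));
}

/* Robust entry point: the bug was an entry path that reached the
 * fast path without having loaded the mask first. */
static uint64_t next_tweak(struct xts_sketch *s, uint64_t t)
{
        if (!s->mask_loaded)
                load_mask(s);
        return next_tweak_fast(s, t);
}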