On Tue, Jan 17, 2017 at 12:20:02PM +0100, Ondrej Mosnáček wrote:
> 2017-01-13 15:29 GMT+01:00 Herbert Xu:
> > What if the driver had hardware support for generating these IVs?
> > With your scheme this cannot be supported at all.
>
> That's true... I'm starting to
On Tue, Dec 27, 2016 at 11:40:23PM +0100, Stephan Müller wrote:
> The variable ip is defined to be a __u64 which is always 8 bytes on any
> architecture. Thus, the check for sizeof(ip) > 4 will always be true.
>
> As the check happens in a hot code path, remove the branch.
The fact that it's a
On Tue, Dec 27, 2016 at 11:39:57PM +0100, Stephan Müller wrote:
> The random_ready callback mechanism is intended to replicate the
> getrandom system call behavior to in-kernel users. As the getrandom
> system call unblocks with crng_init == 1, trigger the random_ready
> wakeup call at the same
Patch #1 is a fix for the CBC chaining issue that was discussed on the
mailing list. The driver itself is queued for v4.11, so this fix can go
right on top.
Patches #2 - #6 clear the cra_alignmasks of various drivers: all NEON
capable CPUs can perform unaligned accesses, and the advantage of
Remove the unnecessary alignmask: it is much more efficient to deal with
the misalignment in the core algorithm than relying on the crypto API to
copy the data to a suitably aligned buffer.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/chacha20-neon-glue.c | 1 -
Update the new bitsliced NEON AES implementation in CTR mode to return
the next IV back to the skcipher API client. This is necessary for
chaining to work correctly.
Note that this is only done if the request is a round multiple of the
block size, since otherwise, chaining is impossible anyway.
Using simple adrp/add pairs to refer to the AES lookup tables exposed by
the generic AES driver (which could be loaded far away from this driver
when KASLR is in effect) was unreliable at module load time before commit
41c066f2c4d4 ("arm64: assembler: make adr_l work in modules under KASLR"),
The new bitsliced NEON implementation of AES uses a fallback in two
places: CBC encryption (which is strictly sequential, whereas this
driver can only operate efficiently on 8 blocks at a time), and the
XTS tweak generation, which involves encrypting a single AES block
with a different key
Remove the unnecessary alignmask: it is much more efficient to deal with
the misalignment in the core algorithm than relying on the crypto API to
copy the data to a suitably aligned buffer.
Signed-off-by: Ard Biesheuvel
---
NOTE: this won't apply unless 'crypto:
Shuffle some instructions around in the __hround macro to shave off
0.1 cycles per byte on Cortex-A57.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-cipher-core.S | 52 +++-
1 file changed, 19 insertions(+), 33 deletions(-)
diff --git
The non-bitsliced AES implementation using the NEON is highly sensitive
to micro-architectural details, and, as it turns out, the Cortex-A53 on
the Raspberry Pi 3 is a core that can benefit from this code, given that
its scalar AES performance is abysmal (32.9 cycles per byte).
The new bitsliced
Remove the unnecessary alignmask: it is much more efficient to deal with
the misalignment in the core algorithm than relying on the crypto API to
copy the data to a suitably aligned buffer.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-ce-ccm-glue.c | 1 -
1
Remove the unnecessary alignmask: it is much more efficient to deal with
the misalignment in the core algorithm than relying on the crypto API to
copy the data to a suitably aligned buffer.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/chacha20-neon-glue.c | 1 -
Update the ARMv8 Crypto Extensions and the plain NEON AES implementations
in CBC and CTR modes to return the next IV back to the skcipher API client.
This is necessary for chaining to work correctly.
Note that for CTR, this is only done if the request is a round multiple of
the block size, since
2017-01-13 15:29 GMT+01:00 Herbert Xu:
> What if the driver had hardware support for generating these IVs?
> With your scheme this cannot be supported at all.
That's true... I'm starting to think that this isn't really a good
idea. I was mainly trying to keep the
Hi Binoy,
2017-01-16 9:37 GMT+01:00 Binoy Jayan:
> The initial goal of our proposal was to process the encryption requests
> with the maximum possible block sizes with a hardware which has automated
> iv generation capabilities. But when it is done in software, and if
On Tue, Jan 17, 2017 at 09:20:11AM +, Ard Biesheuvel wrote:
>
> So to be clear, it is part of the API that after calling
> crypto_skcipher_encrypt(req), and completing the request, req->iv
> should contain a value that could potentially be used to encrypt
> additional data? That sounds highly
On Tue, Jan 17, 2017 at 09:30:30AM +, Ard Biesheuvel wrote:
>
> Got a link?
http://lkml.iu.edu/hypermail/linux/kernel/1506.2/00346.html
> OK, so that means chaining skcipher_set_crypt() calls, where req->iv
> is passed on between requests? Are there chaining modes beyond
> cts(cbc)
On Mon, Jan 16, 2017 at 09:16:35AM +, Ard Biesheuvel wrote:
> Since the skcipher conversion in commit 0605c41cc53c ("crypto:
> cts - Convert to skcipher"), the cts code tacitly assumes that
> the underlying CBC encryption transform performed on the first
> part of the plaintext returns an IV
On 17 January 2017 at 09:25, Herbert Xu wrote:
> On Tue, Jan 17, 2017 at 09:20:11AM +, Ard Biesheuvel wrote:
>>
>> So to be clear, it is part of the API that after calling
>> crypto_skcipher_encrypt(req), and completing the request, req->iv
>> should contain a
On 17 January 2017 at 09:11, Herbert Xu wrote:
> On Mon, Jan 16, 2017 at 09:16:35AM +, Ard Biesheuvel wrote:
>> Since the skcipher conversion in commit 0605c41cc53c ("crypto:
>> cts - Convert to skcipher"), the cts code tacitly assumes that
>> the underlying CBC