On Thu, 26 Sep 2019 at 16:03, Pascal Van Leeuwen
wrote:
>
> > -----Original Message-----
> > From: Ard Biesheuvel
> > Sent: Thursday, September 26, 2019 3:16 PM
> > To: Pascal Van Leeuwen
> > Cc: Jason A. Donenfeld ; Linux Crypto Mailing List > cry..
On Thu, 26 Sep 2019 at 15:06, Pascal Van Leeuwen
wrote:
...
> >
> > My preference would be to address this by permitting per-request keys
> > in the AEAD layer. That way, we can instantiate the transform only
> > once, and just invoke it with the appropriate key on the hot path (and
> > avoid any
On Thu, 26 Sep 2019 at 13:06, Ard Biesheuvel wrote:
>
> On Thu, 26 Sep 2019 at 00:15, Linus Torvalds
> wrote:
> >
> > On Wed, Sep 25, 2019 at 9:14 AM Ard Biesheuvel
> > wrote:
> > >
> > > Replace the chacha20poly1305() library calls with invocations
On Thu, 26 Sep 2019 at 10:59, Jason A. Donenfeld wrote:
>
...
>
> Instead what we’ve wound up with in this series is a Frankenstein’s
> monster of Zinc, which appears to have basically the same goal as
> Zinc, and even much of the same implementation just moved to a
> different directory, but then
On Thu, 26 Sep 2019 at 00:15, Linus Torvalds
wrote:
>
> On Wed, Sep 25, 2019 at 9:14 AM Ard Biesheuvel
> wrote:
> >
> > Replace the chacha20poly1305() library calls with invocations of the
> > RFC7539 AEAD, as implemented by the generic chacha20poly1305 template.
>
On Wed, 25 Sep 2019 at 23:01, Linus Torvalds
wrote:
>
> On Wed, Sep 25, 2019 at 9:14 AM Ard Biesheuvel
> wrote:
> >
> > config ARCH_SUPPORTS_INT128
> > bool
> > + depends on !$(cc-option,-D__SIZEOF_INT128__=0)
>
> Hmm. Does this actually wor
---
drivers/net/Kconfig | 6 +++---
drivers/net/wireguard/cookie.c | 4 ++--
drivers/net/wireguard/messages.h | 6 +++---
3 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index c26aef673538..3bd4dc662392 100644
--- a/drivers/ne
ludes with Kconfig based object selection
- drop simd handling and support for per-arch versions ]
Signed-off-by: Ard Biesheuvel
---
crypto/Kconfig |3 +
include/crypto/curve25519.h | 28 +
lib/crypto/Makefile |6 +
lib/crypto/curve25519-fiat32.
Taken from
https://git.zx2c4.com/WireGuard/commit/src?id=3120425f69003be287cb2d308f89c7a6a0335ff0
Reported-by: Bruno Wolff III
---
drivers/net/wireguard/netlink.c | 17 -
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/drivers/net/wireguard/netlink.c b/drivers/net/w
://www.wireguard.com/quickstart/#demo-server)
Signed-off-by: Ard Biesheuvel
---
drivers/net/wireguard/noise.c| 34 -
drivers/net/wireguard/noise.h| 3 +-
drivers/net/wireguard/queueing.h | 5 +-
drivers/net/wireguard/receive.c | 51
drivers/net/wireguard/send.c
into the header file.
Information: https://blake2.net/
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Samuel Neves
Co-developed-by: Samuel Neves
[ardb: move from lib/zinc to lib/crypto and remove simd handling]
Signed-off-by: Ard Biesheuvel
---
crypto/Kconfig|3 +
incl
Add the usual init/update/final library routines for the Poly1305
keyed hash library. Since this will be the external interface of
the library, move the poly1305_core_* routines to the internal
header (and update the users to refer to it where needed)
Signed-off-by: Ard Biesheuvel
---
crypto
Move the core Poly1305 transformation into a separate library in
lib/crypto so it can be used by other subsystems without going
through the entire crypto API.
Signed-off-by: Ard Biesheuvel
---
arch/x86/crypto/poly1305_glue.c| 2 +-
crypto/Kconfig | 4 +
crypto
luded in the
first place.
Signed-off-by: Ard Biesheuvel
---
crypto/ecc.c | 2 +-
init/Kconfig | 1 +
lib/ubsan.c | 2 +-
lib/ubsan.h | 2 +-
4 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/crypto/ecc.c b/crypto/ecc.c
index dfe114bc0c4a..6e6aab6c987c 100644
--- a/crypto/ecc.c
Add a test case to the RFC7539 (non-ESP) test vector array that
exercises the newly added code path that may optimize away one
invocation of the shash when the assoclen is a multiple of the
Poly1305 block size.
Signed-off-by: Ard Biesheuvel
---
crypto/testmgr.h | 45
1 file
c60798952f,
and already contains all the changes required to build it as part of a
Linux kernel module.
[0] https://github.com/dot-asm/cryptogams
Co-developed-by: Andy Polyakov
Signed-off-by: Andy Polyakov
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/Kconfig |3 +
arch/
request structure on the stack, removing the need for
per-packet heap allocations on the en/decryption hot path.
Signed-off-by: Ard Biesheuvel
---
crypto/chacha20poly1305.c | 51
1 file changed, 32 insertions(+), 19 deletions(-)
diff --git a/crypto/chacha20poly1305.c b/crypto
c60798952f,
and already contains all the changes required to build it as part of a
Linux kernel module.
[0] https://github.com/dot-asm/cryptogams
Co-developed-by: Andy Polyakov
Signed-off-by: Andy Polyakov
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/Kconfig | 4 +
ash to shash for
the Poly1305 transformation. At the same time, switch to using the
ChaCha library to generate the Poly1305 key so that we don't have to
call into the [potentially asynchronous] skcipher twice, with one call
only operating on 32 bytes of data.
Signed-off-by: Ard Biesheuvel
---
.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/chacha-neon-glue.c | 2 +-
arch/arm64/crypto/chacha-neon-glue.c | 2 +-
arch/x86/crypto/chacha_glue.c| 2 +-
crypto/chacha_generic.c | 42 ++--
include/crypto/chacha.h | 37
ind)
Cc: Herbert Xu
Cc: David Miller
Cc: Greg KH
Cc: Linus Torvalds
Cc: Jason A. Donenfeld
Cc: Samuel Neves
Cc: Dan Carpenter
Cc: Arnd Bergmann
Cc: Eric Biggers
Cc: Andy Lutomirski
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Catalin Marinas
Ard Biesheuvel (15):
crypto: shash - add
In order to reduce the number of invocations of the RFC7539 template
into the Poly1305 driver, implement the new internal .update_from_sg
method that allows the driver to amortize the cost of FPU preserve/
restore sequences over a larger chunk of input.
Signed-off-by: Ard Biesheuvel
---
arch
shash to process each scatterlist entry with a discrete
update() call. This will be used later in the SIMD accelerated Poly1305
to amortize SIMD begin()/end() calls over the entire input.
Signed-off-by: Ard Biesheuvel
---
crypto/ahash.c | 18 +++
crypto/shash.c
> > Refname:refs/heads/master
> > Web:
> > https://git.kernel.org/torvalds/c/724ecd3c0eb7040d423b22332a60d097e2666820
> > Author: Ard Biesheuvel
> > AuthorDate: Tue Jul 2 21:41:20 2019 +0200
> > Committer: Herbert Xu
> > CommitDate:
From: Ard Biesheuvel
The NEON/Crypto Extensions based AES implementation for 32-bit ARM
can be built in a kernel that targets ARMv6 CPUs and higher, even
though the actual code will not be able to run on that generation.
It allows a portable image to be generated that will
use the
From: Ard Biesheuvel
The ARM accelerated AES driver depends on the new AES library for
its non-SIMD fallback so express this in its Kconfig declaration.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm/crypto/Kconfig b/arch
that the
'armv7-a' -march argument is considered to be compatible with the
ARM crypto extensions. Instead, we should use armv8-a, which does
allow the crypto extensions to be enabled.
Signed-off-by: Ard Biesheuvel
---
crypto/Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
di
On Fri, 13 Sep 2019 at 17:17, Pascal Van Leeuwen
wrote:
>
> > -----Original Message-----
> > From: Ard Biesheuvel
> > Sent: Friday, September 13, 2019 5:27 PM
> > To: Pascal van Leeuwen
> > Cc: open list:HARDWARE RANDOM NUMBER GENERATOR CORE
>
On Fri, 13 Sep 2019 at 16:06, Pascal van Leeuwen wrote:
>
> This patch adds support for the authenc(hmac(sha1),cbc(des)) aead
>
> Signed-off-by: Pascal van Leeuwen
Please make sure your code is based on cryptodev/master before sending
it to the list.
--
Ard.
> ---
> drivers/crypto/inside-sec
tial block must come at the very end and not
> in the middle.
>
> This is exactly what chunksize is meant to describe so this patch
> changes blocksize to chunksize.
>
> Fixes: 8ff590903d5f ("crypto: algif_skcipher - User-space...")
> Signed-off-by: Herbert Xu
Acke
This series reimplements gcm(aes) for arm64 systems that support the
AES and 64x64->128 PMULL/PMULL2 instructions. Patch #1 adds a test
case and patch #2 updates the driver.
Ard Biesheuvel (2):
crypto: testmgr - add another gcm(aes) testcase
crypto: arm64/gcm-ce - implement 4 way interle
Add an additional gcm(aes) test case that triggers the code path in
the new arm64 driver that deals with tail blocks whose size is not
a multiple of the block size, and where the size of the preceding
input is a multiple of 64 bytes.
Signed-off-by: Ard Biesheuvel
---
crypto/testmgr.h | 192
: Ard Biesheuvel
---
arch/arm64/crypto/ghash-ce-core.S | 501 ++--
arch/arm64/crypto/ghash-ce-glue.c | 293 +---
2 files changed, 467 insertions(+), 327 deletions(-)
diff --git a/arch/arm64/crypto/ghash-ce-core.S
b/arch/arm64/crypto/ghash-ce-core.S
index 410e8afcf5a7
On Mon, 9 Sep 2019 at 13:34, Gilad Ben-Yossef wrote:
>
> On Mon, Sep 9, 2019 at 3:20 PM Ard Biesheuvel
> wrote:
> >
> > On Sun, 8 Sep 2019 at 09:04, Uri Shir wrote:
> > >
> > > In XTS encryption/decryption the plaintext byte size
> > > can be
On Sun, 8 Sep 2019 at 09:04, Uri Shir wrote:
>
> In XTS encryption/decryption the plaintext byte size
> can be >= AES_BLOCK_SIZE. This patch enables the AES-XTS ciphertext
> stealing implementation in ccree driver.
>
> Signed-off-by: Uri Shir
> ---
> drivers/crypto/ccree/cc_cipher.c | 16 ++--
On Fri, 6 Sep 2019 at 18:56, Herbert Xu wrote:
>
> On Fri, Sep 06, 2019 at 06:32:29PM -0700, Ard Biesheuvel wrote:
> >
> > The point is that doing
> >
> > skcipher_walk_virt(&walk, ...);
> > skcipher_walk_done(&walk, -EFOO);
> >
> > may clob
On Fri, 6 Sep 2019 at 18:19, Herbert Xu wrote:
>
> On Fri, Sep 06, 2019 at 05:52:56PM -0700, Ard Biesheuvel wrote:
> >
> > With this change, we still copy out the output in the
> > SKCIPHER_WALK_COPY or SKCIPHER_WALK_SLOW cases. I'd expect the failure
> >
distinguishes between the two cases by checking whether
> walk->nbytes is zero or not. For internal callers, we now set
> walk->nbytes to zero prior to the call. For external callers,
> walk->nbytes has always been non-zero (as zero is used to indicate
> the termination of a
The RFC4106 key derivation code instantiates an AES cipher transform
to encrypt only a single block before it is freed again. Switch to
the new AES library which is more suitable for such use cases.
Signed-off-by: Ard Biesheuvel
---
arch/x86/crypto/aesni-intel_glue.c | 17 ++---
1
On Wed, 4 Sep 2019 at 07:21, Harald Freudenberger wrote:
>
> On 22.08.19 12:24, Ard Biesheuvel wrote:
> > Fix a typo XTS_BLOCKSIZE -> XTS_BLOCK_SIZE, causing the build to
> > break.
> >
> > Signed-off-by: Ard Biesheuvel
> > ---
> > Apologies for the
t and sha224_init from
> lib/crypto/sha256.c. An added advantage of this is that it gives these
> 2 functions coverage by the crypto selftests.
>
For the series,
Acked-by: Ard Biesheuvel
Thanks Hans.
On Wed, 4 Sep 2019 at 05:25, Pascal Van Leeuwen
wrote:
>
> > -----Original Message-----
> > From: Ard Biesheuvel
> > Sent: Wednesday, September 4, 2019 2:11 PM
> > To: Pascal Van Leeuwen
> > Cc: YueHaibing ; antoine.ten...@bootlin.com;
> > herb...@g
On Wed, 4 Sep 2019 at 04:57, Pascal Van Leeuwen
wrote:
>
>
> > -----Original Message-----
> > From: linux-crypto-ow...@vger.kernel.org
> > On Behalf Of
> > YueHaibing
> > Sent: Tuesday, September 3, 2019 3:45 AM
> > To: antoine.ten...@bootlin.com; herb...@gondor.apana.org.au;
> > da...@davemlof
Replace the vector load from memory sequence with a simple instruction
sequence to compose the tweak vector directly.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/aes-ce-core.S | 9 +++--
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/arch/arm/crypto/aes-ce-core.S b/arch
-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-glue.c | 112 +++-
1 file changed, 59 insertions(+), 53 deletions(-)
diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index ca0c84d56cba..4154bb93a85b 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64
partial block are presented
at the same time. The glue code is updated so that the common case of
operating on a sector or page is mostly as before. When CTS is needed,
the walk is split up into two pieces, unless the entire input is covered
by a single step.
Signed-off-by: Ard Biesheuvel
---
arch
it can operate on at
least 7 blocks of input at the same time, let's reuse the alternate
path we are adding for CTS to process any data tail whose size is
not a multiple of 128 bytes.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-ce.S | 3 +
arch/arm64/crypto/aes-g
Since the CTS-CBC code completes synchronously, there is no point in
keeping part of the scratch data it uses in the request context, so
move it to the stack instead.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-glue.c | 61 +---
1 file changed, 26 insertions(+), 35
Optimize away one of the tbl instructions in the decryption path,
which turns out to be unnecessary.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-modes.S | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/crypto/aes-modes.S b/arch/arm64/crypto/aes
The AES round keys are arrays of u32s in native endianness now, so
update the function prototypes accordingly.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/aes-ce-core.S | 18 -
arch/arm/crypto/aes-ce-glue.c | 40 ++--
2 files changed, 29 insertions(+), 29 deletions
Replace the vector load from memory sequence with a simple instruction
sequence to compose the tweak vector directly.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-neonbs-core.S | 9 +++--
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/crypto/aes-neonbs
After starting a skcipher walk, the only way to ensure that all
resources it has tied up are released is to complete it. In some
cases, it will be useful to be able to abort a walk cleanly after
it has started, so add this ability to the skcipher walk API.
Signed-off-by: Ard Biesheuvel
Replace the vector load from memory sequence with a simple instruction
sequence to compose the tweak vector directly.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/aes-neonbs-core.S | 8 +++-
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/arch/arm/crypto/aes-neonbs-core.S
Import the AES-XTS test vectors from IEEE publication P1619/D16
that exercise the ciphertext stealing part of the XTS algorithm,
which we haven't supported in the Linux kernel implementation up
till now.
Tested-by: Pascal van Leeuwen
Signed-off-by: Ard Biesheuvel
---
crypto/testmgr.h
implementations working
on 3,4,5 or 8 AES blocks in parallel) lengths.
This code was kindly donated to the public domain by the author.
Link:
https://lore.kernel.org/linux-crypto/mn2pr20mb29739591e1a3e54e7a8a8e18ca...@mn2pr20mb2973.namprd20.prod.outlook.com/
Signed-off-by: Ard Biesheuvel
---
crypto
e are -stable candidates AFAICT.
Changes since v1:
- simplify skcipher_walk_abort() - pass -ECANCELED instead of walk->nbytes into
skcipher_walk_done() so that the latter does not require any changes (#8)
- rebased onto cryptodev/master
Ard Biesheuvel (16):
crypto: arm/aes - fix round key pro
with the 64-bit driver, and to ensure that we can reach optimum
performance when running under emulation on high end 64-bit cores.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/aes-ce-core.S | 263 +++-
1 file changed, 144 insertions(+), 119 deletions(-)
diff --git a/arch/arm/c
Update the AES-XTS implementation based on AES instructions so that it
can deal with inputs whose size is not a multiple of the cipher block
size. This is part of the original XTS specification, but was never
implemented before in the Linux kernel.
Signed-off-by: Ard Biesheuvel
---
arch/arm
Update the AES-XTS implementation based on NEON instructions so that it
can deal with inputs whose size is not a multiple of the cipher block
size. This is part of the original XTS specification, but was never
implemented before in the Linux kernel.
Signed-off-by: Ard Biesheuvel
---
arch/arm
more responsive system.
After this change, we can also permit the cipher_walk infrastructure to
sleep, so set the 'atomic' parameter to skcipher_walk_virt() to false as
well.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/aes-ce-glue.c | 47 ++--
arch/arm/crypto/
Instead of relying on the CTS template to wrap the accelerated CBC
skcipher, implement the ciphertext stealing part directly.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/aes-ce-core.S | 85 +
arch/arm/crypto/aes-ce-glue.c | 188 ++--
2 files changed, 256
On Mon, 2 Sep 2019 at 07:19, YueHaibing wrote:
>
> If CONFIG_PCI is not set, building fails:
>
> drivers/crypto/inside-secure/safexcel.c: In function safexcel_request_ring_irq:
> drivers/crypto/inside-secure/safexcel.c:944:9: error: implicit declaration of
> function pci_irq_vector;
> did you mea
On Fri, 30 Aug 2019 at 11:03, Herbert Xu wrote:
>
> On Wed, Aug 21, 2019 at 05:32:44PM +0300, Ard Biesheuvel wrote:
> > After starting a skcipher walk, the only way to ensure that all
> > resources it has tied up are released is to complete it. In some
> > cases, it will
On Fri, 23 Aug 2019 at 11:21, Elon Zhang wrote:
>
>
> On 8/23/2019 15:33, Ard Biesheuvel wrote:
> > On Fri, 23 Aug 2019 at 10:10, Elon Zhang wrote:
> >> Hi Ard,
> >>
> >> I will try to fix this bug.
> > Good
> >
> >> Furthermore, I
s
if you use the one that ships with the kernel, which is not always the
case.
> On 8/20/2019 23:45, Ard Biesheuvel wrote:
> > Hello all,
> >
> > While playing around with the fuzz tests on kernelci.org (which has a
> > couple of rk3288 based boards for boot testing)
than [16, 512, 1024, 2048, 4096],
just drop the check against the block size.
Cc: Tom Lendacky
Cc: Gary Hook
Signed-off-by: Ard Biesheuvel
---
drivers/crypto/ccp/ccp-crypto-aes-xts.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/drivers/crypto/ccp/ccp-crypto-aes-xts.c
b/drivers/crypto/ccp
Fix build breakage caused by the DES library refactor.
Fixes: d4b90dbc8578 ("crypto: n2/des - switch to new verification routines")
Signed-off-by: Ard Biesheuvel
---
drivers/crypto/n2_core.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/crypto/n
Fix a typo XTS_BLOCKSIZE -> XTS_BLOCK_SIZE, causing the build to
break.
Signed-off-by: Ard Biesheuvel
---
Apologies for the sloppiness.
Herbert, could we please merge this before cryptodev hits -next?
arch/s390/crypto/aes_s390.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
d
Import the AES-XTS test vectors from IEEE publication P1619/D16
that exercise the ciphertext stealing part of the XTS algorithm,
which we haven't supported in the Linux kernel implementation up
till now.
Tested-by: Pascal van Leeuwen
Signed-off-by: Ard Biesheuvel
---
crypto/testmgr.h
Update the AES-XTS implementation based on NEON instructions so that it
can deal with inputs whose size is not a multiple of the cipher block
size. This is part of the original XTS specification, but was never
implemented before in the Linux kernel.
Signed-off-by: Ard Biesheuvel
---
arch/arm
implementations working
on 3,4,5 or 8 AES blocks in parallel) lengths.
This code was kindly donated to the public domain by the author.
Link:
https://lore.kernel.org/linux-crypto/mn2pr20mb29739591e1a3e54e7a8a8e18ca...@mn2pr20mb2973.namprd20.prod.outlook.com/
Signed-off-by: Ard Biesheuvel
---
crypto
Update the AES-XTS implementation based on AES instructions so that it
can deal with inputs whose size is not a multiple of the cipher block
size. This is part of the original XTS specification, but was never
implemented before in the Linux kernel.
Signed-off-by: Ard Biesheuvel
---
arch/arm
Instead of relying on the CTS template to wrap the accelerated CBC
skcipher, implement the ciphertext stealing part directly.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/aes-ce-core.S | 85 +
arch/arm/crypto/aes-ce-glue.c | 188 ++--
2 files changed, 256
partial block are presented
at the same time. The glue code is updated so that the common case of
operating on a sector or page is mostly as before. When CTS is needed,
the walk is split up into two pieces, unless the entire input is covered
by a single step.
Signed-off-by: Ard Biesheuvel
---
arch
Replace the vector load from memory sequence with a simple instruction
sequence to compose the tweak vector directly.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/aes-neonbs-core.S | 8 +++-
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/arch/arm/crypto/aes-neonbs-core.S
Optimize away one of the tbl instructions in the decryption path,
which turns out to be unnecessary.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-modes.S | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/crypto/aes-modes.S b/arch/arm64/crypto/aes
Replace the vector load from memory sequence with a simple instruction
sequence to compose the tweak vector directly.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/aes-ce-core.S | 9 +++--
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/arch/arm/crypto/aes-ce-core.S b/arch
it can operate on at
least 7 blocks of input at the same time, let's reuse the alternate
path we are adding for CTS to process any data tail whose size is
not a multiple of 128 bytes.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-ce.S | 3 +
arch/arm64/crypto/aes-g
more responsive system.
After this change, we can also permit the cipher_walk infrastructure to
sleep, so set the 'atomic' parameter to skcipher_walk_virt() to false as
well.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/aes-ce-glue.c | 47 ++--
arch/arm/crypto/
'ongoing
maintenance' category. None of these are -stable candidates AFAICT.
Ard Biesheuvel (16):
crypto: arm/aes - fix round key prototypes
crypto: arm/aes-ce - yield the SIMD unit between scatterwalk steps
crypto: arm/aes-ce - switch to 4x interleave
crypto: arm/aes-ce - replace tweak
-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-glue.c | 112 +++-
1 file changed, 59 insertions(+), 53 deletions(-)
diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index ca0c84d56cba..4154bb93a85b 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64
The AES round keys are arrays of u32s in native endianness now, so
update the function prototypes accordingly.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/aes-ce-core.S | 18 -
arch/arm/crypto/aes-ce-glue.c | 40 ++--
2 files changed, 29 insertions(+), 29 deletions
Replace the vector load from memory sequence with a simple instruction
sequence to compose the tweak vector directly.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-neonbs-core.S | 9 +++--
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/crypto/aes-neonbs
with the 64-bit driver, and to ensure that we can reach optimum
performance when running under emulation on high end 64-bit cores.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/aes-ce-core.S | 263 +++-
1 file changed, 144 insertions(+), 119 deletions(-)
diff --git a/arch/arm/c
After starting a skcipher walk, the only way to ensure that all
resources it has tied up are released is to complete it. In some
cases, it will be useful to be able to abort a walk cleanly after
it has started, so add this ability to the skcipher walk API.
Signed-off-by: Ard Biesheuvel
Since the CTS-CBC code completes synchronously, there is no point in
keeping part of the scratch data it uses in the request context, so
move it to the stack instead.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-glue.c | 61 +---
1 file changed, 26 insertions(+), 35
Hello all,
While playing around with the fuzz tests on kernelci.org (which has a
couple of rk3288 based boards for boot testing), I noticed that the
rk3288 cbc mode driver is still broken (both AES and DES fail).
For instance, one of the runs failed with
alg: skcipher: cbc-aes-rk encryption tes
On Tue, 20 Aug 2019 at 13:24, Krzysztof Kozlowski wrote:
>
> On Mon, 19 Aug 2019 at 16:24, Ard Biesheuvel
> wrote:
> >
> > Align the s5p ctr(aes) implementation with other implementations
> > of the same mode, by setting the block size to 1.
> >
On Mon, 19 Aug 2019 at 22:38, Hans de Goede wrote:
>
> Hi,
>
> On 19-08-19 17:08, Ard Biesheuvel wrote:
> > On Sat, 17 Aug 2019 at 17:24, Hans de Goede wrote:
> >>
> >> Hi All,
> >>
> >> Here is v2 of my patch series refactoring the current 2
On Sat, 17 Aug 2019 at 17:24, Hans de Goede wrote:
>
> Hi All,
>
> Here is v2 of my patch series refactoring the current 2 separate SHA256
> C implementations into 1 and put it into a separate library.
>
> There are 3 reasons for this:
>
> 1) Remove the code duplication of having 2 separate implem
, which is something we usually try to avoid in response
to situations that can be triggered by unprivileged users.
Signed-off-by: Ard Biesheuvel
---
drivers/crypto/s5p-sss.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c
Align the s5p ctr(aes) implementation with other implementations
of the same mode, by setting the block size to 1.
Signed-off-by: Ard Biesheuvel
---
drivers/crypto/s5p-sss.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c
Fix a couple of issues in the s5p crypto driver that were caught in fuzz
testing.
Cc: Krzysztof Kozlowski
Cc: Vladimir Zapolskiy
Cc: Kamil Konieczny
Cc: linux-samsung-...@vger.kernel.org
Ard Biesheuvel (2):
crypto: s5p - deal gracefully with bogus input sizes
crypto: s5p - use correct
.
Reported-by: Nathan Chancellor
Signed-off-by: Ard Biesheuvel
---
crypto/aegis128-neon-inner.c | 38 ++--
1 file changed, 19 insertions(+), 19 deletions(-)
diff --git a/crypto/aegis128-neon-inner.c b/crypto/aegis128-neon-inner.c
index ed55568afd1b..f05310ca22aa 100644
--- a/crypto
's use it for this purpose as well. If ciphertext
stealing use cases ever become a bottleneck, we can always revisit this.
Signed-off-by: Ard Biesheuvel
---
drivers/crypto/vmx/aes_xts.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/crypto/vmx/aes_xts.c b/drivers/c
s use it for this purpose as well. If ciphertext
stealing use cases ever become a bottleneck, we can always revisit this.
Signed-off-by: Ard Biesheuvel
---
arch/s390/crypto/aes_s390.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/s390/crypto/aes_s390.c b/arch
Align the x86 code with the generic XTS template, which now supports
ciphertext stealing as described by the IEEE XTS-AES spec P1619.
Tested-by: Stephan Mueller
Signed-off-by: Ard Biesheuvel
---
v2: - move 'decrypt' flag from glue ctx struct to function prototype
- remove redu
On Fri, 16 Aug 2019 at 13:22, Stephan Mueller wrote:
>
> Am Freitag, 16. August 2019, 12:10:21 CEST schrieb Ard Biesheuvel:
>
> Hi Ard,
>
> > Align the x86 code with the generic XTS template, which now supports
> > ciphertext stealing as described by the IEEE XT
On Fri, 16 Aug 2019 at 13:10, Ard Biesheuvel wrote:
>
> Align the x86 code with the generic XTS template, which now supports
> ciphertext stealing as described by the IEEE XTS-AES spec P1619.
>
> Signed-off-by: Ard Biesheuvel
Oops, $SUBJECT should be x86/xts rather than aes/xts
Align the x86 code with the generic XTS template, which now supports
ciphertext stealing as described by the IEEE XTS-AES spec P1619.
Signed-off-by: Ard Biesheuvel
---
arch/x86/crypto/aesni-intel_glue.c | 1 +
arch/x86/crypto/camellia_aesni_avx2_glue.c | 1 +
arch/x86/crypto