On 12 May 2018 at 11:50, Dmitry Vyukov <dvyu...@google.com> wrote:
> On Sat, May 12, 2018 at 11:09 AM, Ard Biesheuvel
> <ard.biesheu...@linaro.org> wrote:
>> (+ Arnd)
>>
>> On 12 May 2018 at 10:43, Dmitry Vyukov <dvyu...@google.com> wrote:
>>> On
ed.
>>> > compiler: gcc (GCC) 7.1.1 20170620
>>> > .config is attached.
>>>
>>>
>>> From suspicious frames I see salsa20_asm_crypt there, so +crypto
>>> maintainers.
>>>
>>
>> Looks like the x86 implementations of Salsa20
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/sha512-ce-core.S | 27 +++-
1 file changed, 21 insertions(+), 6 deletions(-)
Avoid excessive scheduling delays under a preemptible kernel by
yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/crc32-ce-core.S | 40 +++-
1 file changed, 30 insertions(+), 10 deletions(-)
diff
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/sha3-ce-core.S | 77 +---
1 file changed, 50 insertions(+), 27 deletions(-)
Avoid excessive scheduling delays under a preemptible kernel by
yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/ghash-ce-core.S | 113 ++--
arch/arm64/crypto/ghash-ce-glue.c | 28 +++--
2 files c
Avoid excessive scheduling delays under a preemptible kernel by
yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/crct10dif-ce-core.S | 32 +---
1 file changed, 28 insertions(+), 4 deletions(-)
Avoid excessive scheduling delays under a preemptible kernel by
yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/aes-neonbs-core.S | 305 +++-
1 file changed, 170 insertions(+), 135 deletions(-)
Avoid excessive scheduling delays under a preemptible kernel by
yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/aes-ce.S | 15 +-
arch/arm64/crypto/aes-modes.S | 331
2 files change
Avoid excessive scheduling delays under a preemptible kernel by
yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/aes-ce-ccm-core.S | 150 +---
1 file changed, 95 insertions(+), 55 deletions(-)
Avoid excessive scheduling delays under a preemptible kernel by
yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/sha2-ce-core.S | 37 ++--
1 file changed, 26 insertions(+), 11 deletions(-)
diff
Hello Herbert,
These are the patches that depend on the arm64/assembler.h patches that
inadvertently got pulled into the cryptodev tree and reverted shortly
after. Those have now been merged into Linus's tree, and so the
remaining changes can be applied as well. Please apply.
Ard Biesheuvel (10):
Avoid excessive scheduling delays under a preemptible kernel by
yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/sha1-ce-core.S | 42 ++--
1 file changed, 29 insertions(+), 13 deletions(-)
diff
On 25 April 2018 at 14:20, Ard Biesheuvel <ard.biesheu...@linaro.org> wrote:
> In preparation of adding support for the SIMD based arm64 implementation
> of arm64,
SM4 ^^^
> which requires a fallback to non-SIMD code when invoked in
> certain contexts, expose the generic SM4 e
In preparation of adding support for the SIMD based arm64 implementation
of SM4, which requires a fallback to non-SIMD code when invoked in
certain contexts, expose the generic SM4 encrypt and decrypt routines
to other drivers.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
Add support for the SM4 symmetric cipher implemented using the special
SM4 instructions introduced in ARM architecture revision 8.2.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/Kconfig | 6 ++
arch/arm64/crypto/Makefile | 3 +
arch/arm64/
+chaining mode combinations as we
do for AES. This can be added later if desired.
Ard Biesheuvel (2):
crypto: sm4 - export encrypt/decrypt routines to other drivers
crypto: arm64 - add support for SM4 encryption using special
instructions
arch/arm64/crypto/Kconfig | 6 ++
arch/arm64
On 16 March 2018 at 23:57, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Sat, Mar 10, 2018 at 03:21:45PM +0000, Ard Biesheuvel wrote:
>> As reported by Sebastian, the way the arm64 NEON crypto code currently
>> keeps kernel mode NEON enabled across calls i
here: https://lkml.org/lkml/2018/3/8/1379
>>
>> Signed-off-by: Leonard Crestez <leonard.cres...@nxp.com>
>> Cc: <sta...@vger.kernel.org>
>> ---
>
>
>
> Reviewed-by: Masahiro Yamada <yamada.masah...@socionext.com>
>
Acked-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
On 12 March 2018 at 14:38, Vitaly Andrianov wrote:
> Hello,
>
> The Texas Instruments keystone2 out-of-tree kernel uses the
> private_AES_set_encrypt_key() and
> the AES_encrypt() at the crypto HW acceleration driver.
>
> The "crypto: arm/aes - replace bit-sliced OpenSSL NEON
-
>> From: linux-crypto-ow...@vger.kernel.org [mailto:linux-crypto-
>> ow...@vger.kernel.org] On Behalf Of Ard Biesheuvel
>> Sent: Saturday, March 10, 2018 8:52 PM
>> To: linux-crypto@vger.kernel.org
>> Cc: herb...@gondor.apana.org.au; linux-arm-ker...@lists.infradead.org
every 64 bytes and not have an exception for
CBC MAC which yields every 16 bytes)
So unroll the loop by 4. We still cannot perform the AES algorithm in
parallel, but we can at least merge the loads and stores.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/cryp
Test code to force a kernel_neon_end+begin sequence at every yield point,
and wipe the entire NEON state before resuming the algorithm.
---
arch/arm64/include/asm/assembler.h | 33
1 file changed, 33 insertions(+)
diff --git a/arch/arm64/include/asm/assembler.h
, and the code in between is only
executed when the yield path is taken, allowing the context to be preserved.
The third macro takes an optional label argument that marks the resume
path after a yield has been performed.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/inclu
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/aes-ce-ccm-core.S | 150 +---
1 file changed, 95 insertions(+), 55 deletions(-)
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/sha3-ce-core.S | 77 +---
1 file changed, 50 insertions(+), 27 deletions(-)
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/sm3-ce-core.S | 30 +++-
1 file changed, 23 insertions(+), 7 deletions(-)
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/sha2-ce-core.S | 37 ++--
1 file changed, 26 insertions(+), 11 deletions(-)
contexts.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/sha256-glue.c | 36 +---
1 file changed, 23 insertions(+), 13 deletions(-)
diff --git a/arch/arm64/crypto/sha256-glue.c b/arch/arm64/crypto/sha256-glue.c
index b064d925fe2a..e8880c
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/crc32-ce-core.S | 40 +++-
1 file changed, 30 insertions(+), 10 deletions(-)
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/aes-neonbs-core.S | 305 +++-
1 file changed, 170 insertions(+), 135 deletions(-)
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/aes-ce.S | 15 +-
arch/arm64/crypto/aes-modes.S | 331
2
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/sha1-ce-core.S | 42 ++--
1 file changed, 29 insertions(+), 13 deletions(-)
in the stack frame (for locals) and emit
the ldp/stp sequences.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/include/asm/assembler.h | 70
1 file changed, 70 insertions(+)
diff --git a/arch/arm64/include/asm/assembler.h
b/arch/arm64/inclu
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/sha512-ce-core.S | 27 +++-
1 file changed, 21 insertions(+), 6 deletions(-)
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/crct10dif-ce-core.S | 32 +---
1 file changed, 28 insertions(+), 4 deletions(-)
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/ghash-ce-core.S | 113 ++--
arch/arm64/crypto/ghash-ce-glue.c | 28 ++
In order to be able to test yield support under preempt, add a test
vector for CRC-T10DIF that is long enough to take multiple iterations
(and thus possible preemption between them) of the primary loop of the
accelerated x86 and arm64 implementations.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
code, and run the remainder of
the code with kernel mode NEON disabled (and preemption enabled)
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/aes-ce-ccm-glue.c | 47 ++--
1 file changed, 23 insertions(+), 24 deletions(-)
diff --git a/arch
code, and run the remainder of
the code with kernel mode NEON disabled (and preemption enabled)
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/aes-neonbs-glue.c | 36 +---
1 file changed, 17 insertions(+), 19 deletions(-)
diff --git a/arch
code, and run the remainder of
the code with kernel mode NEON disabled (and preemption enabled)
Note that this requires some reshuffling of the registers in the asm
code, because the XTS routines can no longer rely on the registers to
retain their contents between invocations.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
routines yield every 64 bytes and not have
an exception for CBC encrypt which yields every 16 bytes)
So unroll the loop by 4. We still cannot perform the AES algorithm in
parallel, but we can at least merge the loads and stores.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch
INTERLEAVE=4 with inlining disabled for both flavors
of the core AES routines, so let's stick with that, and remove the option
to configure this at build time. This makes the code easier to modify,
which is nice now that we're adding yield support.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
code, and run the remainder of
the code with kernel mode NEON disabled (and preemption enabled)
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/chacha20-neon-glue.c | 12 +---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/
Rutland <mark.rutl...@arm.com>
Cc: linux-rt-us...@vger.kernel.org
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Will Deacon <will.dea...@arm.com>
Cc: Steven Rostedt <rost...@goodmis.org>
Cc: Thomas Gleixner <t...@linutronix.de>
On 6 March 2018 at 12:35, Dave Martin wrote:
> On Mon, Mar 05, 2018 at 11:17:07AM -0800, Eric Biggers wrote:
>> Add a NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
>> for ARM64. This is ported from the 32-bit version. It may be useful on
>> devices with
On 13 February 2018 at 18:57, Eric Biggers <ebigg...@google.com> wrote:
> Hi Ard,
>
> On Tue, Feb 13, 2018 at 11:34:36AM +0000, Ard Biesheuvel wrote:
>> Hi Eric,
>>
>> On 12 February 2018 at 23:52, Eric Biggers <ebigg...@google.com> wrote:
>> > Add
Hi Eric,
On 12 February 2018 at 23:52, Eric Biggers wrote:
> Add an ARM NEON-accelerated implementation of Speck-XTS. It operates on
> 128-byte chunks at a time, i.e. 8 blocks for Speck128 or 16 blocks for
> Speck64. Each 128-byte chunk goes through XTS preprocessing, then
On 12 February 2018 at 13:52, Jinbum Park <jinb.pa...@gmail.com> wrote:
> Move the AES inverse S-box to the .rodata section
> where it is safe from abuse by speculation.
>
> Signed-off-by: Jinbum Park <jinb.pa...@gmail.com>
Acked-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
On 1 February 2018 at 10:21, Geert Uytterhoeven <ge...@linux-m68k.org> wrote:
> Create a new function attribute __optimize, which allows to specify an
> optimization level on a per-function basis.
>
> Signed-off-by: Geert Uytterhoeven <ge...@linux-m68k.org>
Acked-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
ae791c3 ("crypto: sha3-generic - rewrite KECCAK transform to
> help the compiler optimize")
> Signed-off-by: Geert Uytterhoeven <ge...@linux-m68k.org>
Acked-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
> ---
> crypto/sha3_generic.c | 2 +-
> 1 file changed, 1 i
com>
Suggested-by: Arnd Bergmann <a...@arndb.de>
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
crypto/sha3_generic.c | 218 +++-
1 file changed, 118 insertions(+), 100 deletions(-)
diff --git a/crypto/sha3_generic.c b/crypto/sha3_generic.c
On 22 January 2018 at 20:51, Arnd Bergmann <a...@arndb.de> wrote:
> On Mon, Jan 22, 2018 at 3:54 PM, Arnd Bergmann <a...@arndb.de> wrote:
>> On Fri, Jan 19, 2018 at 1:04 PM, Ard Biesheuvel
>> I'm doing a little more randconfig build testing here now, will write
Implement the various flavours of SHA3 using the new optional
EOR3/RAX1/XAR/BCAX instructions introduced by ARMv8.2.
Tested-by: Steve Capper <steve.cap...@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/Kconfig| 6 +
arch/arm64/cry
In preparation of exposing the generic SHA3 implementation to other
versions as a fallback, simplify the code, and remove an inconsistency
in the output handling (endian swabbing rsizw words of state before
writing the output does not make sense)
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
All current SHA3 test cases are smaller than the SHA3 block size, which
means not all code paths are being exercised. So add a new test case to
each variant, and make one of the existing test cases chunked.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
crypto/testmgr.h
To allow accelerated implementations to fall back to the generic
routines, e.g., in contexts where a SIMD based implementation is
not allowed to run, expose the generic SHA3 init/update/final
routines to other modules.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
Implement the Chinese SM3 secure hash algorithm using the new
special instructions that have been introduced as an optional
extension in ARMv8.2.
Tested-by: Steve Capper <steve.cap...@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/Kconfig
Add a missing symbol export that prevents this code to be built as a
module. Also, move the round constant table to the .rodata section,
and use a more optimized version of the core transform.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/sha512-ce-core.S
sha3 - Add SHA-3 hash algorithm")
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
crypto/sha3_generic.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/crypto/sha3_generic.c b/crypto/sha3_generic.c
index 7e8ed96236ce..a68be626017c 100644
--- a
eric.c
@@ -5,6 +5,7 @@
* http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf
*
* SHA-3 code by Jeff Garzik <j...@garzik.org>
+ * Ard Biesheuvel <ard.biesheu...@linaro.org>
*
* This program is free software; you can redistribute it and/or modify it
* under th
for the recently queued SHA-512 arm64 code.
Ard Biesheuvel (8):
crypto/generic: sha3 - fixes for alignment and big endian operation
crypto/generic: sha3: rewrite KECCAK transform to help the compiler
optimize
crypto/generic: sha3 - simplify code
crypto/generic: sha3 - export init/update/final
On 14 January 2018 at 16:41, Ard Biesheuvel <ard.biesheu...@linaro.org> wrote:
> Add an implementation of SHA3 to arm64 using the new special instructions,
> and another one using scalar instructions but coded in assembler (#2)
>
> In preparation of that, fix a bug in the SHA3
56
> Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83651
> Fixes: 148b974deea9 ("crypto: aes-generic - build with -Os on gcc-7+")
> Signed-off-by: Arnd Bergmann <a...@arndb.de>
Acked-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
> ---
> v2: fix a typo in th
On 18 January 2018 at 11:41, Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Wed, Jan 10, 2018 at 12:11:35PM +0000, Ard Biesheuvel wrote:
>> Prevent inadvertently creating speculative gadgets by moving literal data
>> into the .rodata section.
>>
>>
On 16 January 2018 at 08:41, Steve Capper <steve.cap...@arm.com> wrote:
> On Fri, Jan 12, 2018 at 03:13:56PM +0000, Ard Biesheuvel wrote:
>> On 12 January 2018 at 13:15, Ard Biesheuvel <ard.biesheu...@linaro.org>
>> wrote:
>> > Add an implementation of
Implement the Chinese SM3 secure hash algorithm using the new
special instructions that have been introduced as an optional
extension in ARMv8.2.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/Kconfig | 5 ++
arch/arm64/crypto/Makefile | 3 +
On 16 January 2018 at 08:16, Steve Capper <steve.cap...@arm.com> wrote:
> On Tue, Jan 09, 2018 at 06:23:02PM +0000, Ard Biesheuvel wrote:
>> Implement the SHA-512 using the new special instructions that have
>> been introduced as an optional extension in ARMv8.2.
>
&
100644
--- a/crypto/sha3_generic.c
+++ b/crypto/sha3_generic.c
@@ -5,6 +5,7 @@
* http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf
*
* SHA-3 code by Jeff Garzik <j...@garzik.org>
+ * Ard Biesheuvel <ard.biesheu...@linaro.org>
*
* This program is free software; yo
On 15 January 2018 at 05:53, Chris Moore <mo...@free.fr> wrote:
> Hi,
>
> Le 14/01/2018 à 17:41, Ard Biesheuvel a écrit :
>>
>> Ensure that the input is byte swabbed before injecting it into the
>
>
> Nitpick : s/swabbed/swapped/
>
Thanks Chris - byte sw
Ensure that the input is byte swabbed before injecting it into the
SHA3 transform. Use the get_unaligned() accessor for this so that
we don't perform unaligned access inadvertently on architectures
that do not support that.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
All current SHA3 test cases are smaller than the SHA3 block size, which
means not all code paths are being exercised. So add a new test case to
each variant, and make one of the existing test cases chunked.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
crypto/testmgr.h
for the arm64 module. Instead, provide
a special arm64 version to use as a fallback when the instructions are
not available or when executing in a context that does not allow SIMD
Drop patches that simplify the generic SHA3 and make it reusable by
other modules.
Ard Biesheuvel (3):
@ 2 GHz 101.6 cpb 11.8 cpb 8.6x
The ARMv8.2 version has only been tested against emulators, so no
performance data is available yet.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/Kconfig | 4 +
arch/arm64/crypto/Makefile
On 12 January 2018 at 13:15, Ard Biesheuvel <ard.biesheu...@linaro.org> wrote:
> Add an implementation of SHA3 to arm64 using the new special instructions (#4)
>
> In preparation of that, fix a bug in the SHA3 and refactor it a bit so it
> can serve as a fallback for the other co
Add an implementation of SHA3 to arm64 using the new special instructions (#4)
In preparation of that, fix a bug in the SHA3 and refactor it a bit so it
can serve as a fallback for the other code. Also, add some new test vectors
to get better test coverage.
Ard Biesheuvel (5):
crypto/generic
To allow accelerated implementations to fall back to the generic
routines, e.g., in contexts where a SIMD based implementation is
not allowed to run, expose the generic SHA3 init/update/final
routines to other modules.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
Ensure that the input is byte swabbed before injecting it into the
SHA3 transform. Use the get_unaligned() accessor for this so that
we don't perform unaligned access inadvertently on architectures
that do not support that.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
In preparation of exposing the generic SHA3 implementation to other
versions as a fallback, simplify the code, and remove an inconsistency
in the output handling (endian swabbing rsizw words of state before
writing the output does not make sense)
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
Implement the various flavours of SHA3 using the new optional
EOR3/RAX1/XAR/BCAX instructions introduced by ARMv8.2.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/Kconfig| 6 +
arch/arm64/crypto/Makefile | 3 +
arch/arm64/crypto/sha3-ce-
All current SHA3 test cases are smaller than the SHA3 block size, which
means not all code paths are being exercised. So add a new test case to
each variant, and make one of the existing test cases chunked.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
crypto/testmgr.h
- remaining ADRP instructions are redirected via a veneer that performs
the load using an unaffected movn/movk sequence.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/Kconfig | 4 +-
arch/arm64/Makefile | 1 -
arch/arm64/inclu
Move the SHA2 round constant table to the .rodata section where it is
safe from being exploited by speculative execution.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/sha2-ce-core.S | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git
Move the S-boxes and some other literals to the .rodata section where
it is safe from being exploited by speculative execution.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/aes-neon.S | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff
Move the AES inverse S-box to the .rodata section where it is safe from
abuse by speculation.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/aes-cipher-core.S | 19 ++-
1 file changed, 10 insertions(+), 9 deletions(-)
diff --git a/arch
Load the four SHA-1 round constants using immediates rather than literal
pool entries, to avoid having executable data that may be exploitable
under speculation attacks.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/sha1-ce-core.S | 20 +
Move CRC32 literal data to the .rodata section where it is safe from
being exploited by speculative execution.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/crc32-ce-core.S | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch
Move the CRC-T10DIF literal data to the .rodata section where it is
safe from being exploited by speculative execution.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/crct10dif-ce-core.S | 17 +
1 file changed, 9 insertions(+), 8 deletions(-)
ve gadgets inadvertently.
So set -mpc-relative-literal-loads only for modules, and only if the
A53 erratum is enabled.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/Makefile | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/Makef
- #7 update the crypto asm code to move sboxes and round constant
tables (which may or may not be hiding 'interesting' opcodes) from .text
to .rodata
Ard Biesheuvel (7):
arm64: kernel: avoid executable literal pools
arm64/crypto: aes-cipher: move S-box to .rodata section
arm64/crypto: aes-neon
Implement the SHA-512 using the new special instructions that have
been introduced as an optional extension in ARMv8.2.
Signed-off-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
---
arch/arm64/crypto/Kconfig | 6 ++
arch/arm64/crypto/Makefile | 3 +
arch/arm64/crypto/
On 9 January 2018 at 08:31, Riku Voipio wrote:
> Hi,
>
> Loading omap_rng module on McBin causes hangup (in about 9/10 times).
> Looking at /proc/interrupts it seems the interrupt starts running like
> crazy, and after a while the whole system is unresponsive. This with
>
, as later testing
>found.
>
> b) disabling UBSAN on this file or all ciphers, as suggested by Ard
>Biesheuvel. This would lead to massively better crypto performance in
>UBSAN-enabled kernels and avoid the stack usage, but there is a concern
>over whether we should
On 3 January 2018 at 16:37, Arnd Bergmann <a...@arndb.de> wrote:
> On Fri, Dec 22, 2017 at 4:47 PM, Ard Biesheuvel
> <ard.biesheu...@linaro.org> wrote:
>> On 21 December 2017 at 13:47, PrasannaKumar Muralidharan
>> <prasannatsmku...@gmail.com> wrote:
&
On 21 December 2017 at 13:47, PrasannaKumar Muralidharan
<prasannatsmku...@gmail.com> wrote:
> Hi Ard,
>
> On 21 December 2017 at 17:52, Ard Biesheuvel <ard.biesheu...@linaro.org>
> wrote:
>> On 21 December 2017 at 10:20, Arnd Bergmann <a...@arndb.de> wrote:
On 21 December 2017 at 10:20, Arnd Bergmann wrote:
> On Wed, Dec 20, 2017 at 10:46 PM, Jakub Jelinek wrote:
>> On Wed, Dec 20, 2017 at 09:52:05PM +0100, Arnd Bergmann wrote:
>>> diff --git a/crypto/aes_generic.c b/crypto/aes_generic.c
>>> index
On 20 December 2017 at 20:52, Arnd Bergmann wrote:
> While testing other changes, I discovered that gcc-7.2.1 produces badly
> optimized code for aes_encrypt/aes_decrypt. This is especially true when
> CONFIG_UBSAN_SANITIZE_ALL is enabled, where it leads to extremely
> large stack
On 8 December 2017 at 23:11, Eric Biggers <ebigge...@gmail.com> wrote:
> On Fri, Dec 08, 2017 at 10:54:24PM +0000, Ard Biesheuvel wrote:
>> >> Note that there are two conflicting conventions for what inputs ChaCha
>> >> takes.
>> >> The original pape
On 8 December 2017 at 22:42, Ard Biesheuvel <ard.biesheu...@linaro.org> wrote:
> On 8 December 2017 at 22:17, Eric Biggers <ebigge...@gmail.com> wrote:
>> On Fri, Dec 08, 2017 at 11:55:02AM +0000, Ard Biesheuvel wrote:
>>> As pointed out by Eric [0], the way RFC7539
On 8 December 2017 at 22:17, Eric Biggers <ebigge...@gmail.com> wrote:
> On Fri, Dec 08, 2017 at 11:55:02AM +0000, Ard Biesheuvel wrote:
>> As pointed out by Eric [0], the way RFC7539 was interpreted when creating
>> our implementation of ChaCha20 creates a risk of IV reuse w