http://git.openssl.org/gitweb/?p=openssl.git;a=commit;h=1a9d60d2
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm/crypto/sha1-armv4-large.S |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/crypto/sha1-armv4-large.S
b/arch/arm/crypto/sha1-armv4-large.S
index 92c6eed
Ard Biesheuvel (2):
crypto: move ablk_helper out of arch/x86
arm64: add support for AES using ARMv8 Crypto Extensions
arch/arm64/Makefile| 8 +-
arch/arm64/crypto/Makefile | 12 +
arch/arm64/crypto/aesce-cbc.S | 58 +
arch/arm64/crypto
Move the ablk_helper code out of arch/x86 so it can be reused
by other architectures. The only x86-specific dependency was
a call to irq_fpu_usable(); this has been factored out and moved
to crypto/ablk_helper_x86.c
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/x86/crypto
This adds ARMv8 Crypto Extensions based implementations of
AES in CBC, CTR and XTS mode.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/Makefile | 8 +-
arch/arm64/crypto/Makefile | 12 ++
arch/arm64/crypto/aesce-cbc.S| 58 +++
arch/arm64
v2:
- whitespace fix
- split into two patches so that the first one applies cleanly to the ARM/ARM64
trees as well
- rebased onto cryptodev/master
Ard Biesheuvel (2):
crypto: create generic version of ablk_helper
crypto: move x86 to the generic version of ablk_helper
arch/x86/crypto
Move all users of ablk_helper under x86/ to the generic version
and delete the x86 specific version.
Acked-by: Jussi Kivilinna jussi.kivili...@iki.fi
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/x86/crypto/Makefile | 1 -
arch/x86/crypto/ablk_helper.c
Create a generic version of ablk_helper so it can be reused
by other architectures.
Acked-by: Jussi Kivilinna jussi.kivili...@iki.fi
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
crypto/Kconfig | 4 ++
crypto/Makefile | 1 +
crypto/ablk_helper.c
Move all users of ablk_helper under x86/ to the generic version
and delete the x86 specific version.
Acked-by: Jussi Kivilinna jussi.kivili...@iki.fi
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/x86/crypto/Makefile | 1 -
arch/x86/crypto/ablk_helper.c
v3:
- added generic and x86 versions of asm/simd.h containing may_use_simd(), and
use it to decide whether to take the sync or the async path
v2:
- whitespace fix
- split into two patches so that the first one applies cleanly to the ARM/ARM64
trees as well
- rebased onto cryptodev/master
Ard
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm/include/asm/Kbuild | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm/include/asm/Kbuild b/arch/arm/include/asm/Kbuild
index d3db398..6577b8a 100644
--- a/arch/arm/include/asm/Kbuild
+++ b/arch/arm/include/asm/Kbuild
Create a generic version of ablk_helper so it can be reused
by other architectures.
Acked-by: Jussi Kivilinna jussi.kivili...@iki.fi
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
crypto/Kconfig | 4 ++
crypto/Makefile | 1 +
crypto/ablk_helper.c
Ard Biesheuvel (4):
crypto: create generic version of ablk_helper
ARM: pull in asm/simd.h from asm-generic
ARM: move AES typedefs and function prototypes to separate header
ARM: add support for bit sliced AES using NEON instructions
arch/arm/crypto/Makefile |6 +-
arch/arm/crypto
Put the struct definitions for AES keys and the asm function prototypes in a
separate header and export the asm functions from the module.
This allows other drivers to use them directly.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm/crypto/aes_glue.c | 22
On 22 Sep 2013, at 12:05, Jussi Kivilinna jussi.kivili...@iki.fi wrote:
On 20.09.2013 21:46, Ard Biesheuvel wrote:
Create a generic version of ablk_helper so it can be reused
by other architectures.
Acked-by: Jussi Kivilinna jussi.kivili...@iki.fi
Signed-off-by: Ard Biesheuvel
On 22 September 2013 13:12, Jussi Kivilinna jussi.kivili...@iki.fi wrote:
[...]
Decryption can probably be made faster by implementing InvMixColumns slightly
differently: instead of implementing the inverse MixColumns matrix directly, use a
preprocessing step followed by MixColumns, as described in
http://git.openssl.org/gitweb/?p=openssl.git;a=commit;h=6f6a6130
This series still depends on commit a62b01cd (crypto: create generic version of
ablk_helper) which I omitted this time but which can be found in the cryptodev
tree or in linux-next.
Ard Biesheuvel (3):
ARM: pull in asm
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm/include/asm/Kbuild | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm/include/asm/Kbuild b/arch/arm/include/asm/Kbuild
index d3db398..6577b8a 100644
--- a/arch/arm/include/asm/Kbuild
+++ b/arch/arm/include/asm/Kbuild
Put the struct definitions for AES keys and the asm function prototypes in a
separate header and export the asm functions from the module.
This allows other drivers to use them directly.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm/crypto/aes_glue.c | 22
On 4 October 2013 19:48, Will Deacon will.dea...@arm.com wrote:
On Thu, Oct 03, 2013 at 10:59:23PM +0100, Ard Biesheuvel wrote:
Note to reviewers:
Reviewing the file aesbs-core.S may be a bit overwhelming, so if there are
any
questions or concerns, please refer the file bsaes-armv7.pl which
On 4 October 2013 20:34, Nicolas Pitre nicolas.pi...@linaro.org wrote:
On Fri, 4 Oct 2013, Will Deacon wrote:
[...]
Why do you consider it unsuitable to ship the perl script with the kernel?
Perl 5 is already documented as a build dependency in Documentation/Changes
Do you have an example of
the autobuilder failures.
arch/arm/crypto/bsaes-armv7.pl |2 +-
The .S_shipped file produced by this script should be updated at the same time.
Acked-by: Ard Biesheuvel ard.biesheu...@linaro.org
Regards,
Ard.
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/arm/crypto
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/Makefile | 1 +
arch/arm64/crypto/Makefile| 13 ++
arch/arm64/crypto/aes-ce-cipher.c | 257 ++
crypto/Kconfig| 6 +
4 files changed, 277
through .cia_enc_interleave and .cia_dec_interleave.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
include/linux/crypto.h | 4
1 file changed, 4 insertions(+)
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index b92eadf92d72..4f09a10a4efa 100644
--- a/include/linux
a benchmark, but CTR and XTS are other obvious
candidates for the treatment.
I have included my arm64 AES cipher implementation for reference.
Ard Biesheuvel (3):
crypto: add interleave option to cipher_alg
crypto: take interleave into account for CBC decryption
arm64: add Crypto Extensions based core
As CBC decryption can be executed in parallel, take the cipher alg's
preferred interleave into account when decrypting data.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
crypto/cbc.c | 109 ---
1 file changed, 82 insertions
On 7 February 2014 03:23, Herbert Xu herb...@gondor.apana.org.au wrote:
On Thu, Feb 06, 2014 at 01:25:01PM +0100, Ard Biesheuvel wrote:
My apologies if this has been discussed/debated before on linux-crypto.
When working on accelerated crypto for ARM and arm64, I noticed that many
On 7 February 2014 10:23, Herbert Xu herb...@gondor.apana.org.au wrote:
On Fri, Feb 07, 2014 at 08:30:26AM +0100, Ard Biesheuvel wrote:
I agree that it would be trivial for cbc(%s) to probe for ecb(%s)
before settling on using plain '%s'.
But how to probe for an /accelerated/ ecb(%s), i.e
On 7 February 2014 10:44, Herbert Xu herb...@gondor.apana.org.au wrote:
On Fri, Feb 07, 2014 at 10:42:14AM +0100, Ard Biesheuvel wrote:
Another example is bit sliced AES like the implementation in
arch/arm/crypto. It is 45% faster than the ordinary ARM asm
implementation, but its natural
This adds support for a synchronous implementation of AES in CCM mode
using ARMv8 Crypto Extensions, using NEON registers q0 - q5.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
Hi all,
I am posting this for review/RFC. The main topic for feedback is the way
I have used an inner
On 25 February 2014 08:02, Herbert Xu herb...@gondor.apana.org.au wrote:
On Tue, Feb 11, 2014 at 09:21:45AM +0100, Ard Biesheuvel wrote:
This adds support for a synchronous implementation of AES in CCM mode
using ARMv8 Crypto Extensions, using NEON registers q0 - q5.
Signed-off-by: Ard
On 25 February 2014 08:16, Herbert Xu herb...@gondor.apana.org.au wrote:
On Tue, Feb 25, 2014 at 08:12:36AM +0100, Ard Biesheuvel wrote:
Do you have any comments specifically about using an inner blkcipher
instance to implement the aead?
Indeed, the inner block cipher looks superfluous since
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/Makefile | 1 +
arch/arm64/crypto/Makefile| 13 ++
arch/arm64/crypto/aes-ce-cipher.c | 382 ++
crypto/Kconfig| 6 +
4 files changed, 402
for the chunk.
Anyway, no performance numbers yet. I will post back once I produce any.
--
Ard.
Ard Biesheuvel (3):
crypto: update generic ECB's driver_name to 'ecb_generic'
crypto: use ECB to implement CBC decryption
arm64: add Crypto Extensions based core AES cipher and 4-way ECB
arch
of generic ECB to 'ecb_generic(%s)'.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
crypto/ecb.c | 12
1 file changed, 12 insertions(+)
diff --git a/crypto/ecb.c b/crypto/ecb.c
index 935cfef4aa84..46a6a61fbcb9 100644
--- a/crypto/ecb.c
+++ b/crypto/ecb.c
@@ -134,6 +134,12
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
crypto/cbc.c | 234 +++
1 file changed, 221 insertions(+), 13 deletions(-)
diff --git a/crypto/cbc.c b/crypto/cbc.c
index 61ac42e1e32b..7fa22ea155c8 100644
--- a/crypto/cbc.c
+++ b/crypto
On 25 February 2014 10:08, Herbert Xu herb...@gondor.apana.org.au wrote:
On Tue, Feb 25, 2014 at 08:21:22AM +0100, Ard Biesheuvel wrote:
For the authenticate-only data, this is manageable as you are only
dealing with input, but when dealing with both in- and output, as in
the core of CCM
that allow these data members (IV size, alignmask, etc.) to be supplied directly.
Suggestions for better names than blkcipher_walk_init_raw and
blkcipher_walk_virt_raw are highly appreciated.
Ard Biesheuvel (3):
crypto: remove direct blkcipher_walk dependency on transform
crypto: allow blkcipher
This adds the functions blkcipher_walk_init_raw and blkcipher_walk_virt_raw,
which allow the caller to initialize the walk struct data members directly.
This allows non-blkcipher uses (e.g., AEADs) of the blkcipher walk API.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
crypto
This adds support for a synchronous implementation of AES in CCM mode
using ARMv8 Crypto Extensions, using NEON registers q0 - q5.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/Makefile| 1 +
arch/arm64/crypto/Makefile | 12 ++
arch/arm64
In order to allow other uses of the blkcipher walk API than the blkcipher
algos themselves, this patch copies some of the transform data members to the
walk struct so the transform is only accessed at walk init time.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
crypto/blkcipher.c
This adds the function blkcipher_aead_walk_virt_block, which allows the caller
to use the blkcipher walk API to handle the input and output scatterlists.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
crypto/blkcipher.c | 14 ++
include/crypto/algapi.h | 4
2
This adds support for a synchronous implementation of AES in CCM mode
using ARMv8 Crypto Extensions, using NEON registers q0 - q5.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/Makefile| 1 +
arch/arm64/crypto/Makefile | 12 ++
arch/arm64
In order to allow other uses of the blkcipher walk API than the blkcipher
algos themselves, this patch copies some of the transform data members to the
walk struct so the transform is only accessed at walk init time.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
crypto/blkcipher.c
transform.
Ard Biesheuvel (3):
crypto: remove direct blkcipher_walk dependency on transform
crypto: allow blkcipher walks over AEAD data
arm64: add support for AES in CCM mode using Crypto Extensions
arch/arm64/Makefile| 1 +
arch/arm64/crypto/Makefile | 12 ++
arch
On 4 March 2014 15:46, Herbert Xu herb...@gondor.apana.org.au wrote:
On Tue, Mar 04, 2014 at 01:28:37PM +0800, Ard Biesheuvel wrote:
I think this is a better approach than the one I proposed before. This time,
I have only added a single function specifically for use by aeads
On 4 March 2014 15:53, Herbert Xu herb...@gondor.apana.org.au wrote:
On Tue, Mar 04, 2014 at 03:51:11PM +0800, Ard Biesheuvel wrote:
Is there anything else required before you can take these patches?
Note that the first one should go through the arm64 tree, and may need
further review
This implementation keeps the 64 bytes of workspace in registers rather than
on the stack, eliminating most of the loads and stores, and reducing the
instruction count by about 25%.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
Hello all,
No performance numbers I am allowed
On 17 March 2014 22:18, Marek Vasut ma...@denx.de wrote:
On Friday, March 14, 2014 at 04:02:33 PM, Ard Biesheuvel wrote:
This implementation keeps the 64 bytes of workspace in registers rather
than on the stack, eliminating most of the loads and stores, and reducing
the instruction count
This patch adds support for the SHA-224 and SHA-256 hash algorithms using the
NEON based SHA-256 instructions that were introduced in ARM v8.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
Again, this patch depends on the FPSIMD optimization patches that I have posted
to the LAKML
() does not use any particular SSE features and is not
expected to become a performance bottleneck.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
I suppose this should be marked for stable as well?
arch/x86/crypto/ghash-clmulni-intel_asm.S | 29 -
arch
On 27 March 2014 12:36, Herbert Xu herb...@gondor.apana.org.au wrote:
On Thu, Mar 27, 2014 at 12:29:00PM +0100, Ard Biesheuvel wrote:
The GHASH setkey() function uses SSE registers but fails to call
kernel_fpu_begin()/kernel_fpu_end(). Instead of adding these calls, and
then having to deal
On 27 March 2014 12:46, Ard Biesheuvel ard.biesheu...@linaro.org wrote:
On 27 March 2014 12:36, Herbert Xu herb...@gondor.apana.org.au wrote:
On Thu, Mar 27, 2014 at 12:29:00PM +0100, Ard Biesheuvel wrote:
The GHASH setkey() function uses SSE registers but fails to call
kernel_fpu_begin
On 24 March 2014 21:36, Marek Vasut ma...@denx.de wrote:
On Thursday, March 20, 2014 at 03:48:06 PM, Ard Biesheuvel wrote:
This patch adds support for the SHA-224 and SHA-256 hash algorithms using
the NEON based SHA-256 instructions that were introduced in ARM v8.
Signed-off-by: Ard
() does not use any particular SSE features and is not
expected to become a performance bottleneck.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
Acked-by: H. Peter Anvin h...@linux.intel.com
Fixes: 0e1227d356e9b (crypto: ghash - Add PCLMULQDQ accelerated implementation)
---
Changes since
On 1 April 2014 13:23, kbuild test robot fengguang...@intel.com wrote:
tree: git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6.git
master
head: 8ceee72808d1ae3fb191284afc2257a2be964725
commit: 8ceee72808d1ae3fb191284afc2257a2be964725 [60/60] crypto:
ghash-clmulni-intel -
On 1 April 2014 14:37, Ard Biesheuvel ard.biesheu...@linaro.org wrote:
On 1 April 2014 13:23, kbuild test robot fengguang...@intel.com wrote:
tree: git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6.git
master
head: 8ceee72808d1ae3fb191284afc2257a2be964725
commit
This adds a test case for each of SHA-1, SHA-224 and SHA-256 with a plaintext
size of 64 bytes, which is exactly the block size. The reason is that some
implementations may use a different code path for inputs that are an exact
multiple of the block size.
---
Just some trivial test vectors I have
: sparse: cast to
restricted __be64
To: Ard Biesheuvel ard.biesheu...@linaro.org
Cc: linux-crypto@vger.kernel.org linux-crypto@vger.kernel.org,
kbuild test robot fengguang...@intel.com
On Tue, Apr 01, 2014 at 02:37:20PM +0200, Ard Biesheuvel wrote:
On 1 April 2014 13:23, kbuild test robot fengguang
On 4 April 2014 14:25, Herbert Xu herb...@gondor.apana.org.au wrote:
On Tue, Apr 01, 2014 at 08:48:24PM +0800, Herbert Xu wrote:
On Tue, Apr 01, 2014 at 02:37:20PM +0200, Ard Biesheuvel wrote:
On 1 April 2014 13:23, kbuild test robot fengguang...@intel.com wrote:
tree:
git
This adds test cases for SHA-1, SHA-224, SHA-256 and AES-CCM with an input size
that is an exact multiple of the block size. The reason is that some
implementations use a different code path for these cases.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
This is the same patch as I
On 11 April 2014 18:03, gre...@linuxfoundation.org wrote:
On Fri, Apr 04, 2014 at 10:11:19AM +0200, Ard Biesheuvel wrote:
Greg,
This pertains to commit 8ceee72808d1 (crypto: ghash-clmulni-intel -
use C implementation for setkey()) that has been pulled by Linus
This is a repost of the arm64 crypto patches that I have posted to the LAKML
over the past months. They have now been verified on actual hardware
(Cortex-A57) so if there are no remaining issues I would like to propose them
for 3.16.
Ard Biesheuvel (15):
asm-generic: allow generic unaligned
This is a port to ARMv8 (Crypto Extensions) of the Intel implementation of the
GHASH Secure Hash (used in the Galois/Counter chaining mode). It relies on the
optional PMULL/PMULL2 instruction (polynomial multiply long, what Intel call
carry-less multiply).
Signed-off-by: Ard Biesheuvel
This patch adds support for the SHA-224 and SHA-256 Secure Hash Algorithms
for CPUs that have support for the SHA-2 part of the ARM v8 Crypto Extensions.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/crypto/Kconfig| 5 +
arch/arm64/crypto/Makefile | 3
This patch adds support for the AES symmetric encryption algorithm for CPUs
that have support for the AES part of the ARM v8 Crypto Extensions.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/crypto/Kconfig | 7 +-
arch/arm64/crypto/Makefile| 3 +
arch
This patch adds support for the AES-CCM encryption algorithm for CPUs that
have support for the AES part of the ARM v8 Crypto Extensions.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
Acked-by: Herbert Xu herb...@gondor.apana.org.au
---
arch/arm64/crypto/Kconfig | 7 +
arch
-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/include/asm/fpsimd.h | 2 +
arch/arm64/include/asm/thread_info.h | 4 +-
arch/arm64/kernel/entry.S| 2 +-
arch/arm64/kernel/fpsimd.c | 136 ++-
arch/arm64/kernel/signal.c
. To mark the end of such a partial section, the
regular kernel_neon_end() should be used.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/include/asm/fpsimd.h | 15
arch/arm64/include/asm/fpsimdmacros.h | 35
arch/arm64/include/asm
This patch adds support for the SHA-1 Secure Hash Algorithm for CPUs that
have support for the SHA-1 part of the ARM v8 Crypto Extensions.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/Kconfig | 3 +
arch/arm64/Makefile | 1 +
arch/arm64
- fpsimd_update_current_state - replace current's FPSIMD state
- fpsimd_flush_task_state - invalidate live copies of a task's FPSIMD state
Where necessary, the ptrace, signal handling and fork code are updated to use
the above wrappers instead of poking into the FPSIMD registers directly.
Signed-off-by: Ard
routines were borrowed from aes_generic.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/crypto/Kconfig | 14 ++
arch/arm64/crypto/Makefile| 14 ++
arch/arm64/crypto/aes-ce.S| 147 +++
arch/arm64/crypto/aes-glue.c | 446
by the
scheduler.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/crypto/ghash-ce-core.S | 10 ++
arch/arm64/crypto/ghash-ce-glue.c | 33 +
2 files changed, 31 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/crypto/ghash-ce
This adds the asm macro definition 'b_if_no_resched' that performs a conditional
branch depending on the preempt need_resched state.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/include/asm/assembler.h | 21 +
1 file changed, 21 insertions(+)
diff
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/include/asm/Kbuild | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild
index 83f71b3004a8..42c7eecd2bb6 100644
--- a/arch/arm64/include/asm/Kbuild
+++ b/arch/arm64
by the
scheduler.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/crypto/sha1-ce-core.S | 19
arch/arm64/crypto/sha1-ce-glue.c | 49 +++-
2 files changed, 48 insertions(+), 20 deletions(-)
diff --git a/arch/arm64/crypto/sha1
by the
scheduler.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/crypto/sha2-ce-core.S | 19 ---
arch/arm64/crypto/sha2-ce-glue.c | 51 ++--
2 files changed, 50 insertions(+), 20 deletions(-)
diff --git a/arch/arm64/crypto/sha2
On 6 May 2014 16:43, Catalin Marinas catalin.mari...@arm.com wrote:
On Thu, May 01, 2014 at 04:49:34PM +0100, Ard Biesheuvel wrote:
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 4aef42a04bdc..86ac6a9bc86a 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64
On 6 May 2014 18:08, Catalin Marinas catalin.mari...@arm.com wrote:
On Thu, May 01, 2014 at 04:49:35PM +0100, Ard Biesheuvel wrote:
@@ -153,12 +252,11 @@ static int fpsimd_cpu_pm_notifier(struct
notifier_block *self,
{
switch (cmd) {
case CPU_PM_ENTER
On 6 May 2014 18:49, Catalin Marinas catalin.mari...@arm.com wrote:
On Thu, May 01, 2014 at 04:49:36PM +0100, Ard Biesheuvel wrote:
diff --git a/arch/arm64/include/asm/fpsimd.h
b/arch/arm64/include/asm/fpsimd.h
index 7a900142dbc8..05e1b24aca4c 100644
--- a/arch/arm64/include/asm/fpsimd.h
On 7 May 2014 16:45, Catalin Marinas catalin.mari...@arm.com wrote:
On Thu, May 01, 2014 at 04:49:32PM +0100, Ard Biesheuvel wrote:
This is a repost of the arm64 crypto patches that I have posted to the LAKML
over the past months. They have now been verified on actual hardware
(Cortex-A57) so
This is a port to ARMv8 (Crypto Extensions) of the Intel implementation of the
GHASH Secure Hash (used in the Galois/Counter chaining mode). It relies on the
optional PMULL/PMULL2 instruction (polynomial multiply long, what Intel call
carry-less multiply).
Signed-off-by: Ard Biesheuvel
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/include/asm/Kbuild | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild
index 83f71b3004a8..42c7eecd2bb6 100644
--- a/arch/arm64/include/asm/Kbuild
+++ b/arch/arm64
This patch adds support for the AES-CCM encryption algorithm for CPUs that
have support for the AES part of the ARM v8 Crypto Extensions.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
Acked-by: Herbert Xu herb...@gondor.apana.org.au
---
arch/arm64/crypto/Kconfig | 7 +
arch
This patch adds support for the SHA-224 and SHA-256 Secure Hash Algorithms
for CPUs that have support for the SHA-2 part of the ARM v8 Crypto Extensions.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
Acked-by: Herbert Xu herb...@gondor.apana.org.au
---
arch/arm64/crypto/Kconfig
patches operate correctly under their respective 'tcrypt.ko mode=xx' tests.
Ard Biesheuvel (11):
arm64/crypto: SHA-1 using ARMv8 Crypto Extensions
arm64/crypto: SHA-224/SHA-256 using ARMv8 Crypto Extensions
arm64/crypto: GHASH secure hash using ARMv8 Crypto Extensions
arm64/crypto: AES using
This adds the asm macro definition 'b_if_no_resched' that performs a conditional
branch depending on the preempt need_resched state.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
arch/arm64/include/asm/assembler.h | 21 +
1 file changed, 21 insertions(+)
diff
routines were borrowed from aes_generic.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
Acked-by: Herbert Xu herb...@gondor.apana.org.au
---
arch/arm64/crypto/Kconfig | 14 ++
arch/arm64/crypto/Makefile| 14 ++
arch/arm64/crypto/aes-ce.S| 133 +++
arch/arm64/crypto
by the
scheduler.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
Acked-by: Herbert Xu herb...@gondor.apana.org.au
---
arch/arm64/crypto/sha1-ce-core.S | 19 ---
arch/arm64/crypto/sha1-ce-glue.c | 52 ++--
2 files changed, 44 insertions(+), 27
This patch adds support for the SHA-1 Secure Hash Algorithm for CPUs that
have support for the SHA-1 part of the ARM v8 Crypto Extensions.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
Acked-by: Herbert Xu herb...@gondor.apana.org.au
---
arch/arm64/Kconfig | 3 +
arch
by the
scheduler.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
Acked-by: Herbert Xu herb...@gondor.apana.org.au
---
arch/arm64/crypto/sha2-ce-core.S | 19 ---
arch/arm64/crypto/sha2-ce-glue.c | 51 ++--
2 files changed, 44 insertions(+), 26
This patch adds support for the AES symmetric encryption algorithm for CPUs
that have support for the AES part of the ARM v8 Crypto Extensions.
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
Acked-by: Herbert Xu herb...@gondor.apana.org.au
---
arch/arm64/crypto/Kconfig | 7
On 15 May 2014 10:24, Catalin Marinas catalin.mari...@arm.com wrote:
On Wed, May 14, 2014 at 07:17:29PM +0100, Ard Biesheuvel wrote:
The Crypto Extensions based SHA1 implementation uses the NEON register file,
and hence runs with preemption disabled. This patch adds a TIF_NEED_RESCHED
check
On 15 May 2014 14:47, Catalin Marinas catalin.mari...@arm.com wrote:
On 15 May 2014, at 22:35, Ard Biesheuvel ard.biesheu...@linaro.org wrote:
On 15 May 2014 10:24, Catalin Marinas catalin.mari...@arm.com wrote:
On Wed, May 14, 2014 at 07:17:29PM +0100, Ard Biesheuvel wrote:
+static u8 const
On 28 June 2014 12:39, Jussi Kivilinna jussi.kivili...@iki.fi wrote:
Common SHA-1 structures are defined in crypto/sha.h for code sharing.
This patch changes SHA-1/ARM glue code to use these structures.
Signed-off-by: Jussi Kivilinna jussi.kivili...@iki.fi
Acked-by: Ard Biesheuvel
Hi Jussi,
On 28 June 2014 12:40, Jussi Kivilinna jussi.kivili...@iki.fi wrote:
This patch adds ARM NEON assembly implementation of SHA-1 algorithm.
tcrypt benchmark results on Cortex-A8, sha1-arm-asm vs sha1-neon-asm:
block-size  bytes/update  old-vs-new
16          16
constants to .text section
- Further tweaks to implementation for ~10% speed-up.
Please move the changelog to below the '---' so it doesn't end up in
the kernel commit log.
Signed-off-by: Jussi Kivilinna jussi.kivili...@iki.fi
Acked-by: Ard Biesheuvel ard.biesheu...@linaro.org
Tested-by: Ard
provide Thumb2 version
Please move Changelog below '---'
Signed-off-by: Jussi Kivilinna jussi.kivili...@iki.fi
Acked-by: Ard Biesheuvel ard.biesheu...@linaro.org
Tested-by: Ard Biesheuvel ard.biesheu...@linaro.org
Tested on Exynos-5250 (Cortex-A15)
ARM-asm
[ 1715.164122] testing
On 29 June 2014 16:33, Jussi Kivilinna jussi.kivili...@iki.fi wrote:
Common SHA-1 structures are defined in crypto/sha.h for code sharing.
This patch changes SHA-1/ARM glue code to use these structures.
Acked-by: Ard Biesheuvel ard.biesheu...@linaro.org
Signed-off-by: Jussi Kivilinna
3.56x
4096  4096  3.59x
8192  16    2.48x
8192  256   3.42x
8192  1024  3.56x
8192  4096  3.60x
8192  8192  3.60x
Acked-by: Ard Biesheuvel ard.biesheu...@linaro.org
Tested