[PATCH v2 00/14] crypto: SHA glue code consolidation

2015-03-30 Thread Ard Biesheuvel
Hello all,

This is v2 of what is now a complete glue code consolidation series
for generic, x86, arm and arm64 implementations of SHA-1, SHA-224/256
and SHA-384/512.

The base layer implements all the update and finalization logic around
the block transforms, where the prototypes of the latter look something
like this:

typedef void (shaXXX_block_fn)(int blocks, u8 const *src, uXX *state,
 const u8 *head, void *p);

The block implementation should process the head block first, then process the
requested number of blocks starting at 'src'. The generic pointer 'p' is passed
down from the do_update/do_finalize() versions; the ARM64 implementations, for
instance, use it to tell the core ASM implementation that it should finalize
the digest, which it will only do if the input was a round multiple of the
block size. The generic pointer is used here as a means of conveying that
information back and forth.
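
As an illustration (a sketch only, mirroring the generic SHA-256 glue later in
this series; the shaXXX names are placeholders, not symbols introduced by these
patches), a block function typically ends up looking like this:

static void shaXXX_block_fn(int blocks, u8 const *src, u32 *state,
			    const u8 *head, void *p)
{
	/* process the buffered head block first, if there is one */
	if (head)
		shaXXX_transform(state, head);

	/* then the requested number of blocks starting at 'src' */
	while (blocks--) {
		shaXXX_transform(state, src);
		src += SHAXXX_BLOCK_SIZE;
	}
}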

Note that the base functions' prototypes all return int, but the value returned
is always 0. They should be invoked as tail calls where possible to eliminate
some of the function call overhead. If that is not possible, the return values
can be safely ignored.
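
For instance (again only a sketch, modeled on the generic finup hooks added
later in this series), a driver can compose its finup entirely out of the base
helpers and make the last base call the tail call:

static int shaXXX_finup(struct shash_desc *desc, const u8 *data,
			unsigned int len, u8 *out)
{
	if (len)
		crypto_shaXXX_base_do_update(desc, data, len,
					     shaXXX_block_fn, NULL);
	crypto_shaXXX_base_do_finalize(desc, shaXXX_block_fn, NULL);
	return crypto_shaXXX_base_finish(desc, out);	/* tail call */
}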

Changes since v1 (RFC):
- prefixed globally visible generic symbols with crypto_
- added SHA-1 base layer
- updated init code to only set the initial constants and clear the
  count, clearing the buffer is unnecessary [Markus]
- favor the small update path in crypto_sha_XXX_base_do_update() [Markus]
- update crypto_sha_XXX_do_finalize() to use memset() on the buffer directly
  rather than copying a statically allocated padding buffer into it
  [Markus]
- moved a bunch of existing arm and x86 implementations to use the new base
  layers

Note: looking at the generated asm (for arm64), I noticed that the memcpy/memset
invocations with compile-time constant src and len arguments (which includes
the empty struct assignments) are eliminated completely, and replaced by
direct loads and stores. Hopefully this addresses the concern Markus raised
about this.

Ard Biesheuvel (14):
  crypto: sha512: implement base layer for SHA-512
  crypto: sha256: implement base layer for SHA-256
  crypto: sha1: implement base layer for SHA-1
  crypto: sha512-generic: move to generic glue implementation
  crypto: sha256-generic: move to generic glue implementation
  crypto: sha1-generic: move to generic glue implementation
  crypto/arm: move SHA-1 ARM asm implementation to base layer
  crypto/arm: move SHA-1 ARMv8 implementation to base layer
  crypto/arm: move SHA-224/256 ARMv8 implementation to base layer
  crypto/arm64: move SHA-1 ARMv8 implementation to base layer
  crypto/arm64: move SHA-224/256 ARMv8 implementation to base layer
  crypto/x86: move SHA-1 SSSE3 implementation to base layer
  crypto/x86: move SHA-224/256 SSSE3 implementation to base layer
  crypto/x86: move SHA-384/512 SSSE3 implementation to base layer

 arch/arm/crypto/Kconfig  |   4 +-
 arch/arm/crypto/sha1-ce-glue.c   | 110 +---
 arch/arm/{include/asm => }/crypto/sha1.h |   3 +
 arch/arm/crypto/sha1_glue.c  | 117 -
 arch/arm/crypto/sha2-ce-glue.c   | 151 +-
 arch/arm64/crypto/Kconfig|   2 +
 arch/arm64/crypto/sha1-ce-core.S |  11 +-
 arch/arm64/crypto/sha1-ce-glue.c | 132 
 arch/arm64/crypto/sha2-ce-core.S |  11 +-
 arch/arm64/crypto/sha2-ce-glue.c | 208 +--
 arch/x86/crypto/sha1_ssse3_glue.c| 139 +
 arch/x86/crypto/sha256_ssse3_glue.c  | 186 ++-
 arch/x86/crypto/sha512_ssse3_glue.c  | 195 ++---
 crypto/Kconfig   |  16 +++
 crypto/Makefile  |   3 +
 crypto/sha1_base.c   | 125 +++
 crypto/sha1_generic.c| 105 
 crypto/sha256_base.c | 140 +
 crypto/sha256_generic.c  | 139 -
 crypto/sha512_base.c | 143 +
 crypto/sha512_generic.c  | 126 ---
 include/crypto/sha.h |  62 +
 22 files changed, 836 insertions(+), 1292 deletions(-)
 rename arch/arm/{include/asm => }/crypto/sha1.h (67%)
 create mode 100644 crypto/sha1_base.c
 create mode 100644 crypto/sha256_base.c
 create mode 100644 crypto/sha512_base.c

-- 
1.8.3.2



[PATCH v2 05/14] crypto: sha256-generic: move to generic glue implementation

2015-03-30 Thread Ard Biesheuvel
This updates the generic SHA-256 implementation to use the
new shared SHA-256 glue code.

It also implements a .finup hook crypto_sha256_finup() and exports
it to other modules.
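
(The export exists so that accelerated drivers can fall back to the generic
code when SIMD is not usable; roughly, and with an illustrative driver name,
that usage looks like the sketch below, which mirrors the ARM glue changes
later in this series.)

static int shaXXX_ce_finup(struct shash_desc *desc, const u8 *data,
			   unsigned int len, u8 *out)
{
	if (!may_use_simd())	/* e.g. called from interrupt context */
		return crypto_sha256_finup(desc, data, len, out);

	kernel_neon_begin();
	if (len)
		crypto_sha256_base_do_update(desc, data, len,
					     shaXXX_ce_transform, NULL);
	crypto_sha256_base_do_finalize(desc, shaXXX_ce_transform, NULL);
	kernel_neon_end();

	return crypto_sha256_base_finish(desc, out);
}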

Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 crypto/Kconfig  |   1 +
 crypto/sha256_generic.c | 139 ++--
 include/crypto/sha.h|   3 ++
 3 files changed, 31 insertions(+), 112 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 83bc1680391a..72bf5af7240d 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -611,6 +611,7 @@ config CRYPTO_SHA256_BASE
 
 config CRYPTO_SHA256
	tristate "SHA224 and SHA256 digest algorithm"
+   select CRYPTO_SHA256_BASE
select CRYPTO_HASH
help
  SHA256 secure hash standard (DFIPS 180-2).
diff --git a/crypto/sha256_generic.c b/crypto/sha256_generic.c
index b001ff5c2efc..d5c18c08b3da 100644
--- a/crypto/sha256_generic.c
+++ b/crypto/sha256_generic.c
@@ -214,136 +214,50 @@ static void sha256_transform(u32 *state, const u8 *input)
memzero_explicit(W, 64 * sizeof(u32));
 }
 
-static int sha224_init(struct shash_desc *desc)
+static void sha256_generic_block_fn(int blocks, u8 const *src, u32 *state,
+   const u8 *head, void *p)
 {
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-	sctx->state[0] = SHA224_H0;
-	sctx->state[1] = SHA224_H1;
-	sctx->state[2] = SHA224_H2;
-	sctx->state[3] = SHA224_H3;
-	sctx->state[4] = SHA224_H4;
-	sctx->state[5] = SHA224_H5;
-	sctx->state[6] = SHA224_H6;
-	sctx->state[7] = SHA224_H7;
-	sctx->count = 0;
+   if (head)
+   sha256_transform(state, head);
 
-   return 0;
-}
-
-static int sha256_init(struct shash_desc *desc)
-{
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-	sctx->state[0] = SHA256_H0;
-	sctx->state[1] = SHA256_H1;
-	sctx->state[2] = SHA256_H2;
-	sctx->state[3] = SHA256_H3;
-	sctx->state[4] = SHA256_H4;
-	sctx->state[5] = SHA256_H5;
-	sctx->state[6] = SHA256_H6;
-	sctx->state[7] = SHA256_H7;
-	sctx->count = 0;
-
-   return 0;
+   while (blocks--) {
+   sha256_transform(state, src);
+   src += SHA256_BLOCK_SIZE;
+   }
 }
 
 int crypto_sha256_update(struct shash_desc *desc, const u8 *data,
  unsigned int len)
 {
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   unsigned int partial, done;
-   const u8 *src;
-
-	partial = sctx->count & 0x3f;
-	sctx->count += len;
-	done = 0;
-	src = data;
-
-	if ((partial + len) > 63) {
-		if (partial) {
-			done = -partial;
-			memcpy(sctx->buf + partial, data, done + 64);
-			src = sctx->buf;
-		}
-
-		do {
-			sha256_transform(sctx->state, src);
-			done += 64;
-			src = data + done;
-		} while (done + 63 < len);
-
-		partial = 0;
-	}
-	memcpy(sctx->buf + partial, src, len - done);
-
-   return 0;
+   return crypto_sha256_base_do_update(desc, data, len,
+   sha256_generic_block_fn, NULL);
 }
 EXPORT_SYMBOL(crypto_sha256_update);
 
-static int sha256_final(struct shash_desc *desc, u8 *out)
-{
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   __be32 *dst = (__be32 *)out;
-   __be64 bits;
-   unsigned int index, pad_len;
-   int i;
-   static const u8 padding[64] = { 0x80, };
-
-   /* Save number of bits */
-	bits = cpu_to_be64(sctx->count << 3);
-
-	/* Pad out to 56 mod 64. */
-	index = sctx->count & 0x3f;
-	pad_len = (index < 56) ? (56 - index) : ((64+56) - index);
-	crypto_sha256_update(desc, padding, pad_len);
-
-	/* Append length (before padding) */
-	crypto_sha256_update(desc, (const u8 *)&bits, sizeof(bits));
-
-   /* Store state in digest */
-	for (i = 0; i < 8; i++)
-		dst[i] = cpu_to_be32(sctx->state[i]);
-
-   /* Zeroize sensitive information. */
-   memset(sctx, 0, sizeof(*sctx));
-
-   return 0;
-}
-
-static int sha224_final(struct shash_desc *desc, u8 *hash)
-{
-   u8 D[SHA256_DIGEST_SIZE];
-
-   sha256_final(desc, D);
-
-   memcpy(hash, D, SHA224_DIGEST_SIZE);
-   memzero_explicit(D, SHA256_DIGEST_SIZE);
-
-   return 0;
-}
-
-static int sha256_export(struct shash_desc *desc, void *out)
+int crypto_sha256_finup(struct shash_desc *desc, const u8 *data,
+   unsigned int len, u8 *hash)
 {
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-
-   memcpy(out, sctx, sizeof(*sctx));
-   return 0;
+   if (len)
+   crypto_sha256_base_do_update(desc, data, len,
+sha256_generic_block_fn, NULL);
+ 

[PATCH v2 03/14] crypto: sha1: implement base layer for SHA-1

2015-03-30 Thread Ard Biesheuvel
To reduce the number of copies of boilerplate code throughout
the tree, this patch implements generic glue for the SHA-1
algorithm. This allows a specific arch or hardware implementation
to only implement the special handling that it needs.

Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 crypto/Kconfig   |   3 ++
 crypto/Makefile  |   1 +
 crypto/sha1_base.c   | 125 +++
 include/crypto/sha.h |  17 +++
 4 files changed, 146 insertions(+)
 create mode 100644 crypto/sha1_base.c

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 1664bd68b97d..155cc15c2719 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -516,6 +516,9 @@ config CRYPTO_RMD320
  Developed by Hans Dobbertin, Antoon Bosselaers and Bart Preneel.
  See http://homes.esat.kuleuven.be/~bosselae/ripemd160.html
 
+config CRYPTO_SHA1_BASE
+   tristate
+
 config CRYPTO_SHA1
	tristate "SHA1 digest algorithm"
select CRYPTO_HASH
diff --git a/crypto/Makefile b/crypto/Makefile
index bb9bafeb3ac7..42446cab15f3 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -43,6 +43,7 @@ obj-$(CONFIG_CRYPTO_RMD128) += rmd128.o
 obj-$(CONFIG_CRYPTO_RMD160) += rmd160.o
 obj-$(CONFIG_CRYPTO_RMD256) += rmd256.o
 obj-$(CONFIG_CRYPTO_RMD320) += rmd320.o
+obj-$(CONFIG_CRYPTO_SHA1_BASE) += sha1_base.o
 obj-$(CONFIG_CRYPTO_SHA1) += sha1_generic.o
 obj-$(CONFIG_CRYPTO_SHA256_BASE) += sha256_base.o
 obj-$(CONFIG_CRYPTO_SHA256) += sha256_generic.o
diff --git a/crypto/sha1_base.c b/crypto/sha1_base.c
new file mode 100644
index ..30fb0f9b47cf
--- /dev/null
+++ b/crypto/sha1_base.c
@@ -0,0 +1,125 @@
+/*
+ * sha1_base.c - core logic for SHA-1 implementations
+ *
+ * Copyright (C) 2015 Linaro Ltd ard.biesheu...@linaro.org
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <crypto/internal/hash.h>
+#include <crypto/sha.h>
+#include <linux/crypto.h>
+#include <linux/module.h>
+
+#include <asm/unaligned.h>
+
+int crypto_sha1_base_init(struct shash_desc *desc)
+{
+   static const u32 sha1_init_state[] = {
+   SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4,
+   };
+   struct sha1_state *sctx = shash_desc_ctx(desc);
+
+	memcpy(sctx->state, sha1_init_state, sizeof(sctx->state));
+	sctx->count = 0;
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha1_base_init);
+
+int crypto_sha1_base_export(struct shash_desc *desc, void *out)
+{
+   struct sha1_state *sctx = shash_desc_ctx(desc);
+   struct sha1_state *dst = out;
+
+   *dst = *sctx;
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha1_base_export);
+
+int crypto_sha1_base_import(struct shash_desc *desc, const void *in)
+{
+   struct sha1_state *sctx = shash_desc_ctx(desc);
+   struct sha1_state const *src = in;
+
+   *sctx = *src;
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha1_base_import);
+
+int crypto_sha1_base_do_update(struct shash_desc *desc, const u8 *data,
+unsigned int len, sha1_block_fn *block_fn,
+void *p)
+{
+   struct sha1_state *sctx = shash_desc_ctx(desc);
+	unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+	sctx->count += len;
+
+	if (unlikely((partial + len) >= SHA1_BLOCK_SIZE)) {
+		int blocks;
+
+		if (partial) {
+			int p = SHA1_BLOCK_SIZE - partial;
+
+			memcpy(sctx->buffer + partial, data, p);
+			data += p;
+			len -= p;
+		}
+
+		blocks = len / SHA1_BLOCK_SIZE;
+		len %= SHA1_BLOCK_SIZE;
+
+		block_fn(blocks, data, sctx->state,
+			 partial ? sctx->buffer : NULL, p);
+		data += blocks * SHA1_BLOCK_SIZE;
+		partial = 0;
+	}
+	if (len)
+		memcpy(sctx->buffer + partial, data, len);
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha1_base_do_update);
+
+int crypto_sha1_base_do_finalize(struct shash_desc *desc,
+sha1_block_fn *block_fn, void *p)
+{
+   const int bit_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
+   struct sha1_state *sctx = shash_desc_ctx(desc);
+	__be64 *bits = (__be64 *)(sctx->buffer + bit_offset);
+	unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+	sctx->buffer[partial++] = 0x80;
+	if (partial > bit_offset) {
+		memset(sctx->buffer + partial, 0x0, SHA1_BLOCK_SIZE - partial);
+		partial = 0;
+
+		block_fn(1, sctx->buffer, sctx->state, NULL, p);
+	}
+
+	memset(sctx->buffer + partial, 0x0, bit_offset - partial);
+	*bits = cpu_to_be64(sctx->count << 3);
+	block_fn(1, sctx->buffer, sctx->state, NULL, p);
+
+   return 0;
+}

[PATCH v2 01/14] crypto: sha512: implement base layer for SHA-512

2015-03-30 Thread Ard Biesheuvel
To reduce the number of copies of boilerplate code throughout
the tree, this patch implements generic glue for the SHA-512
algorithm. This allows a specific arch or hardware implementation
to only implement the special handling that it needs.

Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 crypto/Kconfig   |   3 ++
 crypto/Makefile  |   1 +
 crypto/sha512_base.c | 143 +++
 include/crypto/sha.h |  20 +++
 4 files changed, 167 insertions(+)
 create mode 100644 crypto/sha512_base.c

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 88639937a934..3400cf4e3cdb 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -641,6 +641,9 @@ config CRYPTO_SHA256_SPARC64
  SHA-256 secure hash standard (DFIPS 180-2) implemented
  using sparc64 crypto instructions, when available.
 
+config CRYPTO_SHA512_BASE
+   tristate
+
 config CRYPTO_SHA512
	tristate "SHA384 and SHA512 digest algorithms"
select CRYPTO_HASH
diff --git a/crypto/Makefile b/crypto/Makefile
index 97b7d3ac87e7..6174bf2592fe 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -45,6 +45,7 @@ obj-$(CONFIG_CRYPTO_RMD256) += rmd256.o
 obj-$(CONFIG_CRYPTO_RMD320) += rmd320.o
 obj-$(CONFIG_CRYPTO_SHA1) += sha1_generic.o
 obj-$(CONFIG_CRYPTO_SHA256) += sha256_generic.o
+obj-$(CONFIG_CRYPTO_SHA512_BASE) += sha512_base.o
 obj-$(CONFIG_CRYPTO_SHA512) += sha512_generic.o
 obj-$(CONFIG_CRYPTO_WP512) += wp512.o
 obj-$(CONFIG_CRYPTO_TGR192) += tgr192.o
diff --git a/crypto/sha512_base.c b/crypto/sha512_base.c
new file mode 100644
index ..9a60829e06c4
--- /dev/null
+++ b/crypto/sha512_base.c
@@ -0,0 +1,143 @@
+/*
+ * sha512_base.c - core logic for SHA-512 implementations
+ *
+ * Copyright (C) 2015 Linaro Ltd ard.biesheu...@linaro.org
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <crypto/internal/hash.h>
+#include <crypto/sha.h>
+#include <linux/crypto.h>
+#include <linux/module.h>
+
+#include <asm/unaligned.h>
+
+int crypto_sha384_base_init(struct shash_desc *desc)
+{
+   static const u64 sha384_init_state[] = {
+   SHA384_H0, SHA384_H1, SHA384_H2, SHA384_H3,
+   SHA384_H4, SHA384_H5, SHA384_H6, SHA384_H7,
+   };
+   struct sha512_state *sctx = shash_desc_ctx(desc);
+
+	memcpy(sctx->state, sha384_init_state, sizeof(sctx->state));
+	sctx->count[0] = sctx->count[1] = 0;
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha384_base_init);
+
+int crypto_sha512_base_init(struct shash_desc *desc)
+{
+   static const u64 sha512_init_state[] = {
+   SHA512_H0, SHA512_H1, SHA512_H2, SHA512_H3,
+   SHA512_H4, SHA512_H5, SHA512_H6, SHA512_H7,
+   };
+   struct sha512_state *sctx = shash_desc_ctx(desc);
+
+	memcpy(sctx->state, sha512_init_state, sizeof(sctx->state));
+	sctx->count[0] = sctx->count[1] = 0;
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha512_base_init);
+
+int crypto_sha512_base_export(struct shash_desc *desc, void *out)
+{
+   struct sha512_state *sctx = shash_desc_ctx(desc);
+   struct sha512_state *dst = out;
+
+   *dst = *sctx;
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha512_base_export);
+
+int crypto_sha512_base_import(struct shash_desc *desc, const void *in)
+{
+   struct sha512_state *sctx = shash_desc_ctx(desc);
+   struct sha512_state const *src = in;
+
+   *sctx = *src;
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha512_base_import);
+
+int crypto_sha512_base_do_update(struct shash_desc *desc, const u8 *data,
+unsigned int len, sha512_block_fn *block_fn,
+void *p)
+{
+   struct sha512_state *sctx = shash_desc_ctx(desc);
+	unsigned int partial = sctx->count[0] % SHA512_BLOCK_SIZE;
+
+	sctx->count[0] += len;
+	if (sctx->count[0] < len)
+		sctx->count[1]++;
+
+	if (unlikely((partial + len) >= SHA512_BLOCK_SIZE)) {
+		int blocks;
+
+		if (partial) {
+			int p = SHA512_BLOCK_SIZE - partial;
+
+			memcpy(sctx->buf + partial, data, p);
+			data += p;
+			len -= p;
+		}
+
+		blocks = len / SHA512_BLOCK_SIZE;
+		len %= SHA512_BLOCK_SIZE;
+
+		block_fn(blocks, data, sctx->state,
+			 partial ? sctx->buf : NULL, p);
+		data += blocks * SHA512_BLOCK_SIZE;
+		partial = 0;
+	}
+	if (len)
+		memcpy(sctx->buf + partial, data, len);
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha512_base_do_update);
+
+int crypto_sha512_base_do_finalize(struct shash_desc *desc,
+  sha512_block_fn *block_fn, void *p)
+{
+   const int bit_offset = SHA512_BLOCK_SIZE - 

[PATCH v2 resend 10/14] crypto/arm64: move SHA-1 ARMv8 implementation to base layer

2015-03-30 Thread Ard Biesheuvel
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 arch/arm64/crypto/Kconfig|   1 +
 arch/arm64/crypto/sha1-ce-core.S |  11 ++--
 arch/arm64/crypto/sha1-ce-glue.c | 132 +++
 3 files changed, 31 insertions(+), 113 deletions(-)

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 2cf32e9887e1..c87792dfaacc 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -12,6 +12,7 @@ config CRYPTO_SHA1_ARM64_CE
	tristate "SHA-1 digest algorithm (ARMv8 Crypto Extensions)"
	depends on ARM64 && KERNEL_MODE_NEON
select CRYPTO_HASH
+   select CRYPTO_SHA1_BASE
 
 config CRYPTO_SHA2_ARM64_CE
	tristate "SHA-224/SHA-256 digest algorithm (ARMv8 Crypto Extensions)"
diff --git a/arch/arm64/crypto/sha1-ce-core.S b/arch/arm64/crypto/sha1-ce-core.S
index 09d57d98609c..a2c3ad51286b 100644
--- a/arch/arm64/crypto/sha1-ce-core.S
+++ b/arch/arm64/crypto/sha1-ce-core.S
@@ -131,15 +131,18 @@ CPU_LE(   rev32   v11.16b, v11.16b)
 
/*
 * Final block: add padding and total bit count.
-* Skip if we have no total byte count in x4. In that case, the input
-* size was not a round multiple of the block size, and the padding is
-* handled by the C code.
+* Skip if the input size was not a round multiple of the block size,
+* the padding is handled by the C code in that case.
 */
cbz x4, 3f
+   ldr x5, [x2, #-8]   // sha1_state::count
+   tst x5, #0x3f   // round multiple of block size?
+   b.ne3f
+   str wzr, [x4]
moviv9.2d, #0
mov x8, #0x8000
moviv10.2d, #0
-   ror x7, x4, #29 // ror(lsl(x4, 3), 32)
+   ror x7, x5, #29 // ror(lsl(x4, 3), 32)
fmovd8, x8
mov x4, #0
mov v11.d[0], xzr
diff --git a/arch/arm64/crypto/sha1-ce-glue.c b/arch/arm64/crypto/sha1-ce-glue.c
index 6fe83f37a750..a1cf07b9a8fa 100644
--- a/arch/arm64/crypto/sha1-ce-glue.c
+++ b/arch/arm64/crypto/sha1-ce-glue.c
@@ -21,132 +21,46 @@ MODULE_AUTHOR(Ard Biesheuvel 
ard.biesheu...@linaro.org);
 MODULE_LICENSE(GPL v2);
 
 asmlinkage void sha1_ce_transform(int blocks, u8 const *src, u32 *state,
- u8 *head, long bytes);
+ const u8 *head, void *p);
 
-static int sha1_init(struct shash_desc *desc)
+static int sha1_ce_update(struct shash_desc *desc, const u8 *data,
+ unsigned int len)
 {
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-
-   *sctx = (struct sha1_state){
-   .state = { SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4 },
-   };
-   return 0;
-}
-
-static int sha1_update(struct shash_desc *desc, const u8 *data,
-  unsigned int len)
-{
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-	unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
-
-	sctx->count += len;
-
-	if ((partial + len) >= SHA1_BLOCK_SIZE) {
-		int blocks;
-
-		if (partial) {
-			int p = SHA1_BLOCK_SIZE - partial;
-
-			memcpy(sctx->buffer + partial, data, p);
-			data += p;
-			len -= p;
-		}
-
-		blocks = len / SHA1_BLOCK_SIZE;
-		len %= SHA1_BLOCK_SIZE;
-
-		kernel_neon_begin_partial(16);
-		sha1_ce_transform(blocks, data, sctx->state,
-				  partial ? sctx->buffer : NULL, 0);
-		kernel_neon_end();
-
-		data += blocks * SHA1_BLOCK_SIZE;
-		partial = 0;
-	}
-	if (len)
-		memcpy(sctx->buffer + partial, data, len);
-   return 0;
-}
-
-static int sha1_final(struct shash_desc *desc, u8 *out)
-{
-   static const u8 padding[SHA1_BLOCK_SIZE] = { 0x80, };
-
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-	__be64 bits = cpu_to_be64(sctx->count << 3);
-	__be32 *dst = (__be32 *)out;
-	int i;
-
-	u32 padlen = SHA1_BLOCK_SIZE
-		     - ((sctx->count + sizeof(bits)) % SHA1_BLOCK_SIZE);
-
-	sha1_update(desc, padding, padlen);
-	sha1_update(desc, (const u8 *)&bits, sizeof(bits));
-
-	for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(__be32); i++)
-		put_unaligned_be32(sctx->state[i], dst++);
+   kernel_neon_begin_partial(16);
+   crypto_sha1_base_do_update(desc, data, len, sha1_ce_transform, NULL);
+   kernel_neon_end();
 
-   *sctx = (struct sha1_state){};
return 0;
 }
 
-static int sha1_finup(struct shash_desc *desc, const u8 *data,
- unsigned int len, u8 *out)
+static int sha1_ce_finup(struct shash_desc *desc, const u8 *data,
+

[PATCH v2 04/14] crypto: sha512-generic: move to generic glue implementation

2015-03-30 Thread Ard Biesheuvel
This updates the generic SHA-512 implementation to use the
generic shared SHA-512 glue code.

It also implements a .finup hook crypto_sha512_finup() and exports
it to other modules.

Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 crypto/Kconfig  |   1 +
 crypto/sha512_generic.c | 126 ++--
 include/crypto/sha.h|   2 +
 3 files changed, 28 insertions(+), 101 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 155cc15c2719..83bc1680391a 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -653,6 +653,7 @@ config CRYPTO_SHA512_BASE
 
 config CRYPTO_SHA512
	tristate "SHA384 and SHA512 digest algorithms"
+   select CRYPTO_SHA512_BASE
select CRYPTO_HASH
help
  SHA512 secure hash standard (DFIPS 180-2).
diff --git a/crypto/sha512_generic.c b/crypto/sha512_generic.c
index 1c3c3767e079..88f36a6920ef 100644
--- a/crypto/sha512_generic.c
+++ b/crypto/sha512_generic.c
@@ -130,125 +130,48 @@ sha512_transform(u64 *state, const u8 *input)
a = b = c = d = e = f = g = h = t1 = t2 = 0;
 }
 
-static int
-sha512_init(struct shash_desc *desc)
+static void sha512_generic_block_fn(int blocks, u8 const *src, u64 *state,
+   const u8 *head, void *p)
 {
-   struct sha512_state *sctx = shash_desc_ctx(desc);
-	sctx->state[0] = SHA512_H0;
-	sctx->state[1] = SHA512_H1;
-	sctx->state[2] = SHA512_H2;
-	sctx->state[3] = SHA512_H3;
-	sctx->state[4] = SHA512_H4;
-	sctx->state[5] = SHA512_H5;
-	sctx->state[6] = SHA512_H6;
-	sctx->state[7] = SHA512_H7;
-	sctx->count[0] = sctx->count[1] = 0;
+   if (head)
+   sha512_transform(state, head);
 
-   return 0;
-}
-
-static int
-sha384_init(struct shash_desc *desc)
-{
-   struct sha512_state *sctx = shash_desc_ctx(desc);
-	sctx->state[0] = SHA384_H0;
-	sctx->state[1] = SHA384_H1;
-	sctx->state[2] = SHA384_H2;
-	sctx->state[3] = SHA384_H3;
-	sctx->state[4] = SHA384_H4;
-	sctx->state[5] = SHA384_H5;
-	sctx->state[6] = SHA384_H6;
-	sctx->state[7] = SHA384_H7;
-	sctx->count[0] = sctx->count[1] = 0;
-
-   return 0;
+   while (blocks--) {
+   sha512_transform(state, src);
+   src += SHA512_BLOCK_SIZE;
+   }
 }
 
 int crypto_sha512_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
 {
-   struct sha512_state *sctx = shash_desc_ctx(desc);
-
-   unsigned int i, index, part_len;
-
-   /* Compute number of bytes mod 128 */
-	index = sctx->count[0] & 0x7f;
-
-	/* Update number of bytes */
-	if ((sctx->count[0] += len) < len)
-		sctx->count[1]++;
-
-        part_len = 128 - index;
-
-	/* Transform as many times as possible. */
-	if (len >= part_len) {
-		memcpy(&sctx->buf[index], data, part_len);
-		sha512_transform(sctx->state, sctx->buf);
-
-		for (i = part_len; i + 127 < len; i+=128)
-			sha512_transform(sctx->state, &data[i]);
-
-		index = 0;
-	} else {
-		i = 0;
-	}
-
-	/* Buffer remaining input */
-	memcpy(&sctx->buf[index], &data[i], len - i);
-
-   return 0;
+   return crypto_sha512_base_do_update(desc, data, len,
+   sha512_generic_block_fn, NULL);
 }
 EXPORT_SYMBOL(crypto_sha512_update);
 
-static int
-sha512_final(struct shash_desc *desc, u8 *hash)
+int crypto_sha512_finup(struct shash_desc *desc, const u8 *data,
+   unsigned int len, u8 *hash)
 {
-   struct sha512_state *sctx = shash_desc_ctx(desc);
-static u8 padding[128] = { 0x80, };
-   __be64 *dst = (__be64 *)hash;
-   __be64 bits[2];
-   unsigned int index, pad_len;
-   int i;
-
-   /* Save number of bits */
-	bits[1] = cpu_to_be64(sctx->count[0] << 3);
-	bits[0] = cpu_to_be64(sctx->count[1] << 3 | sctx->count[0] >> 61);
-
-	/* Pad out to 112 mod 128. */
-	index = sctx->count[0] & 0x7f;
-	pad_len = (index < 112) ? (112 - index) : ((128+112) - index);
-	crypto_sha512_update(desc, padding, pad_len);
-
-	/* Append length (before padding) */
-	crypto_sha512_update(desc, (const u8 *)bits, sizeof(bits));
-
-	/* Store state in digest */
-	for (i = 0; i < 8; i++)
-		dst[i] = cpu_to_be64(sctx->state[i]);
-
-   /* Zeroize sensitive information. */
-   memset(sctx, 0, sizeof(struct sha512_state));
-
-   return 0;
+   if (len)
+   crypto_sha512_base_do_update(desc, data, len,
+sha512_generic_block_fn, NULL);
+   crypto_sha512_base_do_finalize(desc, sha512_generic_block_fn, NULL);
+   return crypto_sha512_base_finish(desc, hash);
 }
+EXPORT_SYMBOL(crypto_sha512_finup);
 
-static int sha384_final(struct shash_desc *desc, u8 *hash)
+int 

[RFC PATCH 5/6] arm64/crypto: move ARMv8 SHA-224/256 driver to SHA-256 base layer

2015-03-30 Thread Ard Biesheuvel
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 arch/arm64/crypto/Kconfig|   1 +
 arch/arm64/crypto/sha2-ce-core.S |  11 +-
 arch/arm64/crypto/sha2-ce-glue.c | 211 ++-
 3 files changed, 40 insertions(+), 183 deletions(-)

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 2cf32e9887e1..13008362154b 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -17,6 +17,7 @@ config CRYPTO_SHA2_ARM64_CE
	tristate "SHA-224/SHA-256 digest algorithm (ARMv8 Crypto Extensions)"
	depends on ARM64 && KERNEL_MODE_NEON
select CRYPTO_HASH
+   select CRYPTO_SHA256_BASE
 
 config CRYPTO_GHASH_ARM64_CE
	tristate "GHASH (for GCM chaining mode) using ARMv8 Crypto Extensions"
diff --git a/arch/arm64/crypto/sha2-ce-core.S b/arch/arm64/crypto/sha2-ce-core.S
index 7f29fc031ea8..65ad56636fba 100644
--- a/arch/arm64/crypto/sha2-ce-core.S
+++ b/arch/arm64/crypto/sha2-ce-core.S
@@ -135,15 +135,18 @@ CPU_LE(   rev32   v19.16b, v19.16b)
 
/*
 * Final block: add padding and total bit count.
-* Skip if we have no total byte count in x4. In that case, the input
-* size was not a round multiple of the block size, and the padding is
-* handled by the C code.
+* Skip if the input size was not a round multiple of the block size,
+* the padding is handled by the C code in that case.
 */
cbz x4, 3f
+   ldr x5, [x2, #-8]   // sha256_state::count
+   tst x5, #0x3f   // round multiple of block size?
+   b.ne3f
+   str wzr, [x4]
moviv17.2d, #0
mov x8, #0x8000
moviv18.2d, #0
-   ror x7, x4, #29 // ror(lsl(x4, 3), 32)
+   ror x7, x5, #29 // ror(lsl(x4, 3), 32)
fmovd16, x8
mov x4, #0
mov v19.d[0], xzr
diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c
index ae67e88c28b9..8b35ca32538a 100644
--- a/arch/arm64/crypto/sha2-ce-glue.c
+++ b/arch/arm64/crypto/sha2-ce-glue.c
@@ -20,195 +20,48 @@ MODULE_DESCRIPTION(SHA-224/SHA-256 secure hash using 
ARMv8 Crypto Extensions);
 MODULE_AUTHOR(Ard Biesheuvel ard.biesheu...@linaro.org);
 MODULE_LICENSE(GPL v2);
 
-asmlinkage int sha2_ce_transform(int blocks, u8 const *src, u32 *state,
-u8 *head, long bytes);
+asmlinkage void sha2_ce_transform(int blocks, u8 const *src, u32 *state,
+ const u8 *head, void *p);
 
-static int sha224_init(struct shash_desc *desc)
+static int sha256_ce_update(struct shash_desc *desc, const u8 *data,
+   unsigned int len)
 {
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-
-   *sctx = (struct sha256_state){
-   .state = {
-   SHA224_H0, SHA224_H1, SHA224_H2, SHA224_H3,
-   SHA224_H4, SHA224_H5, SHA224_H6, SHA224_H7,
-   }
-   };
-   return 0;
-}
-
-static int sha256_init(struct shash_desc *desc)
-{
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-
-   *sctx = (struct sha256_state){
-   .state = {
-   SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3,
-   SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7,
-   }
-   };
-   return 0;
-}
-
-static int sha2_update(struct shash_desc *desc, const u8 *data,
-  unsigned int len)
-{
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   unsigned int partial = sctx->count % SHA256_BLOCK_SIZE;
-
-   sctx->count += len;
-
-   if ((partial + len) >= SHA256_BLOCK_SIZE) {
-   int blocks;
-
-   if (partial) {
-   int p = SHA256_BLOCK_SIZE - partial;
-
-   memcpy(sctx->buf + partial, data, p);
-   data += p;
-   len -= p;
-   }
-
-   blocks = len / SHA256_BLOCK_SIZE;
-   len %= SHA256_BLOCK_SIZE;
-
-   kernel_neon_begin_partial(28);
-   sha2_ce_transform(blocks, data, sctx->state,
-		      partial ? sctx->buf : NULL, 0);
-   kernel_neon_end();
-
-   data += blocks * SHA256_BLOCK_SIZE;
-   partial = 0;
-   }
-   if (len)
-   memcpy(sctx->buf + partial, data, len);
-   return 0;
-}
-
-static void sha2_final(struct shash_desc *desc)
-{
-   static const u8 padding[SHA256_BLOCK_SIZE] = { 0x80, };
-
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   __be64 bits = cpu_to_be64(sctx->count << 3);
-   u32 padlen = SHA256_BLOCK_SIZE
-		- ((sctx->count + sizeof(bits)) % SHA256_BLOCK_SIZE);
-
-   

[PATCH v2 08/14] crypto/arm: move SHA-1 ARMv8 implementation to base layer

2015-03-30 Thread Ard Biesheuvel
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 arch/arm/crypto/Kconfig|   2 +-
 arch/arm/crypto/sha1-ce-glue.c | 110 +++--
 2 files changed, 31 insertions(+), 81 deletions(-)

diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
index c111d8992afb..31ad19f18af2 100644
--- a/arch/arm/crypto/Kconfig
+++ b/arch/arm/crypto/Kconfig
@@ -32,7 +32,7 @@ config CRYPTO_SHA1_ARM_CE
	tristate "SHA1 digest algorithm (ARM v8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_SHA1_ARM
-   select CRYPTO_SHA1
+   select CRYPTO_SHA1_BASE
select CRYPTO_HASH
help
  SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2) implemented
diff --git a/arch/arm/crypto/sha1-ce-glue.c b/arch/arm/crypto/sha1-ce-glue.c
index a9dd90df9fd7..29039d1bcdf9 100644
--- a/arch/arm/crypto/sha1-ce-glue.c
+++ b/arch/arm/crypto/sha1-ce-glue.c
@@ -13,114 +13,64 @@
 #include <linux/crypto.h>
 #include <linux/module.h>
 
-#include <asm/crypto/sha1.h>
 #include <asm/hwcap.h>
 #include <asm/neon.h>
 #include <asm/simd.h>
 #include <asm/unaligned.h>
 
+#include "sha1.h"
+
 MODULE_DESCRIPTION("SHA1 secure hash using ARMv8 Crypto Extensions");
 MODULE_AUTHOR("Ard Biesheuvel <ard.biesheu...@linaro.org>");
 MODULE_LICENSE("GPL v2");
 
-asmlinkage void sha1_ce_transform(int blocks, u8 const *src, u32 *state, 
- u8 *head);
-
-static int sha1_init(struct shash_desc *desc)
-{
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-
-   *sctx = (struct sha1_state){
-   .state = { SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4 },
-   };
-   return 0;
-}
+asmlinkage void sha1_ce_transform(int blocks, u8 const *src, u32 *state,
+ const u8 *head, void *p);
 
-static int sha1_update(struct shash_desc *desc, const u8 *data,
-  unsigned int len)
+static int sha1_ce_update(struct shash_desc *desc, const u8 *data,
+ unsigned int len)
 {
struct sha1_state *sctx = shash_desc_ctx(desc);
-   unsigned int partial;
 
-   if (!may_use_simd())
+   if (!may_use_simd() ||
+	    (sctx->count % SHA1_BLOCK_SIZE) + len < SHA1_BLOCK_SIZE)
return sha1_update_arm(desc, data, len);
 
-   partial = sctx->count % SHA1_BLOCK_SIZE;
-   sctx->count += len;
+   kernel_neon_begin();
+   crypto_sha1_base_do_update(desc, data, len, sha1_ce_transform, NULL);
+   kernel_neon_end();
 
-   if ((partial + len) >= SHA1_BLOCK_SIZE) {
-   int blocks;
-
-   if (partial) {
-   int p = SHA1_BLOCK_SIZE - partial;
-
-   memcpy(sctx->buffer + partial, data, p);
-   data += p;
-   len -= p;
-   }
-
-   blocks = len / SHA1_BLOCK_SIZE;
-   len %= SHA1_BLOCK_SIZE;
-
-   kernel_neon_begin();
-   sha1_ce_transform(blocks, data, sctx->state,
-		      partial ? sctx->buffer : NULL);
-   kernel_neon_end();
-
-   data += blocks * SHA1_BLOCK_SIZE;
-   partial = 0;
-   }
-   if (len)
-   memcpy(sctx->buffer + partial, data, len);
return 0;
 }
 
-static int sha1_final(struct shash_desc *desc, u8 *out)
+static int sha1_ce_finup(struct shash_desc *desc, const u8 *data,
+unsigned int len, u8 *out)
 {
-   static const u8 padding[SHA1_BLOCK_SIZE] = { 0x80, };
-
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-   __be64 bits = cpu_to_be64(sctx->count << 3);
-   __be32 *dst = (__be32 *)out;
-   int i;
-
-   u32 padlen = SHA1_BLOCK_SIZE
-		- ((sctx->count + sizeof(bits)) % SHA1_BLOCK_SIZE);
-
-   sha1_update(desc, padding, padlen);
-   sha1_update(desc, (const u8 *)&bits, sizeof(bits));
-
-   for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(__be32); i++)
-	put_unaligned_be32(sctx->state[i], dst++);
-
-   *sctx = (struct sha1_state){};
-   return 0;
-}
+   if (!may_use_simd())
+   return sha1_finup_arm(desc, data, len, out);
 
-static int sha1_export(struct shash_desc *desc, void *out)
-{
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-   struct sha1_state *dst = out;
+   kernel_neon_begin();
+   if (len)
+   crypto_sha1_base_do_update(desc, data, len,
+  sha1_ce_transform, NULL);
+   crypto_sha1_base_do_finalize(desc, sha1_ce_transform, NULL);
+   kernel_neon_end();
 
-   *dst = *sctx;
-   return 0;
+   return crypto_sha1_base_finish(desc, out);
 }
 
-static int sha1_import(struct shash_desc *desc, const void *in)
+static int sha1_ce_final(struct shash_desc *desc, u8 *out)
 {
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-   struct sha1_state const *src = in;
-
-   *sctx = *src;
-   

[RFC PATCH 3/6] crypto: sha256: implement base layer for SHA-256

2015-03-30 Thread Ard Biesheuvel
To reduce the number of copies of boilerplate code throughout
the tree, this patch implements generic glue for the SHA-256
algorithm. This allows a specific arch or hardware implementation
to only implement the special handling that it needs.

Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 crypto/Kconfig   |   4 ++
 crypto/Makefile  |   1 +
 crypto/sha256_base.c | 138 +++
 include/crypto/sha.h |  17 +++
 4 files changed, 160 insertions(+)
 create mode 100644 crypto/sha256_base.c

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 880aa518c2eb..551bbf2e2ab5 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -602,6 +602,10 @@ config CRYPTO_SHA1_MB
  lanes remain unfilled, a flush operation will be initiated to
  process the crypto jobs, adding a slight latency.
 
+
+config CRYPTO_SHA256_BASE
+   tristate
+
 config CRYPTO_SHA256
	tristate "SHA224 and SHA256 digest algorithm"
select CRYPTO_HASH
diff --git a/crypto/Makefile b/crypto/Makefile
index 6174bf2592fe..bb9bafeb3ac7 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -44,6 +44,7 @@ obj-$(CONFIG_CRYPTO_RMD160) += rmd160.o
 obj-$(CONFIG_CRYPTO_RMD256) += rmd256.o
 obj-$(CONFIG_CRYPTO_RMD320) += rmd320.o
 obj-$(CONFIG_CRYPTO_SHA1) += sha1_generic.o
+obj-$(CONFIG_CRYPTO_SHA256_BASE) += sha256_base.o
 obj-$(CONFIG_CRYPTO_SHA256) += sha256_generic.o
 obj-$(CONFIG_CRYPTO_SHA512_BASE) += sha512_base.o
 obj-$(CONFIG_CRYPTO_SHA512) += sha512_generic.o
diff --git a/crypto/sha256_base.c b/crypto/sha256_base.c
new file mode 100644
index ..1ba2f6812c6b
--- /dev/null
+++ b/crypto/sha256_base.c
@@ -0,0 +1,138 @@
+/*
+ * sha256_base.c - core logic for SHA-256 implementations
+ *
+ * Copyright (C) 2015 Linaro Ltd ard.biesheu...@linaro.org
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <crypto/internal/hash.h>
+#include <crypto/sha.h>
+#include <linux/crypto.h>
+#include <linux/module.h>
+
+#include <asm/unaligned.h>
+
+int sha224_base_init(struct shash_desc *desc)
+{
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+
+   *sctx = (struct sha256_state){
+   .state = {
+   SHA224_H0, SHA224_H1, SHA224_H2, SHA224_H3,
+   SHA224_H4, SHA224_H5, SHA224_H6, SHA224_H7,
+   }
+   };
+   return 0;
+}
+EXPORT_SYMBOL(sha224_base_init);
+
+int sha256_base_init(struct shash_desc *desc)
+{
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+
+   *sctx = (struct sha256_state){
+   .state = {
+   SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3,
+   SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7,
+   }
+   };
+   return 0;
+}
+EXPORT_SYMBOL(sha256_base_init);
+
+int sha256_base_export(struct shash_desc *desc, void *out)
+{
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+   struct sha256_state *dst = out;
+
+   *dst = *sctx;
+
+   return 0;
+}
+EXPORT_SYMBOL(sha256_base_export);
+
+int sha256_base_import(struct shash_desc *desc, const void *in)
+{
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+   struct sha256_state const *src = in;
+
+   *sctx = *src;
+
+   return 0;
+}
+EXPORT_SYMBOL(sha256_base_import);
+
+int sha256_base_do_update(struct shash_desc *desc, const u8 *data,
+ unsigned int len, sha256_block_fn *block_fn, void *p)
+{
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+	unsigned int partial = sctx->count % SHA256_BLOCK_SIZE;
+
+	sctx->count += len;
+
+	if ((partial + len) >= SHA256_BLOCK_SIZE) {
+		int blocks;
+
+		if (partial) {
+			int p = SHA256_BLOCK_SIZE - partial;
+
+			memcpy(sctx->buf + partial, data, p);
+			data += p;
+			len -= p;
+		}
+
+		blocks = len / SHA256_BLOCK_SIZE;
+		len %= SHA256_BLOCK_SIZE;
+
+		block_fn(blocks, data, sctx->state,
+			 partial ? sctx->buf : NULL, p);
+		data += blocks * SHA256_BLOCK_SIZE;
+		partial = 0;
+	}
+	if (len)
+		memcpy(sctx->buf + partial, data, len);
+
+   return 0;
+}
+EXPORT_SYMBOL(sha256_base_do_update);
+
+int sha256_base_do_finalize(struct shash_desc *desc, sha256_block_fn *block_fn,
+   void *p)
+{
+   static const u8 padding[SHA256_BLOCK_SIZE] = { 0x80, };
+
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+   unsigned int padlen;
+   __be64 bits;
+
+	padlen = SHA256_BLOCK_SIZE -
+		 (sctx->count + sizeof(bits)) % SHA256_BLOCK_SIZE;
+
+	bits = cpu_to_be64(sctx->count << 3);
+
+   sha256_base_do_update(desc, 

[RFC PATCH 2/6] crypto: sha512-generic: move to generic glue implementation

2015-03-30 Thread Ard Biesheuvel
This updates the generic SHA-512 implementation to use the
generic shared SHA-512 glue code.

Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 crypto/Kconfig  |   1 +
 crypto/sha512_generic.c | 117 +++-
 2 files changed, 16 insertions(+), 102 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 3400cf4e3cdb..880aa518c2eb 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -646,6 +646,7 @@ config CRYPTO_SHA512_BASE
 
 config CRYPTO_SHA512
	tristate "SHA384 and SHA512 digest algorithms"
+   select CRYPTO_SHA512_BASE
select CRYPTO_HASH
help
  SHA512 secure hash standard (DFIPS 180-2).
diff --git a/crypto/sha512_generic.c b/crypto/sha512_generic.c
index 1c3c3767e079..0d8e973d0d4b 100644
--- a/crypto/sha512_generic.c
+++ b/crypto/sha512_generic.c
@@ -130,123 +130,36 @@ sha512_transform(u64 *state, const u8 *input)
a = b = c = d = e = f = g = h = t1 = t2 = 0;
 }
 
-static int
-sha512_init(struct shash_desc *desc)
-{
-   struct sha512_state *sctx = shash_desc_ctx(desc);
-   sctx-state[0] = SHA512_H0;
-   sctx-state[1] = SHA512_H1;
-   sctx-state[2] = SHA512_H2;
-   sctx-state[3] = SHA512_H3;
-   sctx-state[4] = SHA512_H4;
-   sctx-state[5] = SHA512_H5;
-   sctx-state[6] = SHA512_H6;
-   sctx-state[7] = SHA512_H7;
-   sctx-count[0] = sctx-count[1] = 0;
-
-   return 0;
-}
-
-static int
-sha384_init(struct shash_desc *desc)
+static void sha512_generic_block_fn(int blocks, u8 const *src, u64 *state,
+   const u8 *head, void *p)
 {
-   struct sha512_state *sctx = shash_desc_ctx(desc);
-   sctx-state[0] = SHA384_H0;
-   sctx-state[1] = SHA384_H1;
-   sctx-state[2] = SHA384_H2;
-   sctx-state[3] = SHA384_H3;
-   sctx-state[4] = SHA384_H4;
-   sctx-state[5] = SHA384_H5;
-   sctx-state[6] = SHA384_H6;
-   sctx-state[7] = SHA384_H7;
-   sctx-count[0] = sctx-count[1] = 0;
+   if (head)
+   sha512_transform(state, head);
 
-   return 0;
+   while (blocks--) {
+   sha512_transform(state, src);
+   src += SHA512_BLOCK_SIZE;
+   }
 }
 
 int crypto_sha512_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
 {
-   struct sha512_state *sctx = shash_desc_ctx(desc);
-
-   unsigned int i, index, part_len;
-
-   /* Compute number of bytes mod 128 */
-   index = sctx-count[0]  0x7f;
-
-   /* Update number of bytes */
-   if ((sctx-count[0] += len)  len)
-   sctx-count[1]++;
-
-part_len = 128 - index;
-
-   /* Transform as many times as possible. */
-   if (len = part_len) {
-   memcpy(sctx-buf[index], data, part_len);
-   sha512_transform(sctx-state, sctx-buf);
-
-   for (i = part_len; i + 127  len; i+=128)
-   sha512_transform(sctx-state, data[i]);
-
-   index = 0;
-   } else {
-   i = 0;
-   }
-
-   /* Buffer remaining input */
-   memcpy(sctx-buf[index], data[i], len - i);
-
-   return 0;
+   return sha512_base_do_update(desc, data, len, sha512_generic_block_fn,
+NULL);
 }
 EXPORT_SYMBOL(crypto_sha512_update);
 
 static int
 sha512_final(struct shash_desc *desc, u8 *hash)
 {
-   struct sha512_state *sctx = shash_desc_ctx(desc);
-static u8 padding[128] = { 0x80, };
-   __be64 *dst = (__be64 *)hash;
-   __be64 bits[2];
-   unsigned int index, pad_len;
-   int i;
-
-   /* Save number of bits */
-   bits[1] = cpu_to_be64(sctx-count[0]  3);
-   bits[0] = cpu_to_be64(sctx-count[1]  3 | sctx-count[0]  61);
-
-   /* Pad out to 112 mod 128. */
-   index = sctx-count[0]  0x7f;
-   pad_len = (index  112) ? (112 - index) : ((128+112) - index);
-   crypto_sha512_update(desc, padding, pad_len);
-
-   /* Append length (before padding) */
-   crypto_sha512_update(desc, (const u8 *)bits, sizeof(bits));
-
-   /* Store state in digest */
-   for (i = 0; i  8; i++)
-   dst[i] = cpu_to_be64(sctx-state[i]);
-
-   /* Zeroize sensitive information. */
-   memset(sctx, 0, sizeof(struct sha512_state));
-
-   return 0;
-}
-
-static int sha384_final(struct shash_desc *desc, u8 *hash)
-{
-   u8 D[64];
-
-   sha512_final(desc, D);
-
-   memcpy(hash, D, 48);
-   memzero_explicit(D, 64);
-
-   return 0;
+   sha512_base_do_finalize(desc, sha512_generic_block_fn, NULL);
+   return sha512_base_finish(desc, hash);
 }
 
 static struct shash_alg sha512_algs[2] = { {
.digestsize =   SHA512_DIGEST_SIZE,
-   .init   =   sha512_init,
+   .init   =   sha512_base_init,
.update =   crypto_sha512_update,
.final  =   sha512_final,
.descsize   =   

[PATCH v2 09/14] crypto/arm: move SHA-224/256 ARMv8 implementation to base layer

2015-03-30 Thread Ard Biesheuvel
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 arch/arm/crypto/Kconfig|   1 +
 arch/arm/crypto/sha2-ce-glue.c | 151 +
 2 files changed, 33 insertions(+), 119 deletions(-)

diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
index 31ad19f18af2..de91f0447240 100644
--- a/arch/arm/crypto/Kconfig
+++ b/arch/arm/crypto/Kconfig
@@ -42,6 +42,7 @@ config CRYPTO_SHA2_ARM_CE
	tristate "SHA-224/256 digest algorithm (ARM v8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_SHA256
+   select CRYPTO_SHA256_BASE
select CRYPTO_HASH
help
  SHA-256 secure hash standard (DFIPS 180-2) implemented
diff --git a/arch/arm/crypto/sha2-ce-glue.c b/arch/arm/crypto/sha2-ce-glue.c
index 9ffe8ad27402..df57192c41cd 100644
--- a/arch/arm/crypto/sha2-ce-glue.c
+++ b/arch/arm/crypto/sha2-ce-glue.c
@@ -23,140 +23,52 @@ MODULE_AUTHOR(Ard Biesheuvel 
ard.biesheu...@linaro.org);
 MODULE_LICENSE(GPL v2);
 
 asmlinkage void sha2_ce_transform(int blocks, u8 const *src, u32 *state,
- u8 *head);
+ const u8 *head, void *p);
 
-static int sha224_init(struct shash_desc *desc)
+static int sha2_ce_update(struct shash_desc *desc, const u8 *data,
+ unsigned int len)
 {
struct sha256_state *sctx = shash_desc_ctx(desc);
 
-   *sctx = (struct sha256_state){
-   .state = {
-   SHA224_H0, SHA224_H1, SHA224_H2, SHA224_H3,
-   SHA224_H4, SHA224_H5, SHA224_H6, SHA224_H7,
-   }
-   };
-   return 0;
-}
+   if (!may_use_simd() ||
+	    (sctx->count % SHA256_BLOCK_SIZE) + len < SHA256_BLOCK_SIZE)
+   return crypto_sha256_update(desc, data, len);
 
-static int sha256_init(struct shash_desc *desc)
-{
-   struct sha256_state *sctx = shash_desc_ctx(desc);
+   kernel_neon_begin();
+   crypto_sha256_base_do_update(desc, data, len, sha2_ce_transform, NULL);
+   kernel_neon_end();
 
-   *sctx = (struct sha256_state){
-   .state = {
-   SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3,
-   SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7,
-   }
-   };
return 0;
 }
 
-static int sha2_update(struct shash_desc *desc, const u8 *data,
-  unsigned int len)
+static int sha2_ce_finup(struct shash_desc *desc, const u8 *data,
+unsigned int len, u8 *out)
 {
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   unsigned int partial;
-
if (!may_use_simd())
-   return crypto_sha256_update(desc, data, len);
-
-   partial = sctx->count % SHA256_BLOCK_SIZE;
-   sctx->count += len;
-
-   if ((partial + len) >= SHA256_BLOCK_SIZE) {
-   int blocks;
-
-   if (partial) {
-   int p = SHA256_BLOCK_SIZE - partial;
-
-   memcpy(sctx->buf + partial, data, p);
-   data += p;
-   len -= p;
-   }
+   return crypto_sha256_finup(desc, data, len, out);
 
-   blocks = len / SHA256_BLOCK_SIZE;
-   len %= SHA256_BLOCK_SIZE;
-
-   kernel_neon_begin();
-   sha2_ce_transform(blocks, data, sctx->state,
-		      partial ? sctx->buf : NULL);
-   kernel_neon_end();
-
-   data += blocks * SHA256_BLOCK_SIZE;
-   partial = 0;
-   }
+   kernel_neon_begin();
if (len)
-   memcpy(sctx->buf + partial, data, len);
-   return 0;
-}
-
-static void sha2_final(struct shash_desc *desc)
-{
-   static const u8 padding[SHA256_BLOCK_SIZE] = { 0x80, };
-
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   __be64 bits = cpu_to_be64(sctx->count << 3);
-   u32 padlen = SHA256_BLOCK_SIZE
-		- ((sctx->count + sizeof(bits)) % SHA256_BLOCK_SIZE);
-
-   sha2_update(desc, padding, padlen);
-   sha2_update(desc, (const u8 *)&bits, sizeof(bits));
-}
+   crypto_sha256_base_do_update(desc, data, len,
+sha2_ce_transform, NULL);
+   crypto_sha256_base_do_finalize(desc, sha2_ce_transform, NULL);
+   kernel_neon_end();
 
-static int sha224_final(struct shash_desc *desc, u8 *out)
-{
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   __be32 *dst = (__be32 *)out;
-   int i;
-
-   sha2_final(desc);
-
-   for (i = 0; i < SHA224_DIGEST_SIZE / sizeof(__be32); i++)
-	put_unaligned_be32(sctx->state[i], dst++);
-
-   *sctx = (struct sha256_state){};
-   return 0;
+   return crypto_sha256_base_finish(desc, out);
 }
 
-static int sha256_final(struct shash_desc *desc, u8 *out)
+static int sha2_ce_final(struct shash_desc *desc, u8 *out)
 {
-   struct 

[RFC PATCH 6/6] arm/crypto: accelerated SHA-512 using ARM generic ASM and NEON

2015-03-30 Thread Ard Biesheuvel
This updates the SHA-512 NEON module with the faster and more
versatile implementation from the OpenSSL project. It consists
of both a NEON and a generic ASM version of the core SHA-512
transform, where the NEON version reverts to the ASM version
when invoked in non-process context.
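
(Roughly, that fallback amounts to the following; this is only a sketch and the
function names are illustrative, not the actual symbols in this patch:)

static int sha512_neon_update(struct shash_desc *desc, const u8 *data,
			      unsigned int len)
{
	/* NEON may not be used in hard/soft interrupt context */
	if (!may_use_simd())
		return sha512_arm_update(desc, data, len);	/* plain ASM path */

	kernel_neon_begin();
	crypto_sha512_base_do_update(desc, data, len,
				     sha512_neon_transform, NULL);
	kernel_neon_end();

	return 0;
}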

Performance relative to the generic implementation (measured
using tcrypt.ko mode=306 sec=1 running on a Cortex-A57 under
KVM):

  input size  block size  asm   neon  old neon

  16          16          1.39  2.54  2.21
  64          16          1.32  2.33  2.09
  64          64          1.38  2.53  2.19
  256         16          1.31  2.28  2.06
  256         64          1.38  2.54  2.25
  256         256         1.40  2.77  2.39
  1024        16          1.29  2.22  2.01
  1024        256         1.40  2.82  2.45
  1024        1024        1.41  2.93  2.53
  2048        16          1.33  2.21  2.00
  2048        256         1.40  2.84  2.46
  2048        1024        1.41  2.96  2.55
  2048        2048        1.41  2.98  2.56
  4096        16          1.34  2.20  1.99
  4096        256         1.40  2.84  2.46
  4096        1024        1.41  2.97  2.56
  4096        4096        1.41  3.01  2.58
  8192        16          1.34  2.19  1.99
  8192        256         1.40  2.85  2.47
  8192        1024        1.41  2.98  2.56
  8192        4096        1.41  2.71  2.59
  8192        8192        1.51  3.51  2.69

Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 arch/arm/crypto/Kconfig   |8 +
 arch/arm/crypto/Makefile  |8 +-
 arch/arm/crypto/sha512-armv4.pl   |  656 
 arch/arm/crypto/sha512-core.S_shipped | 1814 +
 arch/arm/crypto/sha512-glue.c |  137 +++
 arch/arm/crypto/sha512-neon-glue.c|  111 ++
 arch/arm/crypto/sha512.h  |8 +
 7 files changed, 2741 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm/crypto/sha512-armv4.pl
 create mode 100644 arch/arm/crypto/sha512-core.S_shipped
 create mode 100644 arch/arm/crypto/sha512-glue.c
 create mode 100644 arch/arm/crypto/sha512-neon-glue.c
 create mode 100644 arch/arm/crypto/sha512.h

diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
index 458729d2ce22..6b50c6d77b77 100644
--- a/arch/arm/crypto/Kconfig
+++ b/arch/arm/crypto/Kconfig
@@ -53,6 +53,14 @@ config CRYPTO_SHA256_ARM
  SHA-256 secure hash standard (DFIPS 180-2) implemented
  using optimized ARM assembler and NEON, when available.
 
+config CRYPTO_SHA512_ARM
+	tristate "SHA-384/512 digest algorithm (ARM-asm and NEON)"
+   select CRYPTO_HASH
+   select CRYPTO_SHA512_BASE
+   help
+ SHA-512 secure hash standard (DFIPS 180-2) implemented
+ using optimized ARM assembler and NEON, when available.
+
 config CRYPTO_SHA512_ARM_NEON
	tristate "SHA384 and SHA512 digest algorithm (ARM NEON)"
depends on KERNEL_MODE_NEON
diff --git a/arch/arm/crypto/Makefile b/arch/arm/crypto/Makefile
index ef46e898f98b..322a6ca999a2 100644
--- a/arch/arm/crypto/Makefile
+++ b/arch/arm/crypto/Makefile
@@ -8,6 +8,7 @@ obj-$(CONFIG_CRYPTO_AES_ARM_CE) += aes-arm-ce.o
 obj-$(CONFIG_CRYPTO_SHA1_ARM) += sha1-arm.o
 obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o
 obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o
+obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o
 obj-$(CONFIG_CRYPTO_SHA512_ARM_NEON) += sha512-arm-neon.o
 obj-$(CONFIG_CRYPTO_SHA1_ARM_CE) += sha1-arm-ce.o
 obj-$(CONFIG_CRYPTO_SHA2_ARM_CE) += sha2-arm-ce.o
@@ -19,6 +20,8 @@ sha1-arm-y:= sha1-armv4-large.o sha1_glue.o
 sha1-arm-neon-y:= sha1-armv7-neon.o sha1_neon_glue.o
 sha256-arm-neon-$(CONFIG_KERNEL_MODE_NEON) := sha256_neon_glue.o
 sha256-arm-y   := sha256-core.o sha256_glue.o $(sha256-arm-neon-y)
+sha512-arm-neon-$(CONFIG_KERNEL_MODE_NEON) := sha512-neon-glue.o
+sha512-arm-y   := sha512-core.o sha512-glue.o $(sha512-arm-neon-y)
 sha512-arm-neon-y := sha512-armv7-neon.o sha512_neon_glue.o
 sha1-arm-ce-y  := sha1-ce-core.o sha1-ce-glue.o
 sha2-arm-ce-y  := sha2-ce-core.o sha2-ce-glue.o
@@ -34,4 +37,7 @@ $(src)/aesbs-core.S_shipped: $(src)/bsaes-armv7.pl
 $(src)/sha256-core.S_shipped: $(src)/sha256-armv4.pl
$(call cmd,perl)
 
-.PRECIOUS: $(obj)/aesbs-core.S $(obj)/sha256-core.S
+$(src)/sha512-core.S_shipped: $(src)/sha512-armv4.pl
+   $(call cmd,perl)
+
+.PRECIOUS: $(obj)/aesbs-core.S $(obj)/sha256-core.S $(obj)/sha512-core.S
diff --git a/arch/arm/crypto/sha512-armv4.pl b/arch/arm/crypto/sha512-armv4.pl
new file mode 100644
index ..7e540f8439da
--- /dev/null
+++ b/arch/arm/crypto/sha512-armv4.pl
@@ -0,0 +1,656 @@
+#!/usr/bin/env perl
+
+# 
+# Written by Andy Polyakov 

[PATCH v2 02/14] crypto: sha256: implement base layer for SHA-256

2015-03-30 Thread Ard Biesheuvel
To reduce the number of copies of boilerplate code throughout
the tree, this patch implements generic glue for the SHA-256
algorithm. This allows a specific arch or hardware implementation
to only implement the special handling that it needs.

Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 crypto/Kconfig   |   4 ++
 crypto/Makefile  |   1 +
 crypto/sha256_base.c | 140 +++
 include/crypto/sha.h |  17 +++
 4 files changed, 162 insertions(+)
 create mode 100644 crypto/sha256_base.c

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 3400cf4e3cdb..1664bd68b97d 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -602,6 +602,10 @@ config CRYPTO_SHA1_MB
  lanes remain unfilled, a flush operation will be initiated to
  process the crypto jobs, adding a slight latency.
 
+
+config CRYPTO_SHA256_BASE
+   tristate
+
 config CRYPTO_SHA256
	tristate "SHA224 and SHA256 digest algorithm"
select CRYPTO_HASH
diff --git a/crypto/Makefile b/crypto/Makefile
index 6174bf2592fe..bb9bafeb3ac7 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -44,6 +44,7 @@ obj-$(CONFIG_CRYPTO_RMD160) += rmd160.o
 obj-$(CONFIG_CRYPTO_RMD256) += rmd256.o
 obj-$(CONFIG_CRYPTO_RMD320) += rmd320.o
 obj-$(CONFIG_CRYPTO_SHA1) += sha1_generic.o
+obj-$(CONFIG_CRYPTO_SHA256_BASE) += sha256_base.o
 obj-$(CONFIG_CRYPTO_SHA256) += sha256_generic.o
 obj-$(CONFIG_CRYPTO_SHA512_BASE) += sha512_base.o
 obj-$(CONFIG_CRYPTO_SHA512) += sha512_generic.o
diff --git a/crypto/sha256_base.c b/crypto/sha256_base.c
new file mode 100644
index ..5fd728066912
--- /dev/null
+++ b/crypto/sha256_base.c
@@ -0,0 +1,140 @@
+/*
+ * sha256_base.c - core logic for SHA-256 implementations
+ *
+ * Copyright (C) 2015 Linaro Ltd ard.biesheu...@linaro.org
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <crypto/internal/hash.h>
+#include <crypto/sha.h>
+#include <linux/crypto.h>
+#include <linux/module.h>
+
+#include <asm/unaligned.h>
+
+int crypto_sha224_base_init(struct shash_desc *desc)
+{
+   static const u32 sha224_init_state[] = {
+   SHA224_H0, SHA224_H1, SHA224_H2, SHA224_H3,
+   SHA224_H4, SHA224_H5, SHA224_H6, SHA224_H7,
+   };
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+
+	memcpy(sctx->state, sha224_init_state, sizeof(sctx->state));
+	sctx->count = 0;
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha224_base_init);
+
+int crypto_sha256_base_init(struct shash_desc *desc)
+{
+   static const u32 sha256_init_state[] = {
+   SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3,
+   SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7,
+   };
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+
+	memcpy(sctx->state, sha256_init_state, sizeof(sctx->state));
+	sctx->count = 0;
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha256_base_init);
+
+int crypto_sha256_base_export(struct shash_desc *desc, void *out)
+{
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+   struct sha256_state *dst = out;
+
+   *dst = *sctx;
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha256_base_export);
+
+int crypto_sha256_base_import(struct shash_desc *desc, const void *in)
+{
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+   struct sha256_state const *src = in;
+
+   *sctx = *src;
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha256_base_import);
+
+int crypto_sha256_base_do_update(struct shash_desc *desc, const u8 *data,
+unsigned int len, sha256_block_fn *block_fn,
+void *p)
+{
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+	unsigned int partial = sctx->count % SHA256_BLOCK_SIZE;
+
+	sctx->count += len;
+
+	if (unlikely((partial + len) >= SHA256_BLOCK_SIZE)) {
+		int blocks;
+
+		if (partial) {
+			int p = SHA256_BLOCK_SIZE - partial;
+
+			memcpy(sctx->buf + partial, data, p);
+			data += p;
+			len -= p;
+		}
+
+		blocks = len / SHA256_BLOCK_SIZE;
+		len %= SHA256_BLOCK_SIZE;
+
+		block_fn(blocks, data, sctx->state,
+			 partial ? sctx->buf : NULL, p);
+		data += blocks * SHA256_BLOCK_SIZE;
+		partial = 0;
+	}
+	if (len)
+		memcpy(sctx->buf + partial, data, len);
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha256_base_do_update);
+
+int crypto_sha256_base_do_finalize(struct shash_desc *desc,
+  sha256_block_fn *block_fn, void *p)
+{
+   const int bit_offset = SHA256_BLOCK_SIZE - sizeof(__be64);
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+   __be64 *bits = 

[RFC PATCH 4/6] crypto: sha256-generic: move to generic glue implementation

2015-03-30 Thread Ard Biesheuvel
This updates the generic SHA-256 implementation to use the
new shared SHA-256 glue code.

Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 crypto/Kconfig  |   1 +
 crypto/sha256_generic.c | 131 +++-
 2 files changed, 18 insertions(+), 114 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 551bbf2e2ab5..59243df4ea13 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -608,6 +608,7 @@ config CRYPTO_SHA256_BASE
 
 config CRYPTO_SHA256
	tristate "SHA224 and SHA256 digest algorithm"
+   select CRYPTO_SHA256_BASE
select CRYPTO_HASH
help
  SHA256 secure hash standard (DFIPS 180-2).
diff --git a/crypto/sha256_generic.c b/crypto/sha256_generic.c
index b001ff5c2efc..7119346c2f41 100644
--- a/crypto/sha256_generic.c
+++ b/crypto/sha256_generic.c
@@ -214,136 +214,39 @@ static void sha256_transform(u32 *state, const u8 *input)
memzero_explicit(W, 64 * sizeof(u32));
 }
 
-static int sha224_init(struct shash_desc *desc)
+static void sha256_generic_block_fn(int blocks, u8 const *src, u32 *state,
+   const u8 *head, void *p)
 {
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   sctx->state[0] = SHA224_H0;
-   sctx->state[1] = SHA224_H1;
-   sctx->state[2] = SHA224_H2;
-   sctx->state[3] = SHA224_H3;
-   sctx->state[4] = SHA224_H4;
-   sctx->state[5] = SHA224_H5;
-   sctx->state[6] = SHA224_H6;
-   sctx->state[7] = SHA224_H7;
-   sctx->count = 0;
+   if (head)
+   sha256_transform(state, head);
 
-   return 0;
-}
-
-static int sha256_init(struct shash_desc *desc)
-{
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   sctx->state[0] = SHA256_H0;
-   sctx->state[1] = SHA256_H1;
-   sctx->state[2] = SHA256_H2;
-   sctx->state[3] = SHA256_H3;
-   sctx->state[4] = SHA256_H4;
-   sctx->state[5] = SHA256_H5;
-   sctx->state[6] = SHA256_H6;
-   sctx->state[7] = SHA256_H7;
-   sctx->count = 0;
-
-   return 0;
+   while (blocks--) {
+   sha256_transform(state, src);
+   src += SHA256_BLOCK_SIZE;
+   }
 }
 
 int crypto_sha256_update(struct shash_desc *desc, const u8 *data,
  unsigned int len)
 {
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   unsigned int partial, done;
-   const u8 *src;
-
-   partial = sctx->count & 0x3f;
-   sctx->count += len;
-   done = 0;
-   src = data;
-
-   if ((partial + len) > 63) {
-   if (partial) {
-   done = -partial;
-   memcpy(sctx->buf + partial, data, done + 64);
-   src = sctx->buf;
-   }
-
-   do {
-   sha256_transform(sctx->state, src);
-   done += 64;
-   src = data + done;
-   } while (done + 63 < len);
-
-   partial = 0;
-   }
-   memcpy(sctx->buf + partial, src, len - done);
-
-   return 0;
+   return sha256_base_do_update(desc, data, len, sha256_generic_block_fn,
+NULL);
 }
 EXPORT_SYMBOL(crypto_sha256_update);
 
 static int sha256_final(struct shash_desc *desc, u8 *out)
 {
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   __be32 *dst = (__be32 *)out;
-   __be64 bits;
-   unsigned int index, pad_len;
-   int i;
-   static const u8 padding[64] = { 0x80, };
-
-   /* Save number of bits */
-   bits = cpu_to_be64(sctx->count << 3);
-
-   /* Pad out to 56 mod 64. */
-   index = sctx->count & 0x3f;
-   pad_len = (index < 56) ? (56 - index) : ((64+56) - index);
-   crypto_sha256_update(desc, padding, pad_len);
-
-   /* Append length (before padding) */
-   crypto_sha256_update(desc, (const u8 *)&bits, sizeof(bits));
-
-   /* Store state in digest */
-   for (i = 0; i < 8; i++)
-   dst[i] = cpu_to_be32(sctx->state[i]);
-
-   /* Zeroize sensitive information. */
-   memset(sctx, 0, sizeof(*sctx));
-
-   return 0;
-}
-
-static int sha224_final(struct shash_desc *desc, u8 *hash)
-{
-   u8 D[SHA256_DIGEST_SIZE];
-
-   sha256_final(desc, D);
-
-   memcpy(hash, D, SHA224_DIGEST_SIZE);
-   memzero_explicit(D, SHA256_DIGEST_SIZE);
-
-   return 0;
-}
-
-static int sha256_export(struct shash_desc *desc, void *out)
-{
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-
-   memcpy(out, sctx, sizeof(*sctx));
-   return 0;
-}
-
-static int sha256_import(struct shash_desc *desc, const void *in)
-{
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-
-   memcpy(sctx, in, sizeof(*sctx));
-   return 0;
+   sha256_base_do_finalize(desc, sha256_generic_block_fn, NULL);
+   return sha256_base_finish(desc, out);
 }
 
 static struct shash_alg sha256_algs[2] = { {
.digestsize =   SHA256_DIGEST_SIZE,
-   

[PATCH v2 resend 01/14] crypto: sha512: implement base layer for SHA-512

2015-03-30 Thread Ard Biesheuvel
To reduce the number of copies of boilerplate code throughout
the tree, this patch implements generic glue for the SHA-512
algorithm. This allows a specific arch or hardware implementation
to only implement the special handling that it needs.
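
For readers unfamiliar with the scheme: the base layer owns the buffering,
byte counting and padding, and only calls back into a block function supplied
by the implementation. A minimal sketch of such a block function follows
(illustrative only, not part of this patch; my_sha512_transform stands in for
whatever single-block transform an implementation actually provides):

  static void my_sha512_block_fn(int blocks, u8 const *src, u64 *state,
                                 const u8 *head, void *p)
  {
          if (head)                       /* process the buffered head block first */
                  my_sha512_transform(state, head);

          while (blocks--) {              /* then 'blocks' full blocks starting at 'src' */
                  my_sha512_transform(state, src);
                  src += SHA512_BLOCK_SIZE;
          }
  }

The generic pointer 'p' is passed through unchanged from the do_update and
do_finalize helpers, so implementations that have no use for it (like this
sketch) can simply ignore it.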

Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 crypto/Kconfig   |   3 ++
 crypto/Makefile  |   1 +
 crypto/sha512_base.c | 143 +++
 include/crypto/sha.h |  20 +++
 4 files changed, 167 insertions(+)
 create mode 100644 crypto/sha512_base.c

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 88639937a934..3400cf4e3cdb 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -641,6 +641,9 @@ config CRYPTO_SHA256_SPARC64
  SHA-256 secure hash standard (DFIPS 180-2) implemented
  using sparc64 crypto instructions, when available.
 
+config CRYPTO_SHA512_BASE
+   tristate
+
 config CRYPTO_SHA512
	tristate "SHA384 and SHA512 digest algorithms"
select CRYPTO_HASH
diff --git a/crypto/Makefile b/crypto/Makefile
index 97b7d3ac87e7..6174bf2592fe 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -45,6 +45,7 @@ obj-$(CONFIG_CRYPTO_RMD256) += rmd256.o
 obj-$(CONFIG_CRYPTO_RMD320) += rmd320.o
 obj-$(CONFIG_CRYPTO_SHA1) += sha1_generic.o
 obj-$(CONFIG_CRYPTO_SHA256) += sha256_generic.o
+obj-$(CONFIG_CRYPTO_SHA512_BASE) += sha512_base.o
 obj-$(CONFIG_CRYPTO_SHA512) += sha512_generic.o
 obj-$(CONFIG_CRYPTO_WP512) += wp512.o
 obj-$(CONFIG_CRYPTO_TGR192) += tgr192.o
diff --git a/crypto/sha512_base.c b/crypto/sha512_base.c
new file mode 100644
index ..9a60829e06c4
--- /dev/null
+++ b/crypto/sha512_base.c
@@ -0,0 +1,143 @@
+/*
+ * sha512_base.c - core logic for SHA-512 implementations
+ *
+ * Copyright (C) 2015 Linaro Ltd ard.biesheu...@linaro.org
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <crypto/internal/hash.h>
+#include <crypto/sha.h>
+#include <linux/crypto.h>
+#include <linux/module.h>
+
+#include <asm/unaligned.h>
+
+int crypto_sha384_base_init(struct shash_desc *desc)
+{
+   static const u64 sha384_init_state[] = {
+   SHA384_H0, SHA384_H1, SHA384_H2, SHA384_H3,
+   SHA384_H4, SHA384_H5, SHA384_H6, SHA384_H7,
+   };
+   struct sha512_state *sctx = shash_desc_ctx(desc);
+
+   memcpy(sctx->state, sha384_init_state, sizeof(sctx->state));
+   sctx->count[0] = sctx->count[1] = 0;
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha384_base_init);
+
+int crypto_sha512_base_init(struct shash_desc *desc)
+{
+   static const u64 sha512_init_state[] = {
+   SHA512_H0, SHA512_H1, SHA512_H2, SHA512_H3,
+   SHA512_H4, SHA512_H5, SHA512_H6, SHA512_H7,
+   };
+   struct sha512_state *sctx = shash_desc_ctx(desc);
+
+   memcpy(sctx->state, sha512_init_state, sizeof(sctx->state));
+   sctx->count[0] = sctx->count[1] = 0;
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha512_base_init);
+
+int crypto_sha512_base_export(struct shash_desc *desc, void *out)
+{
+   struct sha512_state *sctx = shash_desc_ctx(desc);
+   struct sha512_state *dst = out;
+
+   *dst = *sctx;
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha512_base_export);
+
+int crypto_sha512_base_import(struct shash_desc *desc, const void *in)
+{
+   struct sha512_state *sctx = shash_desc_ctx(desc);
+   struct sha512_state const *src = in;
+
+   *sctx = *src;
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha512_base_import);
+
+int crypto_sha512_base_do_update(struct shash_desc *desc, const u8 *data,
+unsigned int len, sha512_block_fn *block_fn,
+void *p)
+{
+   struct sha512_state *sctx = shash_desc_ctx(desc);
+   unsigned int partial = sctx->count[0] % SHA512_BLOCK_SIZE;
+
+   sctx->count[0] += len;
+   if (sctx->count[0] < len)
+   sctx->count[1]++;
+
+   if (unlikely((partial + len) >= SHA512_BLOCK_SIZE)) {
+   int blocks;
+
+   if (partial) {
+   int p = SHA512_BLOCK_SIZE - partial;
+
+   memcpy(sctx->buf + partial, data, p);
+   data += p;
+   len -= p;
+   }
+
+   blocks = len / SHA512_BLOCK_SIZE;
+   len %= SHA512_BLOCK_SIZE;
+
+   block_fn(blocks, data, sctx->state,
+            partial ? sctx->buf : NULL, p);
+   data += blocks * SHA512_BLOCK_SIZE;
+   partial = 0;
+   }
+   if (len)
+   memcpy(sctx->buf + partial, data, len);
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha512_base_do_update);
+
+int crypto_sha512_base_do_finalize(struct shash_desc *desc,
+  sha512_block_fn *block_fn, void *p)
+{
+   const int bit_offset = SHA512_BLOCK_SIZE - 

[PATCH v2 resend 05/14] crypto: sha256-generic: move to generic glue implementation

2015-03-30 Thread Ard Biesheuvel
This updates the generic SHA-256 implementation to use the
new shared SHA-256 glue code.

It also implements a .finup hook crypto_sha256_finup() and exports
it to other modules.
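
For orientation (this mirrors the hunk added below; my_block_fn is a
placeholder for the implementation's block function), a finup hook built on
the base layer is just the composition of the do_update, do_finalize and
finish helpers:

  int my_sha256_finup(struct shash_desc *desc, const u8 *data,
                      unsigned int len, u8 *out)
  {
          if (len)        /* absorb any trailing input first */
                  crypto_sha256_base_do_update(desc, data, len, my_block_fn, NULL);
          crypto_sha256_base_do_finalize(desc, my_block_fn, NULL); /* pad + length */
          return crypto_sha256_base_finish(desc, out);    /* write out the digest */
  }

A plain .final implementation then degenerates to calling the same function
with len == 0.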

Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 crypto/Kconfig  |   1 +
 crypto/sha256_generic.c | 139 ++--
 include/crypto/sha.h|   3 ++
 3 files changed, 31 insertions(+), 112 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 83bc1680391a..72bf5af7240d 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -611,6 +611,7 @@ config CRYPTO_SHA256_BASE
 
 config CRYPTO_SHA256
	tristate "SHA224 and SHA256 digest algorithm"
+   select CRYPTO_SHA256_BASE
select CRYPTO_HASH
help
  SHA256 secure hash standard (DFIPS 180-2).
diff --git a/crypto/sha256_generic.c b/crypto/sha256_generic.c
index b001ff5c2efc..d5c18c08b3da 100644
--- a/crypto/sha256_generic.c
+++ b/crypto/sha256_generic.c
@@ -214,136 +214,50 @@ static void sha256_transform(u32 *state, const u8 *input)
memzero_explicit(W, 64 * sizeof(u32));
 }
 
-static int sha224_init(struct shash_desc *desc)
+static void sha256_generic_block_fn(int blocks, u8 const *src, u32 *state,
+   const u8 *head, void *p)
 {
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   sctx->state[0] = SHA224_H0;
-   sctx->state[1] = SHA224_H1;
-   sctx->state[2] = SHA224_H2;
-   sctx->state[3] = SHA224_H3;
-   sctx->state[4] = SHA224_H4;
-   sctx->state[5] = SHA224_H5;
-   sctx->state[6] = SHA224_H6;
-   sctx->state[7] = SHA224_H7;
-   sctx->count = 0;
+   if (head)
+   sha256_transform(state, head);
 
-   return 0;
-}
-
-static int sha256_init(struct shash_desc *desc)
-{
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   sctx->state[0] = SHA256_H0;
-   sctx->state[1] = SHA256_H1;
-   sctx->state[2] = SHA256_H2;
-   sctx->state[3] = SHA256_H3;
-   sctx->state[4] = SHA256_H4;
-   sctx->state[5] = SHA256_H5;
-   sctx->state[6] = SHA256_H6;
-   sctx->state[7] = SHA256_H7;
-   sctx->count = 0;
-
-   return 0;
+   while (blocks--) {
+   sha256_transform(state, src);
+   src += SHA256_BLOCK_SIZE;
+   }
 }
 
 int crypto_sha256_update(struct shash_desc *desc, const u8 *data,
  unsigned int len)
 {
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   unsigned int partial, done;
-   const u8 *src;
-
-   partial = sctx->count & 0x3f;
-   sctx->count += len;
-   done = 0;
-   src = data;
-
-   if ((partial + len) > 63) {
-   if (partial) {
-   done = -partial;
-   memcpy(sctx->buf + partial, data, done + 64);
-   src = sctx->buf;
-   }
-
-   do {
-   sha256_transform(sctx->state, src);
-   done += 64;
-   src = data + done;
-   } while (done + 63 < len);
-
-   partial = 0;
-   }
-   memcpy(sctx->buf + partial, src, len - done);
-
-   return 0;
+   return crypto_sha256_base_do_update(desc, data, len,
+   sha256_generic_block_fn, NULL);
 }
 EXPORT_SYMBOL(crypto_sha256_update);
 
-static int sha256_final(struct shash_desc *desc, u8 *out)
-{
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   __be32 *dst = (__be32 *)out;
-   __be64 bits;
-   unsigned int index, pad_len;
-   int i;
-   static const u8 padding[64] = { 0x80, };
-
-   /* Save number of bits */
-   bits = cpu_to_be64(sctx->count << 3);
-
-   /* Pad out to 56 mod 64. */
-   index = sctx->count & 0x3f;
-   pad_len = (index < 56) ? (56 - index) : ((64+56) - index);
-   crypto_sha256_update(desc, padding, pad_len);
-
-   /* Append length (before padding) */
-   crypto_sha256_update(desc, (const u8 *)&bits, sizeof(bits));
-
-   /* Store state in digest */
-   for (i = 0; i < 8; i++)
-   dst[i] = cpu_to_be32(sctx->state[i]);
-
-   /* Zeroize sensitive information. */
-   memset(sctx, 0, sizeof(*sctx));
-
-   return 0;
-}
-
-static int sha224_final(struct shash_desc *desc, u8 *hash)
-{
-   u8 D[SHA256_DIGEST_SIZE];
-
-   sha256_final(desc, D);
-
-   memcpy(hash, D, SHA224_DIGEST_SIZE);
-   memzero_explicit(D, SHA256_DIGEST_SIZE);
-
-   return 0;
-}
-
-static int sha256_export(struct shash_desc *desc, void *out)
+int crypto_sha256_finup(struct shash_desc *desc, const u8 *data,
+   unsigned int len, u8 *hash)
 {
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-
-   memcpy(out, sctx, sizeof(*sctx));
-   return 0;
+   if (len)
+   crypto_sha256_base_do_update(desc, data, len,
+sha256_generic_block_fn, NULL);
+ 

[PATCH v2 resend 07/14] crypto/arm: move SHA-1 ARM asm implementation to base layer

2015-03-30 Thread Ard Biesheuvel
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 arch/arm/crypto/Kconfig  |   1 +
 arch/arm/{include/asm => }/crypto/sha1.h |   3 +
 arch/arm/crypto/sha1_glue.c  | 117 +++
 3 files changed, 28 insertions(+), 93 deletions(-)
 rename arch/arm/{include/asm => }/crypto/sha1.h (67%)

diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
index d63f319924d2..c111d8992afb 100644
--- a/arch/arm/crypto/Kconfig
+++ b/arch/arm/crypto/Kconfig
@@ -11,6 +11,7 @@ if ARM_CRYPTO
 config CRYPTO_SHA1_ARM
	tristate "SHA1 digest algorithm (ARM-asm)"
select CRYPTO_SHA1
+   select CRYPTO_SHA1_BASE
select CRYPTO_HASH
help
  SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2) implemented
diff --git a/arch/arm/include/asm/crypto/sha1.h b/arch/arm/crypto/sha1.h
similarity index 67%
rename from arch/arm/include/asm/crypto/sha1.h
rename to arch/arm/crypto/sha1.h
index 75e6a417416b..ffd8bd08b1a7 100644
--- a/arch/arm/include/asm/crypto/sha1.h
+++ b/arch/arm/crypto/sha1.h
@@ -7,4 +7,7 @@
 extern int sha1_update_arm(struct shash_desc *desc, const u8 *data,
   unsigned int len);
 
+extern int sha1_finup_arm(struct shash_desc *desc, const u8 *data,
+  unsigned int len, u8 *out);
+
 #endif
diff --git a/arch/arm/crypto/sha1_glue.c b/arch/arm/crypto/sha1_glue.c
index e31b0440c613..b6a78be0367f 100644
--- a/arch/arm/crypto/sha1_glue.c
+++ b/arch/arm/crypto/sha1_glue.c
@@ -23,124 +23,55 @@
#include <linux/types.h>
#include <crypto/sha.h>
#include <asm/byteorder.h>
-#include <asm/crypto/sha1.h>

+#include "sha1.h"
 
 asmlinkage void sha1_block_data_order(u32 *digest,
const unsigned char *data, unsigned int rounds);
 
-
-static int sha1_init(struct shash_desc *desc)
-{
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-
-   *sctx = (struct sha1_state){
-   .state = { SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4 },
-   };
-
-   return 0;
-}
-
-
-static int __sha1_update(struct sha1_state *sctx, const u8 *data,
-unsigned int len, unsigned int partial)
+static void sha1_arm_block_fn(int blocks, u8 const *src, u32 *state,
+ const u8 *head, void *p)
 {
-   unsigned int done = 0;
-
-   sctx->count += len;
-
-   if (partial) {
-   done = SHA1_BLOCK_SIZE - partial;
-   memcpy(sctx->buffer + partial, data, done);
-   sha1_block_data_order(sctx->state, sctx->buffer, 1);
-   }
-
-   if (len - done >= SHA1_BLOCK_SIZE) {
-   const unsigned int rounds = (len - done) / SHA1_BLOCK_SIZE;
-   sha1_block_data_order(sctx->state, data + done, rounds);
-   done += rounds * SHA1_BLOCK_SIZE;
-   }
-
-   memcpy(sctx->buffer, data + done, len - done);
-   return 0;
+   if (head)
+   sha1_block_data_order(state, head, 1);
+   if (blocks)
+   sha1_block_data_order(state, src, blocks);
 }
 
-
 int sha1_update_arm(struct shash_desc *desc, const u8 *data,
unsigned int len)
 {
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
-   int res;
-
-   /* Handle the fast case right here */
-   if (partial + len < SHA1_BLOCK_SIZE) {
-   sctx->count += len;
-   memcpy(sctx->buffer + partial, data, len);
-   return 0;
-   }
-   res = __sha1_update(sctx, data, len, partial);
-   return res;
+   return crypto_sha1_base_do_update(desc, data, len, sha1_arm_block_fn,
+ NULL);
 }
 EXPORT_SYMBOL_GPL(sha1_update_arm);
 
-
-/* Add padding and return the message digest. */
-static int sha1_final(struct shash_desc *desc, u8 *out)
+int sha1_finup_arm(struct shash_desc *desc, const u8 *data,
+  unsigned int len, u8 *out)
 {
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-   unsigned int i, index, padlen;
-   __be32 *dst = (__be32 *)out;
-   __be64 bits;
-   static const u8 padding[SHA1_BLOCK_SIZE] = { 0x80, };
-
-   bits = cpu_to_be64(sctx->count << 3);
-
-   /* Pad out to 56 mod 64 and append length */
-   index = sctx->count % SHA1_BLOCK_SIZE;
-   padlen = (index < 56) ? (56 - index) : ((SHA1_BLOCK_SIZE+56) - index);
-   /* We need to fill a whole block for __sha1_update() */
-   if (padlen <= 56) {
-   sctx->count += padlen;
-   memcpy(sctx->buffer + index, padding, padlen);
-   } else {
-   __sha1_update(sctx, padding, padlen, index);
-   }
-   __sha1_update(sctx, (const u8 *)&bits, sizeof(bits), 56);
-
-   /* Store state in digest */
-   for (i = 0; i < 5; i++)
-   dst[i] = cpu_to_be32(sctx->state[i]);
-
-   /* Wipe context */
-   memset(sctx, 0, sizeof(*sctx));
-   return 0;
-}
-
+   if (len)
+  

[PATCH v2 resend 03/14] crypto: sha1: implement base layer for SHA-1

2015-03-30 Thread Ard Biesheuvel
To reduce the number of copies of boilerplate code throughout
the tree, this patch implements generic glue for the SHA-1
algorithm. This allows a specific arch or hardware implementation
to only implement the special handling that it needs.
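
Concretely, an implementation built on this layer only has to supply a block
function plus thin .update/.final (or .finup) wrappers, and can point all the
other shash hooks straight at the base helpers. A rough sketch of the
resulting registration (field values are illustrative, not taken from a
specific patch; my_sha1_update/my_sha1_final are hypothetical wrappers around
crypto_sha1_base_do_update()/crypto_sha1_base_do_finalize()):

  static struct shash_alg alg = {
          .digestsize     = SHA1_DIGEST_SIZE,
          .init           = crypto_sha1_base_init,        /* base layer */
          .update         = my_sha1_update,
          .final          = my_sha1_final,
          .export         = crypto_sha1_base_export,      /* base layer */
          .import         = crypto_sha1_base_import,      /* base layer */
          .descsize       = sizeof(struct sha1_state),
          .statesize      = sizeof(struct sha1_state),
          .base           = {
                  .cra_name        = "sha1",
                  .cra_driver_name = "my-sha1",
                  .cra_blocksize   = SHA1_BLOCK_SIZE,
                  .cra_module      = THIS_MODULE,
          }
  };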

Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 crypto/Kconfig   |   3 ++
 crypto/Makefile  |   1 +
 crypto/sha1_base.c   | 125 +++
 include/crypto/sha.h |  17 +++
 4 files changed, 146 insertions(+)
 create mode 100644 crypto/sha1_base.c

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 1664bd68b97d..155cc15c2719 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -516,6 +516,9 @@ config CRYPTO_RMD320
  Developed by Hans Dobbertin, Antoon Bosselaers and Bart Preneel.
  See http://homes.esat.kuleuven.be/~bosselae/ripemd160.html
 
+config CRYPTO_SHA1_BASE
+   tristate
+
 config CRYPTO_SHA1
	tristate "SHA1 digest algorithm"
select CRYPTO_HASH
diff --git a/crypto/Makefile b/crypto/Makefile
index bb9bafeb3ac7..42446cab15f3 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -43,6 +43,7 @@ obj-$(CONFIG_CRYPTO_RMD128) += rmd128.o
 obj-$(CONFIG_CRYPTO_RMD160) += rmd160.o
 obj-$(CONFIG_CRYPTO_RMD256) += rmd256.o
 obj-$(CONFIG_CRYPTO_RMD320) += rmd320.o
+obj-$(CONFIG_CRYPTO_SHA1_BASE) += sha1_base.o
 obj-$(CONFIG_CRYPTO_SHA1) += sha1_generic.o
 obj-$(CONFIG_CRYPTO_SHA256_BASE) += sha256_base.o
 obj-$(CONFIG_CRYPTO_SHA256) += sha256_generic.o
diff --git a/crypto/sha1_base.c b/crypto/sha1_base.c
new file mode 100644
index ..30fb0f9b47cf
--- /dev/null
+++ b/crypto/sha1_base.c
@@ -0,0 +1,125 @@
+/*
+ * sha1_base.c - core logic for SHA-1 implementations
+ *
+ * Copyright (C) 2015 Linaro Ltd ard.biesheu...@linaro.org
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <crypto/internal/hash.h>
+#include <crypto/sha.h>
+#include <linux/crypto.h>
+#include <linux/module.h>
+
+#include <asm/unaligned.h>
+
+int crypto_sha1_base_init(struct shash_desc *desc)
+{
+   static const u32 sha1_init_state[] = {
+   SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4,
+   };
+   struct sha1_state *sctx = shash_desc_ctx(desc);
+
+   memcpy(sctx->state, sha1_init_state, sizeof(sctx->state));
+   sctx->count = 0;
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha1_base_init);
+
+int crypto_sha1_base_export(struct shash_desc *desc, void *out)
+{
+   struct sha1_state *sctx = shash_desc_ctx(desc);
+   struct sha1_state *dst = out;
+
+   *dst = *sctx;
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha1_base_export);
+
+int crypto_sha1_base_import(struct shash_desc *desc, const void *in)
+{
+   struct sha1_state *sctx = shash_desc_ctx(desc);
+   struct sha1_state const *src = in;
+
+   *sctx = *src;
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha1_base_import);
+
+int crypto_sha1_base_do_update(struct shash_desc *desc, const u8 *data,
+unsigned int len, sha1_block_fn *block_fn,
+void *p)
+{
+   struct sha1_state *sctx = shash_desc_ctx(desc);
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+   sctx->count += len;
+
+   if (unlikely((partial + len) >= SHA1_BLOCK_SIZE)) {
+   int blocks;
+
+   if (partial) {
+   int p = SHA1_BLOCK_SIZE - partial;
+
+   memcpy(sctx->buffer + partial, data, p);
+   data += p;
+   len -= p;
+   }
+
+   blocks = len / SHA1_BLOCK_SIZE;
+   len %= SHA1_BLOCK_SIZE;
+
+   block_fn(blocks, data, sctx->state,
+            partial ? sctx->buffer : NULL, p);
+   data += blocks * SHA1_BLOCK_SIZE;
+   partial = 0;
+   }
+   if (len)
+   memcpy(sctx->buffer + partial, data, len);
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha1_base_do_update);
+
+int crypto_sha1_base_do_finalize(struct shash_desc *desc,
+sha1_block_fn *block_fn, void *p)
+{
+   const int bit_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
+   struct sha1_state *sctx = shash_desc_ctx(desc);
+   __be64 *bits = (__be64 *)(sctx->buffer + bit_offset);
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+   sctx->buffer[partial++] = 0x80;
+   if (partial > bit_offset) {
+   memset(sctx->buffer + partial, 0x0, SHA1_BLOCK_SIZE - partial);
+   partial = 0;
+
+   block_fn(1, sctx->buffer, sctx->state, NULL, p);
+   }
+
+   memset(sctx->buffer + partial, 0x0, bit_offset - partial);
+   *bits = cpu_to_be64(sctx->count << 3);
+   block_fn(1, sctx->buffer, sctx->state, NULL, p);
+
+   return 0;
+}

[PATCH v2 resend 04/14] crypto: sha512-generic: move to generic glue implementation

2015-03-30 Thread Ard Biesheuvel
This updates the generic SHA-512 implementation to use the
generic shared SHA-512 glue code.

It also implements a .finup hook crypto_sha512_finup() and exports
it to other modules.
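
Worth noting for SHA-384/512: the byte count spans two 64-bit words, so both
the update path and the finalization have to be carry aware. In the base
layer (and in the code being removed below) the relevant lines boil down to:

  /* update: 128-bit byte count kept in count[0] (low) / count[1] (high) */
  sctx->count[0] += len;
  if (sctx->count[0] < len)             /* carry out of the low word */
          sctx->count[1]++;

  /* finalization: the trailing length field is the bit count, big endian */
  bits[0] = cpu_to_be64(sctx->count[1] << 3 | sctx->count[0] >> 61);
  bits[1] = cpu_to_be64(sctx->count[0] << 3);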

Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 crypto/Kconfig  |   1 +
 crypto/sha512_generic.c | 126 ++--
 include/crypto/sha.h|   2 +
 3 files changed, 28 insertions(+), 101 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 155cc15c2719..83bc1680391a 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -653,6 +653,7 @@ config CRYPTO_SHA512_BASE
 
 config CRYPTO_SHA512
	tristate "SHA384 and SHA512 digest algorithms"
+   select CRYPTO_SHA512_BASE
select CRYPTO_HASH
help
  SHA512 secure hash standard (DFIPS 180-2).
diff --git a/crypto/sha512_generic.c b/crypto/sha512_generic.c
index 1c3c3767e079..88f36a6920ef 100644
--- a/crypto/sha512_generic.c
+++ b/crypto/sha512_generic.c
@@ -130,125 +130,48 @@ sha512_transform(u64 *state, const u8 *input)
a = b = c = d = e = f = g = h = t1 = t2 = 0;
 }
 
-static int
-sha512_init(struct shash_desc *desc)
+static void sha512_generic_block_fn(int blocks, u8 const *src, u64 *state,
+   const u8 *head, void *p)
 {
-   struct sha512_state *sctx = shash_desc_ctx(desc);
-   sctx->state[0] = SHA512_H0;
-   sctx->state[1] = SHA512_H1;
-   sctx->state[2] = SHA512_H2;
-   sctx->state[3] = SHA512_H3;
-   sctx->state[4] = SHA512_H4;
-   sctx->state[5] = SHA512_H5;
-   sctx->state[6] = SHA512_H6;
-   sctx->state[7] = SHA512_H7;
-   sctx->count[0] = sctx->count[1] = 0;
+   if (head)
+   sha512_transform(state, head);
 
-   return 0;
-}
-
-static int
-sha384_init(struct shash_desc *desc)
-{
-   struct sha512_state *sctx = shash_desc_ctx(desc);
-   sctx->state[0] = SHA384_H0;
-   sctx->state[1] = SHA384_H1;
-   sctx->state[2] = SHA384_H2;
-   sctx->state[3] = SHA384_H3;
-   sctx->state[4] = SHA384_H4;
-   sctx->state[5] = SHA384_H5;
-   sctx->state[6] = SHA384_H6;
-   sctx->state[7] = SHA384_H7;
-   sctx->count[0] = sctx->count[1] = 0;
-
-   return 0;
+   while (blocks--) {
+   sha512_transform(state, src);
+   src += SHA512_BLOCK_SIZE;
+   }
 }
 
 int crypto_sha512_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
 {
-   struct sha512_state *sctx = shash_desc_ctx(desc);
-
-   unsigned int i, index, part_len;
-
-   /* Compute number of bytes mod 128 */
-   index = sctx->count[0] & 0x7f;
-
-   /* Update number of bytes */
-   if ((sctx->count[0] += len) < len)
-   sctx->count[1]++;
-
-   part_len = 128 - index;
-
-   /* Transform as many times as possible. */
-   if (len >= part_len) {
-   memcpy(&sctx->buf[index], data, part_len);
-   sha512_transform(sctx->state, sctx->buf);
-
-   for (i = part_len; i + 127 < len; i+=128)
-   sha512_transform(sctx->state, &data[i]);
-
-   index = 0;
-   } else {
-   i = 0;
-   }
-
-   /* Buffer remaining input */
-   memcpy(&sctx->buf[index], &data[i], len - i);
-
-   return 0;
+   return crypto_sha512_base_do_update(desc, data, len,
+   sha512_generic_block_fn, NULL);
 }
 EXPORT_SYMBOL(crypto_sha512_update);
 
-static int
-sha512_final(struct shash_desc *desc, u8 *hash)
+int crypto_sha512_finup(struct shash_desc *desc, const u8 *data,
+   unsigned int len, u8 *hash)
 {
-   struct sha512_state *sctx = shash_desc_ctx(desc);
-static u8 padding[128] = { 0x80, };
-   __be64 *dst = (__be64 *)hash;
-   __be64 bits[2];
-   unsigned int index, pad_len;
-   int i;
-
-   /* Save number of bits */
-   bits[1] = cpu_to_be64(sctx->count[0] << 3);
-   bits[0] = cpu_to_be64(sctx->count[1] << 3 | sctx->count[0] >> 61);
-
-   /* Pad out to 112 mod 128. */
-   index = sctx->count[0] & 0x7f;
-   pad_len = (index < 112) ? (112 - index) : ((128+112) - index);
-   crypto_sha512_update(desc, padding, pad_len);
-
-   /* Append length (before padding) */
-   crypto_sha512_update(desc, (const u8 *)bits, sizeof(bits));
-
-   /* Store state in digest */
-   for (i = 0; i < 8; i++)
-   dst[i] = cpu_to_be64(sctx->state[i]);
-
-   /* Zeroize sensitive information. */
-   memset(sctx, 0, sizeof(struct sha512_state));
-
-   return 0;
+   if (len)
+   crypto_sha512_base_do_update(desc, data, len,
+sha512_generic_block_fn, NULL);
+   crypto_sha512_base_do_finalize(desc, sha512_generic_block_fn, NULL);
+   return crypto_sha512_base_finish(desc, hash);
 }
+EXPORT_SYMBOL(crypto_sha512_finup);
 
-static int sha384_final(struct shash_desc *desc, u8 *hash)
+int 

[PATCH v2 resend 12/14] crypto/x86: move SHA-1 SSSE3 implementation to base layer

2015-03-30 Thread Ard Biesheuvel
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 arch/x86/crypto/sha1_ssse3_glue.c | 139 +-
 crypto/Kconfig|   1 +
 2 files changed, 34 insertions(+), 106 deletions(-)

diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c
index 6c20fe04a738..ee0b775f2b1f 100644
--- a/arch/x86/crypto/sha1_ssse3_glue.c
+++ b/arch/x86/crypto/sha1_ssse3_glue.c
@@ -49,127 +49,53 @@ asmlinkage void sha1_transform_avx2(u32 *digest, const char *data,
 
 static asmlinkage void (*sha1_transform_asm)(u32 *, const char *, unsigned int);
 
-
-static int sha1_ssse3_init(struct shash_desc *desc)
+static void sha1_ssse3_block_fn(int blocks, u8 const *src, u32 *state,
+   const u8 *head, void *p)
 {
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-
-   *sctx = (struct sha1_state){
-   .state = { SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4 },
-   };
-
-   return 0;
-}
-
-static int __sha1_ssse3_update(struct shash_desc *desc, const u8 *data,
-  unsigned int len, unsigned int partial)
-{
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-   unsigned int done = 0;
-
-   sctx->count += len;
-
-   if (partial) {
-   done = SHA1_BLOCK_SIZE - partial;
-   memcpy(sctx->buffer + partial, data, done);
-   sha1_transform_asm(sctx->state, sctx->buffer, 1);
-   }
-
-   if (len - done >= SHA1_BLOCK_SIZE) {
-   const unsigned int rounds = (len - done) / SHA1_BLOCK_SIZE;
-
-   sha1_transform_asm(sctx->state, data + done, rounds);
-   done += rounds * SHA1_BLOCK_SIZE;
-   }
-
-   memcpy(sctx->buffer, data + done, len - done);
-
-   return 0;
+   if (head)
+   sha1_transform_asm(state, head, 1);
+   if (blocks)
+   sha1_transform_asm(state, src, blocks);
 }
 
 static int sha1_ssse3_update(struct shash_desc *desc, const u8 *data,
 unsigned int len)
 {
struct sha1_state *sctx = shash_desc_ctx(desc);
-   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
-   int res;
-
-   /* Handle the fast case right here */
-   if (partial + len < SHA1_BLOCK_SIZE) {
-   sctx->count += len;
-   memcpy(sctx->buffer + partial, data, len);
+   int err;
 
-   return 0;
-   }
+   if (!irq_fpu_usable() ||
+   (sctx->count % SHA1_BLOCK_SIZE) + len < SHA1_BLOCK_SIZE)
+   return crypto_sha1_update(desc, data, len);
 
-   if (!irq_fpu_usable()) {
-   res = crypto_sha1_update(desc, data, len);
-   } else {
-   kernel_fpu_begin();
-   res = __sha1_ssse3_update(desc, data, len, partial);
-   kernel_fpu_end();
-   }
+   kernel_fpu_begin();
+   err = crypto_sha1_base_do_update(desc, data, len,
+sha1_ssse3_block_fn, NULL);
+   kernel_fpu_end();
 
-   return res;
+   return err;
 }
 
-
-/* Add padding and return the message digest. */
-static int sha1_ssse3_final(struct shash_desc *desc, u8 *out)
-{
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-   unsigned int i, index, padlen;
-   __be32 *dst = (__be32 *)out;
-   __be64 bits;
-   static const u8 padding[SHA1_BLOCK_SIZE] = { 0x80, };
-
-   bits = cpu_to_be64(sctx->count << 3);
-
-   /* Pad out to 56 mod 64 and append length */
-   index = sctx->count % SHA1_BLOCK_SIZE;
-   padlen = (index < 56) ? (56 - index) : ((SHA1_BLOCK_SIZE+56) - index);
-   if (!irq_fpu_usable()) {
-   crypto_sha1_update(desc, padding, padlen);
-   crypto_sha1_update(desc, (const u8 *)&bits, sizeof(bits));
-   } else {
-   kernel_fpu_begin();
-   /* We need to fill a whole block for __sha1_ssse3_update() */
-   if (padlen <= 56) {
-   sctx->count += padlen;
-   memcpy(sctx->buffer + index, padding, padlen);
-   } else {
-   __sha1_ssse3_update(desc, padding, padlen, index);
-   }
-   __sha1_ssse3_update(desc, (const u8 *)&bits, sizeof(bits), 56);
-   kernel_fpu_end();
-   }
-
-   /* Store state in digest */
-   for (i = 0; i < 5; i++)
-   dst[i] = cpu_to_be32(sctx->state[i]);
-
-   /* Wipe context */
-   memset(sctx, 0, sizeof(*sctx));
-
-   return 0;
-}
-
-static int sha1_ssse3_export(struct shash_desc *desc, void *out)
+static int sha1_ssse3_finup(struct shash_desc *desc, const u8 *data,
+ unsigned int len, u8 *out)
 {
-   struct sha1_state *sctx = shash_desc_ctx(desc);
+   if (!irq_fpu_usable())
+   return crypto_sha1_finup(desc, data, len, out);
 
-   memcpy(out, sctx, sizeof(*sctx));
+   kernel_fpu_begin();
+   if 

[PATCH v2 resend 08/14] crypto/arm: move SHA-1 ARMv8 implementation to base layer

2015-03-30 Thread Ard Biesheuvel
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 arch/arm/crypto/Kconfig|   2 +-
 arch/arm/crypto/sha1-ce-glue.c | 110 +++--
 2 files changed, 31 insertions(+), 81 deletions(-)

diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
index c111d8992afb..31ad19f18af2 100644
--- a/arch/arm/crypto/Kconfig
+++ b/arch/arm/crypto/Kconfig
@@ -32,7 +32,7 @@ config CRYPTO_SHA1_ARM_CE
	tristate "SHA1 digest algorithm (ARM v8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_SHA1_ARM
-   select CRYPTO_SHA1
+   select CRYPTO_SHA1_BASE
select CRYPTO_HASH
help
  SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2) implemented
diff --git a/arch/arm/crypto/sha1-ce-glue.c b/arch/arm/crypto/sha1-ce-glue.c
index a9dd90df9fd7..29039d1bcdf9 100644
--- a/arch/arm/crypto/sha1-ce-glue.c
+++ b/arch/arm/crypto/sha1-ce-glue.c
@@ -13,114 +13,64 @@
 #include linux/crypto.h
 #include linux/module.h
 
-#include <asm/crypto/sha1.h>
#include <asm/hwcap.h>
#include <asm/neon.h>
#include <asm/simd.h>
#include <asm/unaligned.h>

+#include "sha1.h"
+
MODULE_DESCRIPTION("SHA1 secure hash using ARMv8 Crypto Extensions");
MODULE_AUTHOR("Ard Biesheuvel <ard.biesheu...@linaro.org>");
MODULE_LICENSE("GPL v2");
 
-asmlinkage void sha1_ce_transform(int blocks, u8 const *src, u32 *state, 
- u8 *head);
-
-static int sha1_init(struct shash_desc *desc)
-{
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-
-   *sctx = (struct sha1_state){
-   .state = { SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4 },
-   };
-   return 0;
-}
+asmlinkage void sha1_ce_transform(int blocks, u8 const *src, u32 *state,
+ const u8 *head, void *p);
 
-static int sha1_update(struct shash_desc *desc, const u8 *data,
-  unsigned int len)
+static int sha1_ce_update(struct shash_desc *desc, const u8 *data,
+ unsigned int len)
 {
struct sha1_state *sctx = shash_desc_ctx(desc);
-   unsigned int partial;
 
-   if (!may_use_simd())
+   if (!may_use_simd() ||
+   (sctx->count % SHA1_BLOCK_SIZE) + len < SHA1_BLOCK_SIZE)
return sha1_update_arm(desc, data, len);

-   partial = sctx->count % SHA1_BLOCK_SIZE;
-   sctx->count += len;
+   kernel_neon_begin();
+   crypto_sha1_base_do_update(desc, data, len, sha1_ce_transform, NULL);
+   kernel_neon_end();
 
-   if ((partial + len) >= SHA1_BLOCK_SIZE) {
-   int blocks;
-
-   if (partial) {
-   int p = SHA1_BLOCK_SIZE - partial;
-
-   memcpy(sctx->buffer + partial, data, p);
-   data += p;
-   len -= p;
-   }
-
-   blocks = len / SHA1_BLOCK_SIZE;
-   len %= SHA1_BLOCK_SIZE;
-
-   kernel_neon_begin();
-   sha1_ce_transform(blocks, data, sctx->state,
-                     partial ? sctx->buffer : NULL);
-   kernel_neon_end();
-
-   data += blocks * SHA1_BLOCK_SIZE;
-   partial = 0;
-   }
-   if (len)
-   memcpy(sctx->buffer + partial, data, len);
return 0;
 }
 
-static int sha1_final(struct shash_desc *desc, u8 *out)
+static int sha1_ce_finup(struct shash_desc *desc, const u8 *data,
+unsigned int len, u8 *out)
 {
-   static const u8 padding[SHA1_BLOCK_SIZE] = { 0x80, };
-
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-   __be64 bits = cpu_to_be64(sctx->count << 3);
-   __be32 *dst = (__be32 *)out;
-   int i;
-
-   u32 padlen = SHA1_BLOCK_SIZE
-- ((sctx->count + sizeof(bits)) % SHA1_BLOCK_SIZE);
-
-   sha1_update(desc, padding, padlen);
-   sha1_update(desc, (const u8 *)&bits, sizeof(bits));
-
-   for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(__be32); i++)
-   put_unaligned_be32(sctx->state[i], dst++);
-
-   *sctx = (struct sha1_state){};
-   return 0;
-}
+   if (!may_use_simd())
+   return sha1_finup_arm(desc, data, len, out);
 
-static int sha1_export(struct shash_desc *desc, void *out)
-{
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-   struct sha1_state *dst = out;
+   kernel_neon_begin();
+   if (len)
+   crypto_sha1_base_do_update(desc, data, len,
+  sha1_ce_transform, NULL);
+   crypto_sha1_base_do_finalize(desc, sha1_ce_transform, NULL);
+   kernel_neon_end();
 
-   *dst = *sctx;
-   return 0;
+   return crypto_sha1_base_finish(desc, out);
 }
 
-static int sha1_import(struct shash_desc *desc, const void *in)
+static int sha1_ce_final(struct shash_desc *desc, u8 *out)
 {
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-   struct sha1_state const *src = in;
-
-   *sctx = *src;
-   

[PATCH v2 resend 11/14] crypto/arm64: move SHA-224/256 ARMv8 implementation to base layer

2015-03-30 Thread Ard Biesheuvel
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 arch/arm64/crypto/Kconfig|   1 +
 arch/arm64/crypto/sha2-ce-core.S |  11 ++-
 arch/arm64/crypto/sha2-ce-glue.c | 208 ++-
 3 files changed, 38 insertions(+), 182 deletions(-)

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index c87792dfaacc..238727dc24ba 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -18,6 +18,7 @@ config CRYPTO_SHA2_ARM64_CE
	tristate "SHA-224/SHA-256 digest algorithm (ARMv8 Crypto Extensions)"
depends on ARM64  KERNEL_MODE_NEON
select CRYPTO_HASH
+   select CRYPTO_SHA256_BASE
 
 config CRYPTO_GHASH_ARM64_CE
	tristate "GHASH (for GCM chaining mode) using ARMv8 Crypto Extensions"
diff --git a/arch/arm64/crypto/sha2-ce-core.S b/arch/arm64/crypto/sha2-ce-core.S
index 7f29fc031ea8..65ad56636fba 100644
--- a/arch/arm64/crypto/sha2-ce-core.S
+++ b/arch/arm64/crypto/sha2-ce-core.S
@@ -135,15 +135,18 @@ CPU_LE(   rev32   v19.16b, v19.16b)
 
/*
 * Final block: add padding and total bit count.
-* Skip if we have no total byte count in x4. In that case, the input
-* size was not a round multiple of the block size, and the padding is
-* handled by the C code.
+* Skip if the input size was not a round multiple of the block size,
+* the padding is handled by the C code in that case.
 */
cbz x4, 3f
+   ldr x5, [x2, #-8]   // sha256_state::count
+   tst x5, #0x3f   // round multiple of block size?
+   b.ne3f
+   str wzr, [x4]
moviv17.2d, #0
mov x8, #0x8000
moviv18.2d, #0
-   ror x7, x4, #29 // ror(lsl(x4, 3), 32)
+   ror x7, x5, #29 // ror(lsl(x4, 3), 32)
fmovd16, x8
mov x4, #0
mov v19.d[0], xzr
diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c
index ae67e88c28b9..3791c6139628 100644
--- a/arch/arm64/crypto/sha2-ce-glue.c
+++ b/arch/arm64/crypto/sha2-ce-glue.c
@@ -20,195 +20,47 @@ MODULE_DESCRIPTION("SHA-224/SHA-256 secure hash using ARMv8 Crypto Extensions");
 MODULE_AUTHOR("Ard Biesheuvel <ard.biesheu...@linaro.org>");
 MODULE_LICENSE("GPL v2");
 
-asmlinkage int sha2_ce_transform(int blocks, u8 const *src, u32 *state,
-u8 *head, long bytes);
+asmlinkage void sha2_ce_transform(int blocks, u8 const *src, u32 *state,
+ const u8 *head, void *p);
 
-static int sha224_init(struct shash_desc *desc)
+static int sha256_ce_update(struct shash_desc *desc, const u8 *data,
+   unsigned int len)
 {
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-
-   *sctx = (struct sha256_state){
-   .state = {
-   SHA224_H0, SHA224_H1, SHA224_H2, SHA224_H3,
-   SHA224_H4, SHA224_H5, SHA224_H6, SHA224_H7,
-   }
-   };
-   return 0;
-}
-
-static int sha256_init(struct shash_desc *desc)
-{
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-
-   *sctx = (struct sha256_state){
-   .state = {
-   SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3,
-   SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7,
-   }
-   };
-   return 0;
-}
-
-static int sha2_update(struct shash_desc *desc, const u8 *data,
-  unsigned int len)
-{
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   unsigned int partial = sctx->count % SHA256_BLOCK_SIZE;
-
-   sctx->count += len;
-
-   if ((partial + len) >= SHA256_BLOCK_SIZE) {
-   int blocks;
-
-   if (partial) {
-   int p = SHA256_BLOCK_SIZE - partial;
-
-   memcpy(sctx->buf + partial, data, p);
-   data += p;
-   len -= p;
-   }
-
-   blocks = len / SHA256_BLOCK_SIZE;
-   len %= SHA256_BLOCK_SIZE;
-
-   kernel_neon_begin_partial(28);
-   sha2_ce_transform(blocks, data, sctx->state,
-                     partial ? sctx->buf : NULL, 0);
-   kernel_neon_end();
-
-   data += blocks * SHA256_BLOCK_SIZE;
-   partial = 0;
-   }
-   if (len)
-   memcpy(sctx->buf + partial, data, len);
-   return 0;
-}
-
-static void sha2_final(struct shash_desc *desc)
-{
-   static const u8 padding[SHA256_BLOCK_SIZE] = { 0x80, };
-
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   __be64 bits = cpu_to_be64(sctx->count << 3);
-   u32 padlen = SHA256_BLOCK_SIZE
-- ((sctx->count + sizeof(bits)) % SHA256_BLOCK_SIZE);
-
-   

[PATCH v2 resend 13/14] crypto/x86: move SHA-224/256 SSSE3 implementation to base layer

2015-03-30 Thread Ard Biesheuvel
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 arch/x86/crypto/sha256_ssse3_glue.c | 186 
 crypto/Kconfig  |   1 +
 2 files changed, 39 insertions(+), 148 deletions(-)

diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c
index 8fad72f4dfd2..bd9f5ec718fd 100644
--- a/arch/x86/crypto/sha256_ssse3_glue.c
+++ b/arch/x86/crypto/sha256_ssse3_glue.c
@@ -55,174 +55,63 @@ asmlinkage void sha256_transform_rorx(const char *data, u32 *digest,
 
 static asmlinkage void (*sha256_transform_asm)(const char *, u32 *, u64);
 
-
-static int sha256_ssse3_init(struct shash_desc *desc)
+static void sha256_ssse3_block_fn(int blocks, u8 const *src, u32 *state,
+ const u8 *head, void *p)
 {
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-
-   sctx->state[0] = SHA256_H0;
-   sctx->state[1] = SHA256_H1;
-   sctx->state[2] = SHA256_H2;
-   sctx->state[3] = SHA256_H3;
-   sctx->state[4] = SHA256_H4;
-   sctx->state[5] = SHA256_H5;
-   sctx->state[6] = SHA256_H6;
-   sctx->state[7] = SHA256_H7;
-   sctx->count = 0;
-
-   return 0;
-}
-
-static int __sha256_ssse3_update(struct shash_desc *desc, const u8 *data,
-  unsigned int len, unsigned int partial)
-{
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   unsigned int done = 0;
-
-   sctx->count += len;
-
-   if (partial) {
-   done = SHA256_BLOCK_SIZE - partial;
-   memcpy(sctx->buf + partial, data, done);
-   sha256_transform_asm(sctx->buf, sctx->state, 1);
-   }
-
-   if (len - done >= SHA256_BLOCK_SIZE) {
-   const unsigned int rounds = (len - done) / SHA256_BLOCK_SIZE;
-
-   sha256_transform_asm(data + done, sctx->state, (u64) rounds);
-
-   done += rounds * SHA256_BLOCK_SIZE;
-   }
-
-   memcpy(sctx->buf, data + done, len - done);
-
-   return 0;
+   if (head)
+   sha256_transform_asm(head, state, 1);
+   if (blocks)
+   sha256_transform_asm(src, state, blocks);
 }
 
 static int sha256_ssse3_update(struct shash_desc *desc, const u8 *data,
 unsigned int len)
 {
struct sha256_state *sctx = shash_desc_ctx(desc);
-   unsigned int partial = sctx->count % SHA256_BLOCK_SIZE;
-   int res;
+   int err;
 
-   /* Handle the fast case right here */
-   if (partial + len < SHA256_BLOCK_SIZE) {
-   sctx->count += len;
-   memcpy(sctx->buf + partial, data, len);
+   if (!irq_fpu_usable() ||
+   (sctx->count % SHA256_BLOCK_SIZE) + len < SHA256_BLOCK_SIZE)
+   return crypto_sha256_update(desc, data, len);
 
-   return 0;
-   }
+   kernel_fpu_begin();
+   err = crypto_sha256_base_do_update(desc, data, len,
+  sha256_ssse3_block_fn, NULL);
+   kernel_fpu_end();
 
-   if (!irq_fpu_usable()) {
-   res = crypto_sha256_update(desc, data, len);
-   } else {
-   kernel_fpu_begin();
-   res = __sha256_ssse3_update(desc, data, len, partial);
-   kernel_fpu_end();
-   }
-
-   return res;
+   return err;
 }
 
-
-/* Add padding and return the message digest. */
-static int sha256_ssse3_final(struct shash_desc *desc, u8 *out)
+static int sha256_ssse3_finup(struct shash_desc *desc, const u8 *data,
+ unsigned int len, u8 *out)
 {
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   unsigned int i, index, padlen;
-   __be32 *dst = (__be32 *)out;
-   __be64 bits;
-   static const u8 padding[SHA256_BLOCK_SIZE] = { 0x80, };
+   if (!irq_fpu_usable())
+   return crypto_sha256_finup(desc, data, len, out);
 
-   bits = cpu_to_be64(sctx->count << 3);
-
-   /* Pad out to 56 mod 64 and append length */
-   index = sctx->count % SHA256_BLOCK_SIZE;
-   padlen = (index < 56) ? (56 - index) : ((SHA256_BLOCK_SIZE+56)-index);
-
-   if (!irq_fpu_usable()) {
-   crypto_sha256_update(desc, padding, padlen);
-   crypto_sha256_update(desc, (const u8 *)&bits, sizeof(bits));
-   } else {
-   kernel_fpu_begin();
-   /* We need to fill a whole block for __sha256_ssse3_update() */
-   if (padlen <= 56) {
-   sctx->count += padlen;
-   memcpy(sctx->buf + index, padding, padlen);
-   } else {
-   __sha256_ssse3_update(desc, padding, padlen, index);
-   }
-   __sha256_ssse3_update(desc, (const u8 *)&bits,
-   sizeof(bits), 56);
-   kernel_fpu_end();
-   }
+   kernel_fpu_begin();
+   if (len)
+   crypto_sha256_base_do_update(desc, data, len,
+   

[PATCH v2 resend 14/14] crypto/x86: move SHA-384/512 SSSE3 implementation to base layer

2015-03-30 Thread Ard Biesheuvel
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 arch/x86/crypto/sha512_ssse3_glue.c | 195 +++-
 crypto/Kconfig  |   1 +
 2 files changed, 39 insertions(+), 157 deletions(-)

diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c
index 0b6af26832bf..f5ab7275e50b 100644
--- a/arch/x86/crypto/sha512_ssse3_glue.c
+++ b/arch/x86/crypto/sha512_ssse3_glue.c
@@ -54,183 +54,63 @@ asmlinkage void sha512_transform_rorx(const char *data, u64 *digest,
 
 static asmlinkage void (*sha512_transform_asm)(const char *, u64 *, u64);
 
-
-static int sha512_ssse3_init(struct shash_desc *desc)
+static void sha512_ssse3_block_fn(int blocks, u8 const *src, u64 *state,
+ const u8 *head, void *p)
 {
-   struct sha512_state *sctx = shash_desc_ctx(desc);
-
-   sctx->state[0] = SHA512_H0;
-   sctx->state[1] = SHA512_H1;
-   sctx->state[2] = SHA512_H2;
-   sctx->state[3] = SHA512_H3;
-   sctx->state[4] = SHA512_H4;
-   sctx->state[5] = SHA512_H5;
-   sctx->state[6] = SHA512_H6;
-   sctx->state[7] = SHA512_H7;
-   sctx->count[0] = sctx->count[1] = 0;
-
-   return 0;
-}
-
-static int __sha512_ssse3_update(struct shash_desc *desc, const u8 *data,
-  unsigned int len, unsigned int partial)
-{
-   struct sha512_state *sctx = shash_desc_ctx(desc);
-   unsigned int done = 0;
-
-   sctx->count[0] += len;
-   if (sctx->count[0] < len)
-   sctx->count[1]++;
-
-   if (partial) {
-   done = SHA512_BLOCK_SIZE - partial;
-   memcpy(sctx->buf + partial, data, done);
-   sha512_transform_asm(sctx->buf, sctx->state, 1);
-   }
-
-   if (len - done >= SHA512_BLOCK_SIZE) {
-   const unsigned int rounds = (len - done) / SHA512_BLOCK_SIZE;
-
-   sha512_transform_asm(data + done, sctx->state, (u64) rounds);
-
-   done += rounds * SHA512_BLOCK_SIZE;
-   }
-
-   memcpy(sctx->buf, data + done, len - done);
-
-   return 0;
+   if (head)
+   sha512_transform_asm(head, state, 1);
+   if (blocks)
+   sha512_transform_asm(src, state, blocks);
 }
 
 static int sha512_ssse3_update(struct shash_desc *desc, const u8 *data,
 unsigned int len)
 {
struct sha512_state *sctx = shash_desc_ctx(desc);
-   unsigned int partial = sctx->count[0] % SHA512_BLOCK_SIZE;
-   int res;
-
-   /* Handle the fast case right here */
-   if (partial + len < SHA512_BLOCK_SIZE) {
-   sctx->count[0] += len;
-   if (sctx->count[0] < len)
-   sctx->count[1]++;
-   memcpy(sctx->buf + partial, data, len);
-
-   return 0;
-   }
-
-   if (!irq_fpu_usable()) {
-   res = crypto_sha512_update(desc, data, len);
-   } else {
-   kernel_fpu_begin();
-   res = __sha512_ssse3_update(desc, data, len, partial);
-   kernel_fpu_end();
-   }
-
-   return res;
-}
-
-
-/* Add padding and return the message digest. */
-static int sha512_ssse3_final(struct shash_desc *desc, u8 *out)
-{
-   struct sha512_state *sctx = shash_desc_ctx(desc);
-   unsigned int i, index, padlen;
-   __be64 *dst = (__be64 *)out;
-   __be64 bits[2];
-   static const u8 padding[SHA512_BLOCK_SIZE] = { 0x80, };
-
-   /* save number of bits */
-   bits[1] = cpu_to_be64(sctx->count[0] << 3);
-   bits[0] = cpu_to_be64(sctx->count[1] << 3 | sctx->count[0] >> 61);
-
-   /* Pad out to 112 mod 128 and append length */
-   index = sctx->count[0] & 0x7f;
-   padlen = (index < 112) ? (112 - index) : ((128+112) - index);
-
-   if (!irq_fpu_usable()) {
-   crypto_sha512_update(desc, padding, padlen);
-   crypto_sha512_update(desc, (const u8 *)bits, sizeof(bits));
-   } else {
-   kernel_fpu_begin();
-   /* We need to fill a whole block for __sha512_ssse3_update() */
-   if (padlen <= 112) {
-   sctx->count[0] += padlen;
-   if (sctx->count[0] < padlen)
-   sctx->count[1]++;
-   memcpy(sctx->buf + index, padding, padlen);
-   } else {
-   __sha512_ssse3_update(desc, padding, padlen, index);
-   }
-   __sha512_ssse3_update(desc, (const u8 *)bits,
-   sizeof(bits), 112);
-   kernel_fpu_end();
-   }
+   int err;
 
-   /* Store state in digest */
-   for (i = 0; i < 8; i++)
-   dst[i] = cpu_to_be64(sctx->state[i]);
+   if (!irq_fpu_usable() ||
+   (sctx->count[0] % SHA512_BLOCK_SIZE) + len < SHA512_BLOCK_SIZE)
+   return crypto_sha512_update(desc, data, len);
 
-   /* Wipe context */
-   memset(sctx, 0, 

[PATCH v2 resend 09/14] crypto/arm: move SHA-224/256 ARMv8 implementation to base layer

2015-03-30 Thread Ard Biesheuvel
Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 arch/arm/crypto/Kconfig|   1 +
 arch/arm/crypto/sha2-ce-glue.c | 151 +
 2 files changed, 33 insertions(+), 119 deletions(-)

diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
index 31ad19f18af2..de91f0447240 100644
--- a/arch/arm/crypto/Kconfig
+++ b/arch/arm/crypto/Kconfig
@@ -42,6 +42,7 @@ config CRYPTO_SHA2_ARM_CE
	tristate "SHA-224/256 digest algorithm (ARM v8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_SHA256
+   select CRYPTO_SHA256_BASE
select CRYPTO_HASH
help
  SHA-256 secure hash standard (DFIPS 180-2) implemented
diff --git a/arch/arm/crypto/sha2-ce-glue.c b/arch/arm/crypto/sha2-ce-glue.c
index 9ffe8ad27402..df57192c41cd 100644
--- a/arch/arm/crypto/sha2-ce-glue.c
+++ b/arch/arm/crypto/sha2-ce-glue.c
@@ -23,140 +23,52 @@ MODULE_AUTHOR("Ard Biesheuvel <ard.biesheu...@linaro.org>");
 MODULE_LICENSE("GPL v2");
 
 asmlinkage void sha2_ce_transform(int blocks, u8 const *src, u32 *state,
- u8 *head);
+ const u8 *head, void *p);
 
-static int sha224_init(struct shash_desc *desc)
+static int sha2_ce_update(struct shash_desc *desc, const u8 *data,
+ unsigned int len)
 {
struct sha256_state *sctx = shash_desc_ctx(desc);
 
-   *sctx = (struct sha256_state){
-   .state = {
-   SHA224_H0, SHA224_H1, SHA224_H2, SHA224_H3,
-   SHA224_H4, SHA224_H5, SHA224_H6, SHA224_H7,
-   }
-   };
-   return 0;
-}
+   if (!may_use_simd() ||
+   (sctx->count % SHA256_BLOCK_SIZE) + len < SHA256_BLOCK_SIZE)
+   return crypto_sha256_update(desc, data, len);
 
-static int sha256_init(struct shash_desc *desc)
-{
-   struct sha256_state *sctx = shash_desc_ctx(desc);
+   kernel_neon_begin();
+   crypto_sha256_base_do_update(desc, data, len, sha2_ce_transform, NULL);
+   kernel_neon_end();
 
-   *sctx = (struct sha256_state){
-   .state = {
-   SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3,
-   SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7,
-   }
-   };
return 0;
 }
 
-static int sha2_update(struct shash_desc *desc, const u8 *data,
-  unsigned int len)
+static int sha2_ce_finup(struct shash_desc *desc, const u8 *data,
+unsigned int len, u8 *out)
 {
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   unsigned int partial;
-
if (!may_use_simd())
-   return crypto_sha256_update(desc, data, len);
-
-   partial = sctx->count % SHA256_BLOCK_SIZE;
-   sctx->count += len;
-
-   if ((partial + len) >= SHA256_BLOCK_SIZE) {
-   int blocks;
-
-   if (partial) {
-   int p = SHA256_BLOCK_SIZE - partial;
-
-   memcpy(sctx->buf + partial, data, p);
-   data += p;
-   len -= p;
-   }
+   return crypto_sha256_finup(desc, data, len, out);
 
-   blocks = len / SHA256_BLOCK_SIZE;
-   len %= SHA256_BLOCK_SIZE;
-
-   kernel_neon_begin();
-   sha2_ce_transform(blocks, data, sctx->state,
-                     partial ? sctx->buf : NULL);
-   kernel_neon_end();
-
-   data += blocks * SHA256_BLOCK_SIZE;
-   partial = 0;
-   }
+   kernel_neon_begin();
if (len)
-   memcpy(sctx->buf + partial, data, len);
-   return 0;
-}
-
-static void sha2_final(struct shash_desc *desc)
-{
-   static const u8 padding[SHA256_BLOCK_SIZE] = { 0x80, };
-
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   __be64 bits = cpu_to_be64(sctx->count << 3);
-   u32 padlen = SHA256_BLOCK_SIZE
-- ((sctx->count + sizeof(bits)) % SHA256_BLOCK_SIZE);
-
-   sha2_update(desc, padding, padlen);
-   sha2_update(desc, (const u8 *)&bits, sizeof(bits));
-}
+   crypto_sha256_base_do_update(desc, data, len,
+sha2_ce_transform, NULL);
+   crypto_sha256_base_do_finalize(desc, sha2_ce_transform, NULL);
+   kernel_neon_end();
 
-static int sha224_final(struct shash_desc *desc, u8 *out)
-{
-   struct sha256_state *sctx = shash_desc_ctx(desc);
-   __be32 *dst = (__be32 *)out;
-   int i;
-
-   sha2_final(desc);
-
-   for (i = 0; i < SHA224_DIGEST_SIZE / sizeof(__be32); i++)
-   put_unaligned_be32(sctx->state[i], dst++);
-
-   *sctx = (struct sha256_state){};
-   return 0;
+   return crypto_sha256_base_finish(desc, out);
 }
 
-static int sha256_final(struct shash_desc *desc, u8 *out)
+static int sha2_ce_final(struct shash_desc *desc, u8 *out)
 {
-   struct 

[PATCH v2 resend 02/14] crypto: sha256: implement base layer for SHA-256

2015-03-30 Thread Ard Biesheuvel
To reduce the number of copies of boilerplate code throughout
the tree, this patch implements generic glue for the SHA-256
algorithm. This allows a specific arch or hardware implementation
to only implement the special handling that it needs.
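
As an illustration of what this buys the arch code, the x86 and ARM patches
later in this series end up shaped roughly like the following condensed
sketch (sha256_arch_block_fn is a placeholder for the arch-specific wrapper
around its assembler transform; this is essentially the form of
sha256_ssse3_update() in the x86 patch):

  static int sha256_arch_update(struct shash_desc *desc, const u8 *data,
                                unsigned int len)
  {
          struct sha256_state *sctx = shash_desc_ctx(desc);
          int err;

          /* fall back to the generic code when SIMD is unusable or the
           * input does not complete a block anyway */
          if (!irq_fpu_usable() ||
              (sctx->count % SHA256_BLOCK_SIZE) + len < SHA256_BLOCK_SIZE)
                  return crypto_sha256_update(desc, data, len);

          kernel_fpu_begin();
          err = crypto_sha256_base_do_update(desc, data, len,
                                             sha256_arch_block_fn, NULL);
          kernel_fpu_end();

          return err;
  }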

Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 crypto/Kconfig   |   4 ++
 crypto/Makefile  |   1 +
 crypto/sha256_base.c | 140 +++
 include/crypto/sha.h |  17 +++
 4 files changed, 162 insertions(+)
 create mode 100644 crypto/sha256_base.c

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 3400cf4e3cdb..1664bd68b97d 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -602,6 +602,10 @@ config CRYPTO_SHA1_MB
  lanes remain unfilled, a flush operation will be initiated to
  process the crypto jobs, adding a slight latency.
 
+
+config CRYPTO_SHA256_BASE
+   tristate
+
 config CRYPTO_SHA256
	tristate "SHA224 and SHA256 digest algorithm"
select CRYPTO_HASH
diff --git a/crypto/Makefile b/crypto/Makefile
index 6174bf2592fe..bb9bafeb3ac7 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -44,6 +44,7 @@ obj-$(CONFIG_CRYPTO_RMD160) += rmd160.o
 obj-$(CONFIG_CRYPTO_RMD256) += rmd256.o
 obj-$(CONFIG_CRYPTO_RMD320) += rmd320.o
 obj-$(CONFIG_CRYPTO_SHA1) += sha1_generic.o
+obj-$(CONFIG_CRYPTO_SHA256_BASE) += sha256_base.o
 obj-$(CONFIG_CRYPTO_SHA256) += sha256_generic.o
 obj-$(CONFIG_CRYPTO_SHA512_BASE) += sha512_base.o
 obj-$(CONFIG_CRYPTO_SHA512) += sha512_generic.o
diff --git a/crypto/sha256_base.c b/crypto/sha256_base.c
new file mode 100644
index ..5fd728066912
--- /dev/null
+++ b/crypto/sha256_base.c
@@ -0,0 +1,140 @@
+/*
+ * sha256_base.c - core logic for SHA-256 implementations
+ *
+ * Copyright (C) 2015 Linaro Ltd ard.biesheu...@linaro.org
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <crypto/internal/hash.h>
+#include <crypto/sha.h>
+#include <linux/crypto.h>
+#include <linux/module.h>
+
+#include <asm/unaligned.h>
+
+int crypto_sha224_base_init(struct shash_desc *desc)
+{
+   static const u32 sha224_init_state[] = {
+   SHA224_H0, SHA224_H1, SHA224_H2, SHA224_H3,
+   SHA224_H4, SHA224_H5, SHA224_H6, SHA224_H7,
+   };
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+
+   memcpy(sctx->state, sha224_init_state, sizeof(sctx->state));
+   sctx->count = 0;
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha224_base_init);
+
+int crypto_sha256_base_init(struct shash_desc *desc)
+{
+   static const u32 sha256_init_state[] = {
+   SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3,
+   SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7,
+   };
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+
+   memcpy(sctx->state, sha256_init_state, sizeof(sctx->state));
+   sctx->count = 0;
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha256_base_init);
+
+int crypto_sha256_base_export(struct shash_desc *desc, void *out)
+{
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+   struct sha256_state *dst = out;
+
+   *dst = *sctx;
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha256_base_export);
+
+int crypto_sha256_base_import(struct shash_desc *desc, const void *in)
+{
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+   struct sha256_state const *src = in;
+
+   *sctx = *src;
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha256_base_import);
+
+int crypto_sha256_base_do_update(struct shash_desc *desc, const u8 *data,
+unsigned int len, sha256_block_fn *block_fn,
+void *p)
+{
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+   unsigned int partial = sctx->count % SHA256_BLOCK_SIZE;
+
+   sctx->count += len;
+
+   if (unlikely((partial + len) >= SHA256_BLOCK_SIZE)) {
+   int blocks;
+
+   if (partial) {
+   int p = SHA256_BLOCK_SIZE - partial;
+
+   memcpy(sctx->buf + partial, data, p);
+   data += p;
+   len -= p;
+   }
+
+   blocks = len / SHA256_BLOCK_SIZE;
+   len %= SHA256_BLOCK_SIZE;
+
+   block_fn(blocks, data, sctx->state,
+            partial ? sctx->buf : NULL, p);
+   data += blocks * SHA256_BLOCK_SIZE;
+   partial = 0;
+   }
+   if (len)
+   memcpy(sctx->buf + partial, data, len);
+
+   return 0;
+}
+EXPORT_SYMBOL(crypto_sha256_base_do_update);
+
+int crypto_sha256_base_do_finalize(struct shash_desc *desc,
+  sha256_block_fn *block_fn, void *p)
+{
+   const int bit_offset = SHA256_BLOCK_SIZE - sizeof(__be64);
+   struct sha256_state *sctx = shash_desc_ctx(desc);
+   __be64 *bits = 

[PATCH v2 resend 06/14] crypto: sha1-generic: move to generic glue implementation

2015-03-30 Thread Ard Biesheuvel
This updates the generic SHA-1 implementation to use the generic
shared SHA-1 glue code.

It also implements a .finup hook crypto_sha1_finup() and exports
it to other modules.

Signed-off-by: Ard Biesheuvel ard.biesheu...@linaro.org
---
 crypto/Kconfig|   1 +
 crypto/sha1_generic.c | 105 --
 include/crypto/sha.h  |   3 ++
 3 files changed, 29 insertions(+), 80 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 72bf5af7240d..8f16d90f7c55 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -522,6 +522,7 @@ config CRYPTO_SHA1_BASE
 config CRYPTO_SHA1
	tristate "SHA1 digest algorithm"
select CRYPTO_HASH
+   select CRYPTO_SHA1_BASE
help
  SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
 
diff --git a/crypto/sha1_generic.c b/crypto/sha1_generic.c
index a3e50c37eb6f..3975f63ea6f9 100644
--- a/crypto/sha1_generic.c
+++ b/crypto/sha1_generic.c
@@ -25,107 +25,52 @@
 #include <crypto/sha.h>
 #include <asm/byteorder.h>
 
-static int sha1_init(struct shash_desc *desc)
+static void sha1_generic_block_fn(int blocks, u8 const *src, u32 *state,
+ const u8 *head, void *p)
 {
-   struct sha1_state *sctx = shash_desc_ctx(desc);
+   u32 temp[SHA_WORKSPACE_WORDS];
 
-   *sctx = (struct sha1_state){
-   .state = { SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4 },
-   };
+   if (head)
+   sha_transform(state, head, temp);
 
-   return 0;
+   while (blocks--) {
+   sha_transform(state, src, temp);
+   src += SHA1_BLOCK_SIZE;
+   }
 }
 
 int crypto_sha1_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
 {
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-   unsigned int partial, done;
-   const u8 *src;
-
-   partial = sctx->count % SHA1_BLOCK_SIZE;
-   sctx->count += len;
-   done = 0;
-   src = data;
-
-   if ((partial + len) >= SHA1_BLOCK_SIZE) {
-   u32 temp[SHA_WORKSPACE_WORDS];
-
-   if (partial) {
-   done = -partial;
-   memcpy(sctx->buffer + partial, data,
-  done + SHA1_BLOCK_SIZE);
-   src = sctx->buffer;
-   }
-
-   do {
-   sha_transform(sctx->state, src, temp);
-   done += SHA1_BLOCK_SIZE;
-   src = data + done;
-   } while (done + SHA1_BLOCK_SIZE <= len);
-
-   memzero_explicit(temp, sizeof(temp));
-   partial = 0;
-   }
-   memcpy(sctx->buffer + partial, src, len - done);
-
-   return 0;
+   return crypto_sha1_base_do_update(desc, data, len,
+ sha1_generic_block_fn, NULL);
 }
 EXPORT_SYMBOL(crypto_sha1_update);
 
-
-/* Add padding and return the message digest. */
-static int sha1_final(struct shash_desc *desc, u8 *out)
-{
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-   __be32 *dst = (__be32 *)out;
-   u32 i, index, padlen;
-   __be64 bits;
-   static const u8 padding[64] = { 0x80, };
-
-   bits = cpu_to_be64(sctx->count << 3);
-
-   /* Pad out to 56 mod 64 */
-   index = sctx->count & 0x3f;
-   padlen = (index < 56) ? (56 - index) : ((64+56) - index);
-   crypto_sha1_update(desc, padding, padlen);
-
-   /* Append length */
-   crypto_sha1_update(desc, (const u8 *)&bits, sizeof(bits));
-
-   /* Store state in digest */
-   for (i = 0; i < 5; i++)
-   dst[i] = cpu_to_be32(sctx->state[i]);
-
-   /* Wipe context */
-   memset(sctx, 0, sizeof *sctx);
-
-   return 0;
-}
-
-static int sha1_export(struct shash_desc *desc, void *out)
+int crypto_sha1_finup(struct shash_desc *desc, const u8 *data,
+ unsigned int len, u8 *out)
 {
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-
-   memcpy(out, sctx, sizeof(*sctx));
-   return 0;
+   if (len)
+   crypto_sha1_base_do_update(desc, data, len,
+  sha1_generic_block_fn, NULL);
+   crypto_sha1_base_do_finalize(desc, sha1_generic_block_fn, NULL);
+   return crypto_sha1_base_finish(desc, out);
 }
+EXPORT_SYMBOL(crypto_sha1_finup);
 
-static int sha1_import(struct shash_desc *desc, const void *in)
+/* Add padding and return the message digest. */
+static int sha1_final(struct shash_desc *desc, u8 *out)
 {
-   struct sha1_state *sctx = shash_desc_ctx(desc);
-
-   memcpy(sctx, in, sizeof(*sctx));
-   return 0;
+   return crypto_sha1_finup(desc, NULL, 0, out);
 }
 
 static struct shash_alg alg = {
.digestsize =   SHA1_DIGEST_SIZE,
-   .init   =   sha1_init,
+   .init   =   crypto_sha1_base_init,
.update =   crypto_sha1_update,
.final  =   sha1_final,
- 

[PATCH v2 resend 00/14] crypto: SHA glue code consolidation

2015-03-30 Thread Ard Biesheuvel
NOTE: I appear to have screwed up something when I just sent this, so
  resending now with no patches missing and no duplicates.

Hello all,

This is v2 of what is now a complete glue code consolidation series
for generic, x86, arm and arm64 implementations of SHA-1, SHA-224/256
and SHA-384/512.

The base layer implements all the update and finalization logic around
the block transforms, where the prototypes of the latter look something
like this:

typedef void (shaXXX_block_fn)(int blocks, u8 const *src, uXX *state,
 const u8 *head, void *p);

The block implementation should process the head block first, then
process the requested number of block starting at 'src'. The generic
pointer 'p' is passed down from the do_update/do_finalize() versions;
this is used for instance by the ARM64 implementations to indicate to
the core ASM implementation that it should finalize the digest, which
it will do only if the input was a round multiple of the block size.
The generic pointer is used here as a means of conveying that information
back and forth.

Note that the base functions prototypes are all 'returning int' but
they all return 0. They should be invoked as tail calls where possible
to eliminate some of the function call overhead. If that is not possible,
the return values can be safely ignored.

Changes since v1 (RFC):
- prefixed globally visible generic symbols with crypto_
- added SHA-1 base layer
- updated init code to only set the initial constants and clear the
  count, clearing the buffer is unnecessary [Markus]
- favor the small update path in crypto_sha_XXX_base_do_update() [Markus]
- update crypto_sha_XXX_do_finalize() to use memset() on the buffer directly
  rather than copying a statically allocated padding buffer into it
  [Markus]
- moved a bunch of existing arm and x86 implementations to use the new base
  layers

Note: looking at the generated asm (for arm64), I noticed that the memcpy/memset
invocations with compile time constant src and len arguments (which includes
the empty struct assignments) are eliminated completely, and replaced by
direct loads and stores. Hopefully this addresses the concern raised by Markus
regarding this.

Ard Biesheuvel (14):
  crypto: sha512: implement base layer for SHA-512
  crypto: sha256: implement base layer for SHA-256
  crypto: sha1: implement base layer for SHA-1
  crypto: sha512-generic: move to generic glue implementation
  crypto: sha256-generic: move to generic glue implementation
  crypto: sha1-generic: move to generic glue implementation
  crypto/arm: move SHA-1 ARM asm implementation to base layer
  crypto/arm: move SHA-1 ARMv8 implementation to base layer
  crypto/arm: move SHA-224/256 ARMv8 implementation to base layer
  crypto/arm64: move SHA-1 ARMv8 implementation to base layer
  crypto/arm64: move SHA-224/256 ARMv8 implementation to base layer
  crypto/x86: move SHA-1 SSSE3 implementation to base layer
  crypto/x86: move SHA-224/256 SSSE3 implementation to base layer
  crypto/x86: move SHA-384/512 SSSE3 implementation to base layer

 arch/arm/crypto/Kconfig  |   4 +-
 arch/arm/crypto/sha1-ce-glue.c   | 110 +---
 arch/arm/{include/asm => }/crypto/sha1.h |   3 +
 arch/arm/crypto/sha1_glue.c  | 117 -
 arch/arm/crypto/sha2-ce-glue.c   | 151 +-
 arch/arm64/crypto/Kconfig|   2 +
 arch/arm64/crypto/sha1-ce-core.S |  11 +-
 arch/arm64/crypto/sha1-ce-glue.c | 132 
 arch/arm64/crypto/sha2-ce-core.S |  11 +-
 arch/arm64/crypto/sha2-ce-glue.c | 208 +--
 arch/x86/crypto/sha1_ssse3_glue.c| 139 +
 arch/x86/crypto/sha256_ssse3_glue.c  | 186 ++-
 arch/x86/crypto/sha512_ssse3_glue.c  | 195 ++---
 crypto/Kconfig   |  16 +++
 crypto/Makefile  |   3 +
 crypto/sha1_base.c   | 125 +++
 crypto/sha1_generic.c| 105 
 crypto/sha256_base.c | 140 +
 crypto/sha256_generic.c  | 139 -
 crypto/sha512_base.c | 143 +
 crypto/sha512_generic.c  | 126 ---
 include/crypto/sha.h |  62 +
 22 files changed, 836 insertions(+), 1292 deletions(-)
 rename arch/arm/{include/asm => }/crypto/sha1.h (67%)
 create mode 100644 crypto/sha1_base.c
 create mode 100644 crypto/sha256_base.c
 create mode 100644 crypto/sha512_base.c

-- 
1.8.3.2



[PATCH v2 1/6] crypto/ccp: drop linux/pci dependencies

2015-03-30 Thread Michael S. Tsirkin
pci code is in ccp-pci.c, don't include pci
headers from ccp/ccp-ops.c.

Signed-off-by: Michael S. Tsirkin m...@redhat.com
---
 drivers/crypto/ccp/ccp-ops.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index 8729364..3da1140 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -12,8 +12,6 @@
 
 #include <linux/module.h>
 #include <linux/kernel.h>
-#include <linux/pci.h>
-#include <linux/pci_ids.h>
 #include <linux/kthread.h>
 #include <linux/sched.h>
 #include <linux/interrupt.h>
-- 
MST



Re: [PATCH v2 02/20] crypto: testmgr to use CRYPTO_ALG_INTERNAL

2015-03-30 Thread Herbert Xu
On Fri, Mar 27, 2015 at 11:50:42PM +0100, Stephan Mueller wrote:
 If a cipher allocation fails with -ENOENT, the testmgr now retries
 to allocate the cipher with CRYPTO_ALG_INTERNAL flag.
 
 As all ciphers, including the internal ciphers will be processed by
 the testmgr, it needs to be able to allocate those ciphers.
 
 Signed-off-by: Stephan Mueller smuel...@chronox.de
 ---
  crypto/testmgr.c | 22 ++
  1 file changed, 22 insertions(+)
 
 diff --git a/crypto/testmgr.c b/crypto/testmgr.c
 index 1f879ad..609bafa 100644
 --- a/crypto/testmgr.c
 +++ b/crypto/testmgr.c
 @@ -1506,6 +1506,9 @@ static int alg_test_aead(const struct alg_test_desc 
 *desc, const char *driver,
   int err = 0;
  
   tfm = crypto_alloc_aead(driver, type, mask);
 + if (PTR_ERR(tfm) == -ENOENT)
 + tfm = crypto_alloc_aead(driver, type | CRYPTO_ALG_INTERNAL,
 + mask | CRYPTO_ALG_INTERNAL);

We need to be able to say "give me an algorithm regardless of the
INTERNAL bit".  How about treating (type & CRYPTO_ALG_INTERNAL) &&
!(mask & CRYPTO_ALG_INTERNAL) as that special case?

So in patch 1 you would do

	if (!((type | mask) & CRYPTO_ALG_INTERNAL))
		mask |= CRYPTO_ALG_INTERNAL;
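
A stand-alone sketch of the lookup semantics this yields (illustration only,
not the crypto/api.c change itself; the flag value is the one introduced in
patch 1):

    #include <stdio.h>

    #define CRYPTO_ALG_INTERNAL 0x00002000

    /* If neither type nor mask mentions CRYPTO_ALG_INTERNAL, force the bit
     * into the mask so that only algorithms with the bit clear can match,
     * hiding helper ciphers from normal users.  A caller that sets the bit
     * in type but leaves it out of the mask matches both kinds. */
    static unsigned int lookup_mask(unsigned int type, unsigned int mask)
    {
            if (!((type | mask) & CRYPTO_ALG_INTERNAL))
                    mask |= CRYPTO_ALG_INTERNAL;
            return mask;
    }

    int main(void)
    {
            /* normal user: internal bit ends up in the mask */
            printf("normal user mask: %#x\n", lookup_mask(0, 0));

            /* testmgr-style caller: bit in type only, mask left alone */
            printf("testmgr mask:     %#x\n",
                   lookup_mask(CRYPTO_ALG_INTERNAL, 0));

            /* wrapper such as cryptd: bit in both type and mask */
            printf("wrapper mask:     %#x\n",
                   lookup_mask(CRYPTO_ALG_INTERNAL, CRYPTO_ALG_INTERNAL));
            return 0;
    }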

Thanks,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


problem with testing a CTR block cipher mode which is partially working

2015-03-30 Thread Corentin LABBE
hello

I am trying to add the CTR (counter) block cipher mode for AES on my Security 
System driver.

When testing with the tcrypt module I got the following result:
[ 1256.986989] alg: skcipher: Test 1 failed on encryption for ctr-aes-sunxi-ss
[ 1256.987004] : 87 4d 61 91 b6 20 e3 26 1b ef 68 64 99 0d b6 ce
[ 1256.987013] 0010: 40 94 25 91 d7 b4 4f 49 ab c1 9d 33 a4 4e f6 54
[ 1256.987023] 0020: ce 58 d2 f0 01 8f 92 a2 5f 2c bb 66 13 8b 9d 76
[ 1256.987032] 0030: 30 fa 4a 40 b1 67 2e f3 46 b7 9a 7c ba 91 0b a2

As you can see the first ciphered block is correct (according to testmgr.h), 
the subsequent blocks are bad.

So could I assume that the setting of the key and IV is good (at least for the
first cipher pass)?

The number of inputs (registers) is limited and I have tested nearly all the
possibilities.
Any idea of what could be wrong?

Regards
Thanks in advance


AW: problem with testing a CTR block cipher mode which is partially working

2015-03-30 Thread Markus Stockhausen
 From: linux-crypto-ow...@vger.kernel.org
 [linux-crypto-ow...@vger.kernel.org] on behalf of Corentin
 LABBE [clabbe.montj...@gmail.com]
 Sent: Monday, 30 March 2015 19:59
 To: linux-crypto@vger.kernel.org
 Cc: linux-su...@googlegroups.com
 Subject: problem with testing a CTR block cipher mode which is partially
 working
 
 hello
 
 I am trying to add the CTR (counter) block cipher mode for AES on my Security 
 System driver.
 
 When testing with the tcrypt module I got the following result:
 [ 1256.986989] alg: skcipher: Test 1 failed on encryption for ctr-aes-sunxi-ss
 [ 1256.987004] : 87 4d 61 91 b6 20 e3 26 1b ef 68 64 99 0d b6 ce
 [ 1256.987013] 0010: 40 94 25 91 d7 b4 4f 49 ab c1 9d 33 a4 4e f6 54
 [ 1256.987023] 0020: ce 58 d2 f0 01 8f 92 a2 5f 2c bb 66 13 8b 9d 76
 [ 1256.987032] 0030: 30 fa 4a 40 b1 67 2e f3 46 b7 9a 7c ba 91 0b a2
 
 As you can see the first ciphered block is correct (according to testmgr.h), 
 the subsequent blocks are bad.
 
 So Could I assume that the setting of key and IV are good (at least for the 
 first cipher pass.
 
 The number of inputs(register) are limited and I have tested near all the 
 possibility.
 Any idea of what could be wrong.
 

I had a similar challenge a few months ago. I had to take care of the following:

- the counter IV is big endian (I implemented it little endian in the first place)
- CTR allows encrypting data whose length is not a multiple of 16 bytes.

Markus




Re: AW: problem with testing a CTR block cipher mode which is partially working

2015-03-30 Thread Stephan Mueller
On Monday, 30 March 2015 at 18:08:28, Markus Stockhausen wrote:

Hi Markus,

  From: linux-crypto-ow...@vger.kernel.org
  [linux-crypto-ow...@vger.kernel.org] on behalf of Corentin
  LABBE [clabbe.montj...@gmail.com] Sent: Monday, 30 March 2015 19:59
  To: linux-crypto@vger.kernel.org
  Cc: linux-su...@googlegroups.com
  Subject: problem with testing a CTR block cipher mode which is partially
  working
  
  hello
  
  I am trying to add the CTR (counter) block cipher mode for AES on my
  Security System driver.
  
  When testing with the tcrypt module I got the following result:
  [ 1256.986989] alg: skcipher: Test 1 failed on encryption for
  ctr-aes-sunxi-ss [ 1256.987004] : 87 4d 61 91 b6 20 e3 26 1b ef
  68 64 99 0d b6 ce [ 1256.987013] 0010: 40 94 25 91 d7 b4 4f 49 ab c1
  9d 33 a4 4e f6 54 [ 1256.987023] 0020: ce 58 d2 f0 01 8f 92 a2 5f 2c
  bb 66 13 8b 9d 76 [ 1256.987032] 0030: 30 fa 4a 40 b1 67 2e f3 46 b7
  9a 7c ba 91 0b a2
  
  As you can see the first ciphered block is correct (according to
  testmgr.h), the subsequent blocks are bad.
  
  So Could I assume that the setting of key and IV are good (at least for
  the first cipher pass.
  
  The number of inputs(register) are limited and I have tested near all the
  possibility. Any idea of what could be wrong.
 
 had a similar challenge a few months ago. I had to take care about
 
 - counter IV is big endian (implemented it little endian in first place)

Use crypto_inc for the counter which properly increments in big endian.
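
A stand-alone illustration of that behaviour (a sketch mirroring the
big-endian semantics of crypto_inc(), not the kernel implementation):

    #include <stdio.h>
    #include <string.h>

    /* Treat the whole IV as one big-endian counter: the last byte is the
     * least significant one and a carry ripples toward the front. */
    static void ctr_inc_be(unsigned char *ctr, unsigned int size)
    {
            while (size--) {
                    if (++ctr[size] != 0)   /* stop once a byte does not wrap */
                            break;
            }
    }

    int main(void)
    {
            unsigned char iv[16];
            unsigned int i;

            memset(iv, 0xff, sizeof(iv));   /* worst case: every byte carries */
            ctr_inc_be(iv, sizeof(iv));

            for (i = 0; i < sizeof(iv); i++)
                    printf("%02x", iv[i]);
            printf("\n");                   /* prints 32 zero digits */
            return 0;
    }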

 - CTR allows to encrypt data that does not need to be amultiple of 16 bytes.
 
 Markus


-- 
Ciao
Stephan


[PATCH v3 19/20] crypto: mcryptd to process CRYPTO_ALG_INTERNAL

2015-03-30 Thread Stephan Mueller
The mcryptd is used as a wrapper around internal ciphers. Therefore,
the mcryptd must process the internal cipher by marking mcryptd as
internal if the underlying cipher is an internal cipher.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 crypto/mcryptd.c | 25 +++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/crypto/mcryptd.c b/crypto/mcryptd.c
index a8e8704..fe5b495a 100644
--- a/crypto/mcryptd.c
+++ b/crypto/mcryptd.c
@@ -258,6 +258,20 @@ out_free_inst:
goto out;
 }
 
+static inline void mcryptd_check_internal(struct rtattr **tb, u32 *type,
+ u32 *mask)
+{
+   struct crypto_attr_type *algt;
+
+   algt = crypto_get_attr_type(tb);
+   if (IS_ERR(algt))
+   return;
+   if ((algt->type & CRYPTO_ALG_INTERNAL))
+   *type |= CRYPTO_ALG_INTERNAL;
+   if ((algt->mask & CRYPTO_ALG_INTERNAL))
+   *mask |= CRYPTO_ALG_INTERNAL;
+}
+
 static int mcryptd_hash_init_tfm(struct crypto_tfm *tfm)
 {
struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
@@ -480,9 +494,13 @@ static int mcryptd_create_hash(struct crypto_template 
*tmpl, struct rtattr **tb,
struct ahash_instance *inst;
struct shash_alg *salg;
struct crypto_alg *alg;
+   u32 type = 0;
+   u32 mask = 0;
int err;
 
-   salg = shash_attr_alg(tb[1], 0, 0);
+   mcryptd_check_internal(tb, type, mask);
+
+   salg = shash_attr_alg(tb[1], type, mask);
if (IS_ERR(salg))
return PTR_ERR(salg);
 
@@ -502,7 +520,10 @@ static int mcryptd_create_hash(struct crypto_template 
*tmpl, struct rtattr **tb,
if (err)
goto out_free_inst;
 
-   inst->alg.halg.base.cra_flags = CRYPTO_ALG_ASYNC;
+   type = CRYPTO_ALG_ASYNC;
+   if (alg->cra_flags & CRYPTO_ALG_INTERNAL)
+   type |= CRYPTO_ALG_INTERNAL;
+   inst->alg.halg.base.cra_flags = type;
 
 inst->alg.halg.digestsize = salg->digestsize;
 inst->alg.halg.base.cra_ctxsize = sizeof(struct mcryptd_hash_ctx);
-- 
2.1.0




[PATCH v3 10/20] crypto: mark AVX Camellia helper ciphers

2015-03-30 Thread Stephan Mueller
Flag all AVX Camellia helper ciphers as internal ciphers to prevent
them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/camellia_aesni_avx_glue.c | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/crypto/camellia_aesni_avx_glue.c 
b/arch/x86/crypto/camellia_aesni_avx_glue.c
index ed38d95..78818a1 100644
--- a/arch/x86/crypto/camellia_aesni_avx_glue.c
+++ b/arch/x86/crypto/camellia_aesni_avx_glue.c
@@ -335,7 +335,8 @@ static struct crypto_alg cmll_algs[10] = { {
.cra_name   = __ecb-camellia-aesni,
.cra_driver_name= __driver-ecb-camellia-aesni,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAMELLIA_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct camellia_ctx),
.cra_alignmask  = 0,
@@ -354,7 +355,8 @@ static struct crypto_alg cmll_algs[10] = { {
.cra_name   = __cbc-camellia-aesni,
.cra_driver_name= __driver-cbc-camellia-aesni,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAMELLIA_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct camellia_ctx),
.cra_alignmask  = 0,
@@ -373,7 +375,8 @@ static struct crypto_alg cmll_algs[10] = { {
.cra_name   = __ctr-camellia-aesni,
.cra_driver_name= __driver-ctr-camellia-aesni,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct camellia_ctx),
.cra_alignmask  = 0,
@@ -393,7 +396,8 @@ static struct crypto_alg cmll_algs[10] = { {
.cra_name   = __lrw-camellia-aesni,
.cra_driver_name= __driver-lrw-camellia-aesni,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAMELLIA_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct camellia_lrw_ctx),
.cra_alignmask  = 0,
@@ -416,7 +420,8 @@ static struct crypto_alg cmll_algs[10] = { {
.cra_name   = __xts-camellia-aesni,
.cra_driver_name= __driver-xts-camellia-aesni,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAMELLIA_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct camellia_xts_ctx),
.cra_alignmask  = 0,
-- 
2.1.0




[PATCH v3 04/20] crypto: /proc/crypto: identify internal ciphers

2015-03-30 Thread Stephan Mueller
With ciphers that now cannot be accessed via the kernel crypto API,
callers shall be able to identify the ciphers that are not callable. A
boolean field is therefore added to the /proc/crypto output to identify
such internal ciphers.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 crypto/proc.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/crypto/proc.c b/crypto/proc.c
index 4a0a7aa..4ffe73b 100644
--- a/crypto/proc.c
+++ b/crypto/proc.c
@@ -89,6 +89,9 @@ static int c_show(struct seq_file *m, void *p)
 seq_printf(m, "selftest     : %s\n",
    (alg->cra_flags & CRYPTO_ALG_TESTED) ?
    "passed" : "unknown");
+   seq_printf(m, "internal     : %s\n",
+  (alg->cra_flags & CRYPTO_ALG_INTERNAL) ?
+  "yes" : "no");
 
 if (alg->cra_flags & CRYPTO_ALG_LARVAL) {
 seq_printf(m, "type         : larval\n");
-- 
2.1.0
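
With this applied, a flagged helper shows up in /proc/crypto roughly like the
following (abridged and hypothetical entry; the exact fields depend on the
algorithm type, and the helper name is taken from the AES-NI patch in this
series):

    name         : __ecb-aes-aesni
    driver       : __driver-ecb-aes-aesni
    module       : aesni_intel
    priority     : 0
    refcnt       : 1
    selftest     : passed
    internal     : yes
    type         : blkcipher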




[PATCH v3 02/20] crypto: testmgr to use CRYPTO_ALG_INTERNAL

2015-03-30 Thread Stephan Mueller
Allocate the ciphers irrespective of whether they are marked as internal
or not. As all ciphers, including the internal ciphers, will be
processed by the testmgr, it needs to be able to allocate those
ciphers.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 crypto/testmgr.c | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 1f879ad..f9bce3d 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -1505,7 +1505,7 @@ static int alg_test_aead(const struct alg_test_desc 
*desc, const char *driver,
struct crypto_aead *tfm;
int err = 0;
 
-   tfm = crypto_alloc_aead(driver, type, mask);
+   tfm = crypto_alloc_aead(driver, type | CRYPTO_ALG_INTERNAL, mask);
if (IS_ERR(tfm)) {
 printk(KERN_ERR "alg: aead: Failed to load transform for %s: "
        "%ld\n", driver, PTR_ERR(tfm));
@@ -1534,7 +1534,7 @@ static int alg_test_cipher(const struct alg_test_desc 
*desc,
struct crypto_cipher *tfm;
int err = 0;
 
-   tfm = crypto_alloc_cipher(driver, type, mask);
+   tfm = crypto_alloc_cipher(driver, type | CRYPTO_ALG_INTERNAL, mask);
if (IS_ERR(tfm)) {
 printk(KERN_ERR "alg: cipher: Failed to load transform for "
        "%s: %ld\n", driver, PTR_ERR(tfm));
@@ -1563,7 +1563,7 @@ static int alg_test_skcipher(const struct alg_test_desc 
*desc,
struct crypto_ablkcipher *tfm;
int err = 0;
 
-   tfm = crypto_alloc_ablkcipher(driver, type, mask);
+   tfm = crypto_alloc_ablkcipher(driver, type | CRYPTO_ALG_INTERNAL, mask);
if (IS_ERR(tfm)) {
 printk(KERN_ERR "alg: skcipher: Failed to load transform for "
        "%s: %ld\n", driver, PTR_ERR(tfm));
@@ -1636,7 +1636,7 @@ static int alg_test_hash(const struct alg_test_desc 
*desc, const char *driver,
struct crypto_ahash *tfm;
int err;
 
-   tfm = crypto_alloc_ahash(driver, type, mask);
+   tfm = crypto_alloc_ahash(driver, type | CRYPTO_ALG_INTERNAL, mask);
if (IS_ERR(tfm)) {
 printk(KERN_ERR "alg: hash: Failed to load transform for %s: "
        "%ld\n", driver, PTR_ERR(tfm));
@@ -1664,7 +1664,7 @@ static int alg_test_crc32c(const struct alg_test_desc 
*desc,
if (err)
goto out;
 
-   tfm = crypto_alloc_shash(driver, type, mask);
+   tfm = crypto_alloc_shash(driver, type | CRYPTO_ALG_INTERNAL, mask);
if (IS_ERR(tfm)) {
 printk(KERN_ERR "alg: crc32c: Failed to load transform for %s: "
        "%ld\n", driver, PTR_ERR(tfm));
@@ -1706,7 +1706,7 @@ static int alg_test_cprng(const struct alg_test_desc 
*desc, const char *driver,
struct crypto_rng *rng;
int err;
 
-   rng = crypto_alloc_rng(driver, type, mask);
+   rng = crypto_alloc_rng(driver, type | CRYPTO_ALG_INTERNAL, mask);
if (IS_ERR(rng)) {
 printk(KERN_ERR "alg: cprng: Failed to load transform for %s: "
        "%ld\n", driver, PTR_ERR(rng));
@@ -1733,7 +1733,7 @@ static int drbg_cavs_test(struct drbg_testvec *test, int 
pr,
if (!buf)
return -ENOMEM;
 
-   drng = crypto_alloc_rng(driver, type, mask);
+   drng = crypto_alloc_rng(driver, type | CRYPTO_ALG_INTERNAL, mask);
if (IS_ERR(drng)) {
 printk(KERN_ERR "alg: drbg: could not allocate DRNG handle for "
        "%s\n", driver);
-- 
2.1.0




[PATCH v3 03/20] crypto: cryptd to process CRYPTO_ALG_INTERNAL

2015-03-30 Thread Stephan Mueller
The cryptd is used as a wrapper around internal ciphers. Therefore, the
cryptd must process the internal cipher by marking cryptd as internal if
the underlying cipher is an internal cipher.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 crypto/ablk_helper.c |  3 ++-
 crypto/cryptd.c  | 49 +
 2 files changed, 43 insertions(+), 9 deletions(-)

diff --git a/crypto/ablk_helper.c b/crypto/ablk_helper.c
index ffe7278..e1fcf53 100644
--- a/crypto/ablk_helper.c
+++ b/crypto/ablk_helper.c
@@ -124,7 +124,8 @@ int ablk_init_common(struct crypto_tfm *tfm, const char 
*drv_name)
struct async_helper_ctx *ctx = crypto_tfm_ctx(tfm);
struct cryptd_ablkcipher *cryptd_tfm;
 
-   cryptd_tfm = cryptd_alloc_ablkcipher(drv_name, 0, 0);
+   cryptd_tfm = cryptd_alloc_ablkcipher(drv_name, CRYPTO_ALG_INTERNAL,
+CRYPTO_ALG_INTERNAL);
if (IS_ERR(cryptd_tfm))
return PTR_ERR(cryptd_tfm);
 
diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index 650afac1..b0602ba 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -168,6 +168,20 @@ static inline struct cryptd_queue *cryptd_get_queue(struct 
crypto_tfm *tfm)
 return ictx->queue;
 }
 
+static inline void cryptd_check_internal(struct rtattr **tb, u32 *type,
+u32 *mask)
+{
+   struct crypto_attr_type *algt;
+
+   algt = crypto_get_attr_type(tb);
+   if (IS_ERR(algt))
+   return;
+   if ((algt->type & CRYPTO_ALG_INTERNAL))
+   *type |= CRYPTO_ALG_INTERNAL;
+   if ((algt->mask & CRYPTO_ALG_INTERNAL))
+   *mask |= CRYPTO_ALG_INTERNAL;
+}
+
 static int cryptd_blkcipher_setkey(struct crypto_ablkcipher *parent,
   const u8 *key, unsigned int keylen)
 {
@@ -321,10 +335,13 @@ static int cryptd_create_blkcipher(struct crypto_template 
*tmpl,
struct cryptd_instance_ctx *ctx;
struct crypto_instance *inst;
struct crypto_alg *alg;
+   u32 type = CRYPTO_ALG_TYPE_BLKCIPHER;
+   u32 mask = CRYPTO_ALG_TYPE_MASK;
int err;
 
-   alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_BLKCIPHER,
- CRYPTO_ALG_TYPE_MASK);
+   cryptd_check_internal(tb, type, mask);
+
+   alg = crypto_get_attr_alg(tb, type, mask);
if (IS_ERR(alg))
return PTR_ERR(alg);
 
@@ -341,7 +358,10 @@ static int cryptd_create_blkcipher(struct crypto_template 
*tmpl,
if (err)
goto out_free_inst;
 
-   inst->alg.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC;
+   type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC;
+   if (alg->cra_flags & CRYPTO_ALG_INTERNAL)
+   type |= CRYPTO_ALG_INTERNAL;
+   inst->alg.cra_flags = type;
 inst->alg.cra_type = crypto_ablkcipher_type;
 
 inst->alg.cra_ablkcipher.ivsize = alg->cra_blkcipher.ivsize;
@@ -577,9 +597,13 @@ static int cryptd_create_hash(struct crypto_template 
*tmpl, struct rtattr **tb,
struct ahash_instance *inst;
struct shash_alg *salg;
struct crypto_alg *alg;
+   u32 type = 0;
+   u32 mask = 0;
int err;
 
-   salg = shash_attr_alg(tb[1], 0, 0);
+   cryptd_check_internal(tb, type, mask);
+
+   salg = shash_attr_alg(tb[1], type, mask);
if (IS_ERR(salg))
return PTR_ERR(salg);
 
@@ -598,7 +622,10 @@ static int cryptd_create_hash(struct crypto_template 
*tmpl, struct rtattr **tb,
if (err)
goto out_free_inst;
 
-   inst->alg.halg.base.cra_flags = CRYPTO_ALG_ASYNC;
+   type = CRYPTO_ALG_ASYNC;
+   if (alg->cra_flags & CRYPTO_ALG_INTERNAL)
+   type |= CRYPTO_ALG_INTERNAL;
+   inst->alg.halg.base.cra_flags = type;
 
 inst->alg.halg.digestsize = salg->digestsize;
 inst->alg.halg.base.cra_ctxsize = sizeof(struct cryptd_hash_ctx);
@@ -719,10 +746,13 @@ static int cryptd_create_aead(struct crypto_template 
*tmpl,
struct aead_instance_ctx *ctx;
struct crypto_instance *inst;
struct crypto_alg *alg;
+   u32 type = CRYPTO_ALG_TYPE_AEAD;
+   u32 mask = CRYPTO_ALG_TYPE_MASK;
int err;
 
-   alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_AEAD,
-   CRYPTO_ALG_TYPE_MASK);
+   cryptd_check_internal(tb, type, mask);
+
+   alg = crypto_get_attr_alg(tb, type, mask);
 if (IS_ERR(alg))
return PTR_ERR(alg);
 
@@ -739,7 +769,10 @@ static int cryptd_create_aead(struct crypto_template *tmpl,
if (err)
goto out_free_inst;
 
-   inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD | CRYPTO_ALG_ASYNC;
+   type = CRYPTO_ALG_TYPE_AEAD | CRYPTO_ALG_ASYNC;
+   if (alg->cra_flags & CRYPTO_ALG_INTERNAL)
+   type |= CRYPTO_ALG_INTERNAL;
+   inst->alg.cra_flags = type;
 inst->alg.cra_type = alg->cra_type;

[PATCH v3 11/20] crypto: mark CAST6 helper ciphers

2015-03-30 Thread Stephan Mueller
Flag all CAST6 helper ciphers as internal ciphers to prevent them
from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/cast6_avx_glue.c | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/crypto/cast6_avx_glue.c b/arch/x86/crypto/cast6_avx_glue.c
index 0160f68..f448810 100644
--- a/arch/x86/crypto/cast6_avx_glue.c
+++ b/arch/x86/crypto/cast6_avx_glue.c
@@ -372,7 +372,8 @@ static struct crypto_alg cast6_algs[10] = { {
.cra_name   = __ecb-cast6-avx,
.cra_driver_name= __driver-ecb-cast6-avx,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAST6_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct cast6_ctx),
.cra_alignmask  = 0,
@@ -391,7 +392,8 @@ static struct crypto_alg cast6_algs[10] = { {
.cra_name   = __cbc-cast6-avx,
.cra_driver_name= __driver-cbc-cast6-avx,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAST6_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct cast6_ctx),
.cra_alignmask  = 0,
@@ -410,7 +412,8 @@ static struct crypto_alg cast6_algs[10] = { {
.cra_name   = __ctr-cast6-avx,
.cra_driver_name= __driver-ctr-cast6-avx,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct cast6_ctx),
.cra_alignmask  = 0,
@@ -430,7 +433,8 @@ static struct crypto_alg cast6_algs[10] = { {
.cra_name   = __lrw-cast6-avx,
.cra_driver_name= __driver-lrw-cast6-avx,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAST6_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct cast6_lrw_ctx),
.cra_alignmask  = 0,
@@ -453,7 +457,8 @@ static struct crypto_alg cast6_algs[10] = { {
.cra_name   = __xts-cast6-avx,
.cra_driver_name= __driver-xts-cast6-avx,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAST6_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct cast6_xts_ctx),
.cra_alignmask  = 0,
-- 
2.1.0




[PATCH v3 14/20] crypto: mark Serpent SSE2 helper ciphers

2015-03-30 Thread Stephan Mueller
Flag all Serpent SSE2 helper ciphers as internal ciphers to prevent
them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/serpent_sse2_glue.c | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/crypto/serpent_sse2_glue.c 
b/arch/x86/crypto/serpent_sse2_glue.c
index bf025ad..3643dd5 100644
--- a/arch/x86/crypto/serpent_sse2_glue.c
+++ b/arch/x86/crypto/serpent_sse2_glue.c
@@ -387,7 +387,8 @@ static struct crypto_alg serpent_algs[10] = { {
.cra_name   = __ecb-serpent-sse2,
.cra_driver_name= __driver-ecb-serpent-sse2,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_ctx),
.cra_alignmask  = 0,
@@ -406,7 +407,8 @@ static struct crypto_alg serpent_algs[10] = { {
.cra_name   = __cbc-serpent-sse2,
.cra_driver_name= __driver-cbc-serpent-sse2,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_ctx),
.cra_alignmask  = 0,
@@ -425,7 +427,8 @@ static struct crypto_alg serpent_algs[10] = { {
.cra_name   = __ctr-serpent-sse2,
.cra_driver_name= __driver-ctr-serpent-sse2,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct serpent_ctx),
.cra_alignmask  = 0,
@@ -445,7 +448,8 @@ static struct crypto_alg serpent_algs[10] = { {
.cra_name   = __lrw-serpent-sse2,
.cra_driver_name= __driver-lrw-serpent-sse2,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_lrw_ctx),
.cra_alignmask  = 0,
@@ -468,7 +472,8 @@ static struct crypto_alg serpent_algs[10] = { {
.cra_name   = __xts-serpent-sse2,
.cra_driver_name= __driver-xts-serpent-sse2,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_xts_ctx),
.cra_alignmask  = 0,
-- 
2.1.0




[PATCH v3 06/20] crypto: mark ghash clmulni helper ciphers

2015-03-30 Thread Stephan Mueller
Flag all ghash clmulni helper ciphers as internal ciphers to prevent them
from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/ghash-clmulni-intel_glue.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c 
b/arch/x86/crypto/ghash-clmulni-intel_glue.c
index 8253d85..2079baf 100644
--- a/arch/x86/crypto/ghash-clmulni-intel_glue.c
+++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c
@@ -154,7 +154,8 @@ static struct shash_alg ghash_alg = {
.cra_name   = "__ghash",
.cra_driver_name = "__ghash-pclmulqdqni",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_SHASH,
+   .cra_flags  = CRYPTO_ALG_TYPE_SHASH |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = GHASH_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct ghash_ctx),
.cra_module = THIS_MODULE,
@@ -261,7 +262,9 @@ static int ghash_async_init_tfm(struct crypto_tfm *tfm)
struct cryptd_ahash *cryptd_tfm;
struct ghash_async_ctx *ctx = crypto_tfm_ctx(tfm);
 
-   cryptd_tfm = cryptd_alloc_ahash("__ghash-pclmulqdqni", 0, 0);
+   cryptd_tfm = cryptd_alloc_ahash("__ghash-pclmulqdqni",
+   CRYPTO_ALG_INTERNAL,
+   CRYPTO_ALG_INTERNAL);
if (IS_ERR(cryptd_tfm))
return PTR_ERR(cryptd_tfm);
ctx-cryptd_tfm = cryptd_tfm;
-- 
2.1.0




[PATCH v3 07/20] crypto: mark GHASH ARMv8 vmull.p64 helper ciphers

2015-03-30 Thread Stephan Mueller
Flag all GHASH ARMv8 vmull.p64 helper ciphers as internal ciphers
to prevent them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/arm/crypto/ghash-ce-glue.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm/crypto/ghash-ce-glue.c b/arch/arm/crypto/ghash-ce-glue.c
index 8c959d1..03a39fe 100644
--- a/arch/arm/crypto/ghash-ce-glue.c
+++ b/arch/arm/crypto/ghash-ce-glue.c
@@ -141,7 +141,7 @@ static struct shash_alg ghash_alg = {
.cra_name   = "ghash",
.cra_driver_name = "__driver-ghash-ce",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_SHASH,
+   .cra_flags  = CRYPTO_ALG_TYPE_SHASH | CRYPTO_ALG_INTERNAL,
.cra_blocksize  = GHASH_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct ghash_key),
.cra_module = THIS_MODULE,
@@ -248,7 +248,9 @@ static int ghash_async_init_tfm(struct crypto_tfm *tfm)
struct cryptd_ahash *cryptd_tfm;
struct ghash_async_ctx *ctx = crypto_tfm_ctx(tfm);
 
-   cryptd_tfm = cryptd_alloc_ahash("__driver-ghash-ce", 0, 0);
+   cryptd_tfm = cryptd_alloc_ahash("__driver-ghash-ce",
+   CRYPTO_ALG_INTERNAL,
+   CRYPTO_ALG_INTERNAL);
if (IS_ERR(cryptd_tfm))
return PTR_ERR(cryptd_tfm);
ctx-cryptd_tfm = cryptd_tfm;
-- 
2.1.0




[PATCH v3 08/20] crypto: mark AES-NI Camellia helper ciphers

2015-03-30 Thread Stephan Mueller
Flag all AES-NI Camellia helper ciphers as internal ciphers to
prevent them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/camellia_aesni_avx2_glue.c | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/crypto/camellia_aesni_avx2_glue.c 
b/arch/x86/crypto/camellia_aesni_avx2_glue.c
index 9a07faf..baf0ac2 100644
--- a/arch/x86/crypto/camellia_aesni_avx2_glue.c
+++ b/arch/x86/crypto/camellia_aesni_avx2_glue.c
@@ -343,7 +343,8 @@ static struct crypto_alg cmll_algs[10] = { {
.cra_name   = __ecb-camellia-aesni-avx2,
.cra_driver_name= __driver-ecb-camellia-aesni-avx2,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAMELLIA_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct camellia_ctx),
.cra_alignmask  = 0,
@@ -362,7 +363,8 @@ static struct crypto_alg cmll_algs[10] = { {
.cra_name   = __cbc-camellia-aesni-avx2,
.cra_driver_name= __driver-cbc-camellia-aesni-avx2,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAMELLIA_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct camellia_ctx),
.cra_alignmask  = 0,
@@ -381,7 +383,8 @@ static struct crypto_alg cmll_algs[10] = { {
.cra_name   = __ctr-camellia-aesni-avx2,
.cra_driver_name= __driver-ctr-camellia-aesni-avx2,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct camellia_ctx),
.cra_alignmask  = 0,
@@ -401,7 +404,8 @@ static struct crypto_alg cmll_algs[10] = { {
.cra_name   = __lrw-camellia-aesni-avx2,
.cra_driver_name= __driver-lrw-camellia-aesni-avx2,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAMELLIA_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct camellia_lrw_ctx),
.cra_alignmask  = 0,
@@ -424,7 +428,8 @@ static struct crypto_alg cmll_algs[10] = { {
.cra_name   = __xts-camellia-aesni-avx2,
.cra_driver_name= __driver-xts-camellia-aesni-avx2,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAMELLIA_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct camellia_xts_ctx),
.cra_alignmask  = 0,
-- 
2.1.0




[PATCH v3 05/20] crypto: mark AES-NI helper ciphers

2015-03-30 Thread Stephan Mueller
Flag all AES-NI helper ciphers as internal ciphers to prevent them from
being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/aesni-intel_glue.c | 23 +++
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/arch/x86/crypto/aesni-intel_glue.c 
b/arch/x86/crypto/aesni-intel_glue.c
index 6893f49..f9a78f3 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -797,7 +797,9 @@ static int rfc4106_init(struct crypto_tfm *tfm)
PTR_ALIGN((u8 *)crypto_tfm_ctx(tfm), AESNI_ALIGN);
struct crypto_aead *cryptd_child;
struct aesni_rfc4106_gcm_ctx *child_ctx;
-   cryptd_tfm = cryptd_alloc_aead("__driver-gcm-aes-aesni", 0, 0);
+   cryptd_tfm = cryptd_alloc_aead("__driver-gcm-aes-aesni",
+  CRYPTO_ALG_INTERNAL,
+  CRYPTO_ALG_INTERNAL);
if (IS_ERR(cryptd_tfm))
return PTR_ERR(cryptd_tfm);
 
@@ -1262,7 +1264,7 @@ static struct crypto_alg aesni_algs[] = { {
.cra_name   = __aes-aesni,
.cra_driver_name= __driver-aes-aesni,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_CIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_CIPHER | CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct crypto_aes_ctx) +
  AESNI_ALIGN - 1,
@@ -1281,7 +1283,8 @@ static struct crypto_alg aesni_algs[] = { {
.cra_name   = __ecb-aes-aesni,
.cra_driver_name= __driver-ecb-aes-aesni,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct crypto_aes_ctx) +
  AESNI_ALIGN - 1,
@@ -1301,7 +1304,8 @@ static struct crypto_alg aesni_algs[] = { {
.cra_name   = __cbc-aes-aesni,
.cra_driver_name= __driver-cbc-aes-aesni,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct crypto_aes_ctx) +
  AESNI_ALIGN - 1,
@@ -1365,7 +1369,8 @@ static struct crypto_alg aesni_algs[] = { {
.cra_name   = __ctr-aes-aesni,
.cra_driver_name= __driver-ctr-aes-aesni,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct crypto_aes_ctx) +
  AESNI_ALIGN - 1,
@@ -1409,7 +1414,7 @@ static struct crypto_alg aesni_algs[] = { {
.cra_name   = __gcm-aes-aesni,
.cra_driver_name= __driver-gcm-aes-aesni,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_AEAD,
+   .cra_flags  = CRYPTO_ALG_TYPE_AEAD | CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct aesni_rfc4106_gcm_ctx) +
  AESNI_ALIGN,
@@ -1479,7 +1484,8 @@ static struct crypto_alg aesni_algs[] = { {
.cra_name   = __lrw-aes-aesni,
.cra_driver_name= __driver-lrw-aes-aesni,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct aesni_lrw_ctx),
.cra_alignmask  = 0,
@@ -1500,7 +1506,8 @@ static struct crypto_alg aesni_algs[] = { {
.cra_name   = __xts-aes-aesni,
.cra_driver_name= __driver-xts-aes-aesni,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct aesni_xts_ctx),
.cra_alignmask  = 0,
-- 
2.1.0




Re: [PATCH v2 02/20] crypto: testmgr to use CRYPTO_ALG_INTERNAL

2015-03-30 Thread Stephan Mueller
On Tuesday, 31 March 2015 at 00:10:34, Herbert Xu wrote:

Hi Herbert,

 On Fri, Mar 27, 2015 at 11:50:42PM +0100, Stephan Mueller wrote:
  If a cipher allocation fails with -ENOENT, the testmgr now retries
  to allocate the cipher with CRYPTO_ALG_INTERNAL flag.
  
  As all ciphers, including the internal ciphers will be processed by
  the testmgr, it needs to be able to allocate those ciphers.
  
  Signed-off-by: Stephan Mueller smuel...@chronox.de
  ---
  
   crypto/testmgr.c | 22 ++
   1 file changed, 22 insertions(+)
  
  diff --git a/crypto/testmgr.c b/crypto/testmgr.c
  index 1f879ad..609bafa 100644
  --- a/crypto/testmgr.c
  +++ b/crypto/testmgr.c
  @@ -1506,6 +1506,9 @@ static int alg_test_aead(const struct alg_test_desc
  *desc, const char *driver, 
  int err = 0;
  
  tfm = crypto_alloc_aead(driver, type, mask);
  
  +   if (PTR_ERR(tfm) == -ENOENT)
  +   tfm = crypto_alloc_aead(driver, type | CRYPTO_ALG_INTERNAL,
  +   mask | CRYPTO_ALG_INTERNAL);
 
 We need to be able to say "give me an algorithm regardless of the
 INTERNAL bit".  How about treating (type & CRYPTO_ALG_INTERNAL) &&
 !(mask & CRYPTO_ALG_INTERNAL) as that special case?
 
 So in patch 1 you would do
 
   if (!((type | mask) & CRYPTO_ALG_INTERNAL))
   mask |= CRYPTO_ALG_INTERNAL;

Thank you for the hint. It works and I will release a patch shortly.
 
 Thanks,


-- 
Ciao
Stephan


[PATCH v3 18/20] crypto: mark 64 bit ARMv8 AES helper ciphers

2015-03-30 Thread Stephan Mueller
Flag all 64 bit ARMv8 AES helper ciphers as internal ciphers to
prevent them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/arm64/crypto/aes-glue.c | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index b1b5b89..05d9e16 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64/crypto/aes-glue.c
@@ -284,7 +284,8 @@ static struct crypto_alg aes_algs[] = { {
.cra_name   = __ecb-aes- MODE,
.cra_driver_name= __driver-ecb-aes- MODE,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct crypto_aes_ctx),
.cra_alignmask  = 7,
@@ -302,7 +303,8 @@ static struct crypto_alg aes_algs[] = { {
.cra_name   = __cbc-aes- MODE,
.cra_driver_name= __driver-cbc-aes- MODE,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct crypto_aes_ctx),
.cra_alignmask  = 7,
@@ -320,7 +322,8 @@ static struct crypto_alg aes_algs[] = { {
.cra_name   = __ctr-aes- MODE,
.cra_driver_name= __driver-ctr-aes- MODE,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct crypto_aes_ctx),
.cra_alignmask  = 7,
@@ -338,7 +341,8 @@ static struct crypto_alg aes_algs[] = { {
.cra_name   = __xts-aes- MODE,
.cra_driver_name= __driver-xts-aes- MODE,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct crypto_aes_xts_ctx),
.cra_alignmask  = 7,
-- 
2.1.0




[PATCH v3 09/20] crypto: mark CAST5 helper ciphers

2015-03-30 Thread Stephan Mueller
Flag all CAST5 helper ciphers as internal ciphers to prevent them
from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/cast5_avx_glue.c | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/crypto/cast5_avx_glue.c b/arch/x86/crypto/cast5_avx_glue.c
index 60ada67..236c809 100644
--- a/arch/x86/crypto/cast5_avx_glue.c
+++ b/arch/x86/crypto/cast5_avx_glue.c
@@ -341,7 +341,8 @@ static struct crypto_alg cast5_algs[6] = { {
.cra_name   = __ecb-cast5-avx,
.cra_driver_name= __driver-ecb-cast5-avx,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAST5_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct cast5_ctx),
.cra_alignmask  = 0,
@@ -360,7 +361,8 @@ static struct crypto_alg cast5_algs[6] = { {
.cra_name   = __cbc-cast5-avx,
.cra_driver_name= __driver-cbc-cast5-avx,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAST5_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct cast5_ctx),
.cra_alignmask  = 0,
@@ -379,7 +381,8 @@ static struct crypto_alg cast5_algs[6] = { {
.cra_name   = __ctr-cast5-avx,
.cra_driver_name= __driver-ctr-cast5-avx,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct cast5_ctx),
.cra_alignmask  = 0,
-- 
2.1.0




[PATCH v3 12/20] crypto: mark Serpent AVX2 helper ciphers

2015-03-30 Thread Stephan Mueller
Flag all Serpent AVX2 helper ciphers as internal ciphers to prevent
them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/serpent_avx2_glue.c | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/crypto/serpent_avx2_glue.c 
b/arch/x86/crypto/serpent_avx2_glue.c
index 437e47a..2f63dc8 100644
--- a/arch/x86/crypto/serpent_avx2_glue.c
+++ b/arch/x86/crypto/serpent_avx2_glue.c
@@ -309,7 +309,8 @@ static struct crypto_alg srp_algs[10] = { {
.cra_name   = __ecb-serpent-avx2,
.cra_driver_name= __driver-ecb-serpent-avx2,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_ctx),
.cra_alignmask  = 0,
@@ -329,7 +330,8 @@ static struct crypto_alg srp_algs[10] = { {
.cra_name   = __cbc-serpent-avx2,
.cra_driver_name= __driver-cbc-serpent-avx2,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_ctx),
.cra_alignmask  = 0,
@@ -349,7 +351,8 @@ static struct crypto_alg srp_algs[10] = { {
.cra_name   = __ctr-serpent-avx2,
.cra_driver_name= __driver-ctr-serpent-avx2,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct serpent_ctx),
.cra_alignmask  = 0,
@@ -370,7 +373,8 @@ static struct crypto_alg srp_algs[10] = { {
.cra_name   = __lrw-serpent-avx2,
.cra_driver_name= __driver-lrw-serpent-avx2,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_lrw_ctx),
.cra_alignmask  = 0,
@@ -394,7 +398,8 @@ static struct crypto_alg srp_algs[10] = { {
.cra_name   = __xts-serpent-avx2,
.cra_driver_name= __driver-xts-serpent-avx2,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_xts_ctx),
.cra_alignmask  = 0,
-- 
2.1.0




[PATCH v3 01/20] crypto: prevent helper ciphers from being used

2015-03-30 Thread Stephan Mueller
Several hardware related cipher implementations are implemented as
follows: a helper cipher implementation is registered with the
kernel crypto API.

Such helper ciphers are never intended to be called by normal users. In
some cases, calling them via the normal crypto API may even cause
failures including kernel crashes. In a normal case, the wrapping
ciphers that use the helpers ensure that these helpers are invoked
such that they cannot cause any calamity.

Considering the AF_ALG user space interface, unprivileged users can
call all ciphers registered with the crypto API, including these
helper ciphers that are not intended to be called directly. That
means that, via AF_ALG, user space may invoke these helper ciphers
and may cause undefined states or side effects.

To avoid any potential side effects with such helpers, the patch
prevents the helpers to be called directly. A new cipher type
flag is added: CRYPTO_ALG_INTERNAL. This flag shall be used
to mark helper ciphers. These ciphers can only be used if the
caller invokes the cipher with CRYPTO_ALG_INTERNAL in the type and
mask field.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 crypto/api.c   | 10 ++
 include/linux/crypto.h |  6 ++
 2 files changed, 16 insertions(+)

diff --git a/crypto/api.c b/crypto/api.c
index 2a81e98..afe4610 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -257,6 +257,16 @@ struct crypto_alg *crypto_alg_mod_lookup(const char *name, 
u32 type, u32 mask)
mask |= CRYPTO_ALG_TESTED;
}
 
+   /*
+* If the internal flag is set for a cipher, require a caller to
+* invoke the cipher with the internal flag to use that cipher.
+* Also, if a caller wants to allocate a cipher that may or may
+* not be an internal cipher, use type | CRYPTO_ALG_INTERNAL and
+* !(mask & CRYPTO_ALG_INTERNAL).
+*/
+   if (!((type | mask) & CRYPTO_ALG_INTERNAL))
+   mask |= CRYPTO_ALG_INTERNAL;
+
larval = crypto_larval_lookup(name, type, mask);
if (IS_ERR(larval) || !crypto_is_larval(larval))
return larval;
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index fb5ef16..10df5d2 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -95,6 +95,12 @@
 #define CRYPTO_ALG_KERN_DRIVER_ONLY0x1000
 
 /*
+ * Mark a cipher as a service implementation only usable by another
+ * cipher and never by a normal user of the kernel crypto API
+ */
+#define CRYPTO_ALG_INTERNAL0x2000
+
+/*
  * Transform masks and values (for crt_flags).
  */
 #define CRYPTO_TFM_REQ_MASK0x000fff00
-- 
2.1.0
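
For illustration, how a wrapper is expected to reach such a helper after this
change (a hedged fragment, not part of this patch; the driver name is an
example taken from the AES-NI patch, and the call mirrors the ablk_helper and
cryptd updates later in this series):

    struct cryptd_ablkcipher *cryptd_tfm;

    /* wrapper side: explicitly ask for an internal helper */
    cryptd_tfm = cryptd_alloc_ablkcipher("__driver-ecb-aes-aesni",
                                         CRYPTO_ALG_INTERNAL,
                                         CRYPTO_ALG_INTERNAL);

    /* a normal user (including AF_ALG) allocating with type = 0 and
     * mask = 0 now gets CRYPTO_ALG_INTERNAL forced into the mask by
     * crypto_alg_mod_lookup(), so the helper is never selected */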




[PATCH v3 16/20] crypto: mark NEON bit sliced AES helper ciphers

2015-03-30 Thread Stephan Mueller
Flag all NEON bit sliced AES helper ciphers as internal ciphers to
prevent them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/arm/crypto/aesbs-glue.c | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/arm/crypto/aesbs-glue.c b/arch/arm/crypto/aesbs-glue.c
index 15468fb..6d68529 100644
--- a/arch/arm/crypto/aesbs-glue.c
+++ b/arch/arm/crypto/aesbs-glue.c
@@ -301,7 +301,8 @@ static struct crypto_alg aesbs_algs[] = { {
.cra_name   = __cbc-aes-neonbs,
.cra_driver_name= __driver-cbc-aes-neonbs,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct aesbs_cbc_ctx),
.cra_alignmask  = 7,
@@ -319,7 +320,8 @@ static struct crypto_alg aesbs_algs[] = { {
.cra_name   = "__ctr-aes-neonbs",
.cra_driver_name= "__driver-ctr-aes-neonbs",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct aesbs_ctr_ctx),
.cra_alignmask  = 7,
@@ -337,7 +339,8 @@ static struct crypto_alg aesbs_algs[] = { {
.cra_name   = "__xts-aes-neonbs",
.cra_driver_name= "__driver-xts-aes-neonbs",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct aesbs_xts_ctx),
.cra_alignmask  = 7,
-- 
2.1.0




[PATCH v3 15/20] crypto: mark Twofish AVX helper ciphers

2015-03-30 Thread Stephan Mueller
Flag all Twofish AVX helper ciphers as internal ciphers to prevent
them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/twofish_avx_glue.c | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/crypto/twofish_avx_glue.c 
b/arch/x86/crypto/twofish_avx_glue.c
index 1ac531e..b5e2d56 100644
--- a/arch/x86/crypto/twofish_avx_glue.c
+++ b/arch/x86/crypto/twofish_avx_glue.c
@@ -340,7 +340,8 @@ static struct crypto_alg twofish_algs[10] = { {
.cra_name   = "__ecb-twofish-avx",
.cra_driver_name= "__driver-ecb-twofish-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = TF_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct twofish_ctx),
.cra_alignmask  = 0,
@@ -359,7 +360,8 @@ static struct crypto_alg twofish_algs[10] = { {
.cra_name   = "__cbc-twofish-avx",
.cra_driver_name= "__driver-cbc-twofish-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = TF_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct twofish_ctx),
.cra_alignmask  = 0,
@@ -378,7 +380,8 @@ static struct crypto_alg twofish_algs[10] = { {
.cra_name   = "__ctr-twofish-avx",
.cra_driver_name= "__driver-ctr-twofish-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct twofish_ctx),
.cra_alignmask  = 0,
@@ -398,7 +401,8 @@ static struct crypto_alg twofish_algs[10] = { {
.cra_name   = "__lrw-twofish-avx",
.cra_driver_name= "__driver-lrw-twofish-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = TF_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct twofish_lrw_ctx),
.cra_alignmask  = 0,
@@ -421,7 +425,8 @@ static struct crypto_alg twofish_algs[10] = { {
.cra_name   = "__xts-twofish-avx",
.cra_driver_name= "__driver-xts-twofish-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = TF_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct twofish_xts_ctx),
.cra_alignmask  = 0,
-- 
2.1.0




[PATCH v3 00/20] crypto: restrict usage of helper ciphers

2015-03-30 Thread Stephan Mueller
Hi,

Based on the discussion in the thread [1], a flag is added to the
kernel crypto API to allow ciphers to be marked as internal.

The patch set is tested in FIPS and non-FIPS mode. In addition, it is
verified that the helper cipher __driver-gcm-aes-aesni cannot be loaded
directly while the wrapper rfc4106-gcm-aesni remains usable, which
demonstrates that the patch works. The testing also shows that
__driver-gcm-aes-aesni is still subject to the testmgr self test and
can therefore be used in FIPS mode.

All cipher implementations whose definitions have a cra_priority of 0,
as well as the ciphers that are wrapped by cryptd and mcryptd, are
marked as internal ciphers to prevent them from being called directly
by users.

The testing also includes the invocation of normal crypto operations
from user space via AF_ALG and libkcapi, showing that all of them work
unaffected.

[1] http://comments.gmane.org/gmane.linux.kernel.cryptoapi/13705

Changes v2:
* Overhaul enforcement of the internal flag as suggested by Herbert:
  a cipher marked as internal can only be invoked if the caller
  instantiates it with the internal flag set in the type and mask
  field.
* The overhaul implies that cryptd and mcryptd instances are marked
  as internal if the underlying cipher is marked as internal.
* The overhaul implies that the testmgr must try to allocate a
  cipher again with the internal flag in case the first allocation
  failed with -ENOENT.
* Mark internal cipher in arch/x86/crypto/sha-mb/sha1_mb.c.

Changes v3:
* Allow a caller to specify type & CRYPTO_ALG_INTERNAL and
  !(mask & CRYPTO_ALG_INTERNAL) when the caller requests a cipher and
  does not care whether it is marked as internal or not (suggested by
  Herbert Xu); see the sketch below.
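
A sketch of that "do not care" form (the algorithm name is only
illustrative, and the hash API is used here merely as an example):

#include <crypto/hash.h>

/* flag set in type, clear in mask: the lookup matches the algorithm
 * whether or not it is marked CRYPTO_ALG_INTERNAL */
static struct crypto_ahash *alloc_maybe_internal(const char *name)
{
	return crypto_alloc_ahash(name, CRYPTO_ALG_INTERNAL, 0);
}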

Stephan Mueller (20):
  crypto: prevent helper ciphers from being used
  crypto: testmgr to use CRYPTO_ALG_INTERNAL
  crypto: cryptd to process CRYPTO_ALG_INTERNAL
  crypto: /proc/crypto: identify internal ciphers
  crypto: mark AES-NI helper ciphers
  crypto: mark ghash clmulni helper ciphers
  crypto: mark GHASH ARMv8 vmull.p64 helper ciphers
  crypto: mark AES-NI Camellia helper ciphers
  crypto: mark CAST5 helper ciphers
  crypto: mark AVX Camellia helper ciphers
  crypto: mark CAST6 helper ciphers
  crypto: mark Serpent AVX2 helper ciphers
  crypto: mark Serpent AVX helper ciphers
  crypto: mark Serpent SSE2 helper ciphers
  crypto: mark Twofish AVX helper ciphers
  crypto: mark NEON bit sliced AES helper ciphers
  crypto: mark ARMv8 AES helper ciphers
  crypto: mark 64 bit ARMv8 AES helper ciphers
  crypto: mcryptd to process CRYPTO_ALG_INTERNAL
  crypto: mark Multi buffer SHA1 helper cipher

 arch/arm/crypto/aes-ce-glue.c  | 12 +---
 arch/arm/crypto/aesbs-glue.c   |  9 --
 arch/arm/crypto/ghash-ce-glue.c|  6 ++--
 arch/arm64/crypto/aes-glue.c   | 12 +---
 arch/x86/crypto/aesni-intel_glue.c | 23 +-
 arch/x86/crypto/camellia_aesni_avx2_glue.c | 15 ++---
 arch/x86/crypto/camellia_aesni_avx_glue.c  | 15 ++---
 arch/x86/crypto/cast5_avx_glue.c   |  9 --
 arch/x86/crypto/cast6_avx_glue.c   | 15 ++---
 arch/x86/crypto/ghash-clmulni-intel_glue.c |  7 +++--
 arch/x86/crypto/serpent_avx2_glue.c| 15 ++---
 arch/x86/crypto/serpent_avx_glue.c | 15 ++---
 arch/x86/crypto/serpent_sse2_glue.c| 15 ++---
 arch/x86/crypto/sha-mb/sha1_mb.c   |  7 +++--
 arch/x86/crypto/twofish_avx_glue.c | 15 ++---
 crypto/ablk_helper.c   |  3 +-
 crypto/api.c   | 10 ++
 crypto/cryptd.c| 49 +-
 crypto/mcryptd.c   | 25 +--
 crypto/proc.c  |  3 ++
 crypto/testmgr.c   | 14 -
 include/linux/crypto.h |  6 
 22 files changed, 219 insertions(+), 81 deletions(-)

-- 
2.1.0




[PATCH net-next] crypto: algif - explicitly mark end of data

2015-03-30 Thread Tadeusz Struk
After the TX sgl is expanded we need to explicitly mark the end of data
at the last buffer that contains data.
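
The general pattern is sketched below (names are illustrative, not the
driver's own helpers): after filling only part of a larger scatterlist
table, the last populated entry must be terminated so that iteration
and DMA mapping stop at the last buffer that actually holds data.

#include <linux/scatterlist.h>

static void fill_and_terminate(struct scatterlist *sgt, int table_size,
			       struct page **pages, unsigned int *lens,
			       int nbufs)
{
	int i;

	sg_init_table(sgt, table_size);
	for (i = 0; i < nbufs; i++)
		sg_set_page(&sgt[i], pages[i], lens[i], 0);
	/* mark the last buffer that contains data as the end of the list */
	sg_mark_end(&sgt[nbufs - 1]);
}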

Signed-off-by: Tadeusz Struk tadeusz.st...@intel.com
---
 crypto/algif_skcipher.c |   13 -
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 8276f21..9492dd5 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -509,11 +509,10 @@ static int skcipher_recvmsg_async(struct socket *sock, 
struct msghdr *msg,
struct skcipher_async_req *sreq;
struct ablkcipher_request *req;
struct skcipher_async_rsgl *last_rsgl = NULL;
-   unsigned int len = 0, tx_nents = skcipher_all_sg_nents(ctx);
+   unsigned int txbufs = 0, len = 0, tx_nents = skcipher_all_sg_nents(ctx);
unsigned int reqlen = sizeof(struct skcipher_async_req) +
GET_REQ_SIZE(ctx) + GET_IV_SIZE(ctx);
-   int i = 0;
-   int err = -ENOMEM;
+   int mark = 0, err = -ENOMEM;
 
lock_sock(sk);
req = kmalloc(reqlen, GFP_KERNEL);
@@ -555,7 +554,7 @@ static int skcipher_recvmsg_async(struct socket *sock, 
struct msghdr *msg,
 iov_iter_count(&msg->msg_iter));
used = min_t(unsigned long, used, sg->length);
 
-   if (i == tx_nents) {
+   if (txbufs == tx_nents) {
struct scatterlist *tmp;
int x;
/* Ran out of tx slots in async request
@@ -573,10 +572,11 @@ static int skcipher_recvmsg_async(struct socket *sock, 
struct msghdr *msg,
kfree(sreq->tsg);
sreq->tsg = tmp;
tx_nents *= 2;
+   mark = 1;
}
/* Need to take over the tx sgl from ctx
 * to the asynch req - these sgls will be freed later */
-   sg_set_page(sreq->tsg + i++, sg_page(sg), sg->length,
+   sg_set_page(sreq->tsg + txbufs++, sg_page(sg), sg->length,
sg->offset);

if (list_empty(&sreq->list)) {
@@ -604,6 +604,9 @@ static int skcipher_recvmsg_async(struct socket *sock, 
struct msghdr *msg,
iov_iter_advance(&msg->msg_iter, used);
}
 
+   if (mark)
+   sg_mark_end(sreq->tsg + txbufs - 1);
+
ablkcipher_request_set_crypt(req, sreq->tsg, sreq->first_sgl.sgl.sg,
 len, sreq->iv);
err = ctx->enc ? crypto_ablkcipher_encrypt(req) :



[PATCH net-next] crypto: af_alg - make exports consistant

2015-03-30 Thread Tadeusz Struk
Use EXPORT_SYMBOL_GPL instead of EXPORT_SYMBOL.

Signed-off-by: Tadeusz Struk tadeusz.st...@intel.com
---
 crypto/af_alg.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 26089d1..f22cc56 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -381,7 +381,7 @@ void af_alg_link_sg(struct af_alg_sgl *sgl_prev, struct 
af_alg_sgl *sgl_new)
sg_unmark_end(sgl_prev->sg + sgl_prev->npages - 1);
sg_chain(sgl_prev->sg, sgl_prev->npages + 1, sgl_new->sg);
 }
-EXPORT_SYMBOL(af_alg_link_sg);
+EXPORT_SYMBOL_GPL(af_alg_link_sg);
 
 void af_alg_free_sg(struct af_alg_sgl *sgl)
 {



[PATCH net-next] crypto: algif - use kmalloc instead of kzalloc

2015-03-30 Thread Tadeusz Struk
No need to use kzalloc to allocate sgls as the structure is initialized anyway.

Signed-off-by: Tadeusz Struk tadeusz.st...@intel.com
---
 crypto/algif_skcipher.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 9492dd5..5fd631f 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -583,7 +583,7 @@ static int skcipher_recvmsg_async(struct socket *sock, 
struct msghdr *msg,
rsgl = &sreq->first_sgl;
list_add_tail(&rsgl->list, &sreq->list);
} else {
-   rsgl = kzalloc(sizeof(*rsgl), GFP_KERNEL);
+   rsgl = kmalloc(sizeof(*rsgl), GFP_KERNEL);
if (!rsgl) {
err = -ENOMEM;
goto free;



[PATCH] crypto: omap-sham: Check for HIGHMEM before mapping SG pages

2015-03-30 Thread Lokesh Vutla
Commit 26a05489ee0e (crypto: omap-sham - Map SG pages if they are HIGHMEM
before accessing) states that HIGHMEM pages may not be mapped and so
must be kmapped before accessing, but it does not check whether the
corresponding page actually is in highmem. Because of this all
the crypto tests are failing:

: d9 a1 1b 7c aa 90 3b aa 11 ab cb 25 00 b8 ac bf
[2.338169] 0010: c1 39 cd ff 48 d0 a8 e2 2b fa 33 a1
[2.344008] alg: hash: Chunking test 1 failed for omap-sha256

So check for HIGHMEM before mapping SG pages.

Fixes: 26a05489ee0 (crypto: omap-sham - Map SG pages if they are HIGHMEM 
before accessing)
Signed-off-by: Lokesh Vutla lokeshvu...@ti.com
---
 drivers/crypto/omap-sham.c | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
index 3c76696..ace5852 100644
--- a/drivers/crypto/omap-sham.c
+++ b/drivers/crypto/omap-sham.c
@@ -639,13 +639,17 @@ static size_t omap_sham_append_sg(struct omap_sham_reqctx 
*ctx)
const u8 *vaddr;
 
while (ctx->sg) {
-   vaddr = kmap_atomic(sg_page(ctx->sg));
+   if (PageHighMem(sg_page(ctx->sg)))
+   vaddr = kmap_atomic(sg_page(ctx->sg));
+   else
+   vaddr = sg_virt(ctx->sg);

count = omap_sham_append_buffer(ctx,
vaddr + ctx->offset,
ctx->sg->length - ctx->offset);

-   kunmap_atomic((void *)vaddr);
+   if (PageHighMem(sg_page(ctx->sg)))
+   kunmap_atomic((void *)vaddr);
 
if (!count)
break;
-- 
1.9.1



[PATCH] crypto: omap-aes: Fix support for unequal lengths

2015-03-30 Thread Lokesh Vutla
For cases where the total length of the input SGs is not the same as
the length of the input data for encryption, the omap-aes driver
crashes. This happens when IPsec is trying to use the omap-aes driver.

To avoid this, we copy all the pages from the input SG list into a
contiguous buffer and prepare a single-element SG list for this buffer
whose length is the total number of bytes to crypt, similar to what is
already done for unaligned lengths.
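
A rough sketch of that fallback using generic scatterlist helpers
(buffer lifetime and error handling are omitted, and this is not the
driver's exact code):

#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/errno.h>

/* linearise an SG list into one contiguous buffer and describe the
 * result with a single-entry SG list of length 'total' */
static int linearise_sg(struct scatterlist *src, int nents, size_t total,
			struct scatterlist *single, void **bufp)
{
	void *buf = kmalloc(total, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;

	sg_copy_to_buffer(src, nents, buf, total);
	sg_init_one(single, buf, total);
	*bufp = buf;
	return 0;
}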

Fixes: 6242332ff2f3 (crypto: omap-aes - Add support for cases of unaligned 
lengths)
Signed-off-by: Lokesh Vutla lokeshvu...@ti.com
---
 drivers/crypto/omap-aes.c | 14 +++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
index 42f95a4..9a28b7e 100644
--- a/drivers/crypto/omap-aes.c
+++ b/drivers/crypto/omap-aes.c
@@ -554,15 +554,23 @@ static int omap_aes_crypt_dma_stop(struct omap_aes_dev 
*dd)
return err;
 }
 
-static int omap_aes_check_aligned(struct scatterlist *sg)
+static int omap_aes_check_aligned(struct scatterlist *sg, int total)
 {
+   int len = 0;
+
while (sg) {
if (!IS_ALIGNED(sg->offset, 4))
return -1;
if (!IS_ALIGNED(sg->length, AES_BLOCK_SIZE))
return -1;
+
+   len += sg->length;
sg = sg_next(sg);
}
+
+   if (len != total)
+   return -1;
+
return 0;
 }
 
@@ -633,8 +641,8 @@ static int omap_aes_handle_queue(struct omap_aes_dev *dd,
dd->in_sg = req->src;
dd->out_sg = req->dst;

-   if (omap_aes_check_aligned(dd->in_sg) ||
-   omap_aes_check_aligned(dd->out_sg)) {
+   if (omap_aes_check_aligned(dd->in_sg, dd->total) ||
+   omap_aes_check_aligned(dd->out_sg, dd->total)) {
if (omap_aes_copy_sgs(dd))
pr_err("Failed to copy SGs for unaligned cases\n");
dd->sgs_copied = 1;
-- 
1.9.1



[PATCH] crypto: omap-sham: Use pm_runtime_irq_safe()

2015-03-30 Thread Lokesh Vutla
omap_sham_handle_queue() can be called as part of the done_task tasklet.
In that context it is atomic, and any calls to pm functions must not sleep.

But there is a call to pm_runtime_get_sync() (which can sleep) in
omap_sham_handle_queue(), because of which the following appears:
 [  116.169969] BUG: scheduling while atomic: kworker/0:2/2676/0x0100

Add pm_runtime_irq_safe() to avoid this.

Signed-off-by: Lokesh Vutla lokeshvu...@ti.com
---
 drivers/crypto/omap-sham.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
index ace5852..81ed511 100644
--- a/drivers/crypto/omap-sham.c
+++ b/drivers/crypto/omap-sham.c
@@ -1949,6 +1949,7 @@ static int omap_sham_probe(struct platform_device *pdev)
dd->flags |= dd->pdata->flags;
 
pm_runtime_enable(dev);
+   pm_runtime_irq_safe(dev);
pm_runtime_get_sync(dev);
rev = omap_sham_read(dd, SHA_REG_REV(dd));
pm_runtime_put_sync(&pdev->dev);
-- 
1.9.1
