[PATCH 01/16] crypto: prevent helper ciphers from being used

2015-03-19 Thread Stephan Mueller
Several hardware-related cipher implementations are implemented as
follows: a "helper cipher" implementation is registered with the
kernel crypto API.

Such helper ciphers are never intended to be called by normal users. In
some cases, calling them via the normal crypto API may even cause
failures, including kernel crashes. In the normal case, the "wrapping"
ciphers that use the helpers ensure that the helpers are invoked such
that they cannot cause any calamity.

Via the AF_ALG user space interface, unprivileged users can call all
ciphers registered with the crypto API, including those helper ciphers
that are not intended to be called directly. That means that user
space may invoke these helper ciphers through AF_ALG and cause
undefined states or side effects.

To avoid any such side effects, this patch prevents the helpers from
being called directly. A new cipher type flag is added:
CRYPTO_ALG_INTERNAL. This flag shall be used to mark helper ciphers.
All ciphers carrying that flag can still be used by wrapping ciphers
via the crypto_*_spawn* API calls. In addition, the testmgr can use
these ciphers directly via the kernel crypto API. Any other caller
cannot allocate ciphers marked with this flag through the kernel
crypto API; the various crypto_alloc_* calls will return an error.

This patch modifies all callers of __crypto_alloc_tfm to honor the new
flag, except the crypto_spawn_tfm function that services the
crypto_*_spawn* API.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 crypto/ablkcipher.c    |  2 +-
 crypto/aead.c          |  2 +-
 crypto/api.c           | 21 ++++++++++++++++++++-
 crypto/internal.h      |  2 ++
 include/linux/crypto.h |  6 ++++++
 5 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c
index db201bca..2cd83ad 100644
--- a/crypto/ablkcipher.c
+++ b/crypto/ablkcipher.c
@@ -688,7 +688,7 @@ struct crypto_ablkcipher *crypto_alloc_ablkcipher(const char *alg_name,
goto err;
}
 
-   tfm = __crypto_alloc_tfm(alg, type, mask);
+   tfm = __crypto_alloc_tfm_safe(alg, type, mask);
if (!IS_ERR(tfm))
return __crypto_ablkcipher_cast(tfm);
 
diff --git a/crypto/aead.c b/crypto/aead.c
index 710..9ae3aa9 100644
--- a/crypto/aead.c
+++ b/crypto/aead.c
@@ -542,7 +542,7 @@ struct crypto_aead *crypto_alloc_aead(const char *alg_name, u32 type, u32 mask)
goto err;
}
 
-   tfm = __crypto_alloc_tfm(alg, type, mask);
+   tfm = __crypto_alloc_tfm_safe(alg, type, mask);
if (!IS_ERR(tfm))
return __crypto_aead_cast(tfm);
 
diff --git a/crypto/api.c b/crypto/api.c
index 2a81e98..3fdc47b 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -389,6 +389,25 @@ out:
 }
 EXPORT_SYMBOL_GPL(__crypto_alloc_tfm);
 
+struct crypto_tfm *__crypto_alloc_tfm_safe(struct crypto_alg *alg, u32 type,
+  u32 mask)
+{
+   /*
+* Prevent all ciphers from being loaded which are marked
+* as CRYPTO_ALG_INTERNAL. Those cipher implementations are helper
+* ciphers and are not intended for general consumption.
+*
+* The only exception is the allocation of ciphers for executing
+* the self tests via the test manager.
+*/
+
+	if ((alg->cra_flags & CRYPTO_ALG_INTERNAL) &&
+	    !(mask & CRYPTO_ALG_TESTED))
+   return ERR_PTR(-ENOENT);
+
+   return __crypto_alloc_tfm(alg, type, mask);
+}
+EXPORT_SYMBOL_GPL(__crypto_alloc_tfm_safe);
 /*
  * crypto_alloc_base - Locate algorithm and allocate transform
  * @alg_name: Name of algorithm
@@ -425,7 +444,7 @@ struct crypto_tfm *crypto_alloc_base(const char *alg_name, u32 type, u32 mask)
goto err;
}
 
-   tfm = __crypto_alloc_tfm(alg, type, mask);
+   tfm = __crypto_alloc_tfm_safe(alg, type, mask);
if (!IS_ERR(tfm))
return tfm;
 
diff --git a/crypto/internal.h b/crypto/internal.h
index bd39bfc..8526a37 100644
--- a/crypto/internal.h
+++ b/crypto/internal.h
@@ -91,6 +91,8 @@ void crypto_remove_final(struct list_head *list);
 void crypto_shoot_alg(struct crypto_alg *alg);
 struct crypto_tfm *__crypto_alloc_tfm(struct crypto_alg *alg, u32 type,
  u32 mask);
+struct crypto_tfm *__crypto_alloc_tfm_safe(struct crypto_alg *alg, u32 type,
+  u32 mask);
 void *crypto_create_tfm(struct crypto_alg *alg,
const struct crypto_type *frontend);
 struct crypto_alg *crypto_find_alg(const char *alg_name,
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index fb5ef16..10df5d2 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -95,6 +95,12 @@
 #define CRYPTO_ALG_KERN_DRIVER_ONLY	0x1000
 
 

[PATCH 04/16] crypto: mark AES-NI Camellia helper ciphers

2015-03-19 Thread Stephan Mueller
Flag all AES-NI Camellia helper ciphers as internal ciphers to
prevent them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/camellia_aesni_avx2_glue.c | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/crypto/camellia_aesni_avx2_glue.c b/arch/x86/crypto/camellia_aesni_avx2_glue.c
index 9a07faf..baf0ac2 100644
--- a/arch/x86/crypto/camellia_aesni_avx2_glue.c
+++ b/arch/x86/crypto/camellia_aesni_avx2_glue.c
@@ -343,7 +343,8 @@ static struct crypto_alg cmll_algs[10] = { {
	.cra_name		= "__ecb-camellia-aesni-avx2",
	.cra_driver_name	= "__driver-ecb-camellia-aesni-avx2",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAMELLIA_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct camellia_ctx),
.cra_alignmask  = 0,
@@ -362,7 +363,8 @@ static struct crypto_alg cmll_algs[10] = { {
	.cra_name		= "__cbc-camellia-aesni-avx2",
	.cra_driver_name	= "__driver-cbc-camellia-aesni-avx2",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAMELLIA_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct camellia_ctx),
.cra_alignmask  = 0,
@@ -381,7 +383,8 @@ static struct crypto_alg cmll_algs[10] = { {
	.cra_name		= "__ctr-camellia-aesni-avx2",
	.cra_driver_name	= "__driver-ctr-camellia-aesni-avx2",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct camellia_ctx),
.cra_alignmask  = 0,
@@ -401,7 +404,8 @@ static struct crypto_alg cmll_algs[10] = { {
	.cra_name		= "__lrw-camellia-aesni-avx2",
	.cra_driver_name	= "__driver-lrw-camellia-aesni-avx2",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAMELLIA_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct camellia_lrw_ctx),
.cra_alignmask  = 0,
@@ -424,7 +428,8 @@ static struct crypto_alg cmll_algs[10] = { {
	.cra_name		= "__xts-camellia-aesni-avx2",
	.cra_driver_name	= "__driver-xts-camellia-aesni-avx2",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAMELLIA_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct camellia_xts_ctx),
.cra_alignmask  = 0,
-- 
2.1.0




[PATCH 10/16] crypto: mark Serpent AVX helper ciphers

2015-03-19 Thread Stephan Mueller
Flag all Serpent AVX helper ciphers as internal ciphers to prevent
them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/serpent_avx_glue.c | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/crypto/serpent_avx_glue.c b/arch/x86/crypto/serpent_avx_glue.c
index 7e21739..c8d478a 100644
--- a/arch/x86/crypto/serpent_avx_glue.c
+++ b/arch/x86/crypto/serpent_avx_glue.c
@@ -378,7 +378,8 @@ static struct crypto_alg serpent_algs[10] = { {
	.cra_name		= "__ecb-serpent-avx",
	.cra_driver_name	= "__driver-ecb-serpent-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_ctx),
.cra_alignmask  = 0,
@@ -397,7 +398,8 @@ static struct crypto_alg serpent_algs[10] = { {
	.cra_name		= "__cbc-serpent-avx",
	.cra_driver_name	= "__driver-cbc-serpent-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_ctx),
.cra_alignmask  = 0,
@@ -416,7 +418,8 @@ static struct crypto_alg serpent_algs[10] = { {
	.cra_name		= "__ctr-serpent-avx",
	.cra_driver_name	= "__driver-ctr-serpent-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct serpent_ctx),
.cra_alignmask  = 0,
@@ -436,7 +439,8 @@ static struct crypto_alg serpent_algs[10] = { {
	.cra_name		= "__lrw-serpent-avx",
	.cra_driver_name	= "__driver-lrw-serpent-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_lrw_ctx),
.cra_alignmask  = 0,
@@ -459,7 +463,8 @@ static struct crypto_alg serpent_algs[10] = { {
	.cra_name		= "__xts-serpent-avx",
	.cra_driver_name	= "__driver-xts-serpent-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_xts_ctx),
.cra_alignmask  = 0,
-- 
2.1.0




[PATCH 09/16] crypto: mark Serpent AVX2 helper ciphers

2015-03-19 Thread Stephan Mueller
Flag all Serpent AVX2 helper ciphers as internal ciphers to prevent
them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/serpent_avx2_glue.c | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/crypto/serpent_avx2_glue.c b/arch/x86/crypto/serpent_avx2_glue.c
index 437e47a..2f63dc8 100644
--- a/arch/x86/crypto/serpent_avx2_glue.c
+++ b/arch/x86/crypto/serpent_avx2_glue.c
@@ -309,7 +309,8 @@ static struct crypto_alg srp_algs[10] = { {
	.cra_name		= "__ecb-serpent-avx2",
	.cra_driver_name	= "__driver-ecb-serpent-avx2",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_ctx),
.cra_alignmask  = 0,
@@ -329,7 +330,8 @@ static struct crypto_alg srp_algs[10] = { {
	.cra_name		= "__cbc-serpent-avx2",
	.cra_driver_name	= "__driver-cbc-serpent-avx2",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_ctx),
.cra_alignmask  = 0,
@@ -349,7 +351,8 @@ static struct crypto_alg srp_algs[10] = { {
	.cra_name		= "__ctr-serpent-avx2",
	.cra_driver_name	= "__driver-ctr-serpent-avx2",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct serpent_ctx),
.cra_alignmask  = 0,
@@ -370,7 +373,8 @@ static struct crypto_alg srp_algs[10] = { {
	.cra_name		= "__lrw-serpent-avx2",
	.cra_driver_name	= "__driver-lrw-serpent-avx2",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_lrw_ctx),
.cra_alignmask  = 0,
@@ -394,7 +398,8 @@ static struct crypto_alg srp_algs[10] = { {
	.cra_name		= "__xts-serpent-avx2",
	.cra_driver_name	= "__driver-xts-serpent-avx2",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_xts_ctx),
.cra_alignmask  = 0,
-- 
2.1.0




[PATCH 02/16] crypto: /proc/crypto: identify internal ciphers

2015-03-19 Thread Stephan Mueller
Now that internal ciphers cannot be allocated via the kernel crypto
API, callers need a way to identify which ciphers are off limits. Add
a boolean field to the /proc/crypto output that identifies such
internal ciphers.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 crypto/proc.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/crypto/proc.c b/crypto/proc.c
index 4a0a7aa..4ffe73b 100644
--- a/crypto/proc.c
+++ b/crypto/proc.c
@@ -89,6 +89,9 @@ static int c_show(struct seq_file *m, void *p)
	seq_printf(m, "selftest     : %s\n",
		   (alg->cra_flags & CRYPTO_ALG_TESTED) ?
		   "passed" : "unknown");
+	seq_printf(m, "internal     : %s\n",
+		   (alg->cra_flags & CRYPTO_ALG_INTERNAL) ?
+		   "yes" : "no");
 
	if (alg->cra_flags & CRYPTO_ALG_LARVAL) {
		seq_printf(m, "type         : larval\n");
-- 
2.1.0




[PATCH 06/16] crypto: mark AVX Camellia helper ciphers

2015-03-19 Thread Stephan Mueller
Flag all AVX Camellia helper ciphers as internal ciphers to prevent
them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/camellia_aesni_avx_glue.c | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/crypto/camellia_aesni_avx_glue.c b/arch/x86/crypto/camellia_aesni_avx_glue.c
index ed38d95..78818a1 100644
--- a/arch/x86/crypto/camellia_aesni_avx_glue.c
+++ b/arch/x86/crypto/camellia_aesni_avx_glue.c
@@ -335,7 +335,8 @@ static struct crypto_alg cmll_algs[10] = { {
	.cra_name		= "__ecb-camellia-aesni",
	.cra_driver_name	= "__driver-ecb-camellia-aesni",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAMELLIA_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct camellia_ctx),
.cra_alignmask  = 0,
@@ -354,7 +355,8 @@ static struct crypto_alg cmll_algs[10] = { {
	.cra_name		= "__cbc-camellia-aesni",
	.cra_driver_name	= "__driver-cbc-camellia-aesni",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAMELLIA_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct camellia_ctx),
.cra_alignmask  = 0,
@@ -373,7 +375,8 @@ static struct crypto_alg cmll_algs[10] = { {
	.cra_name		= "__ctr-camellia-aesni",
	.cra_driver_name	= "__driver-ctr-camellia-aesni",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct camellia_ctx),
.cra_alignmask  = 0,
@@ -393,7 +396,8 @@ static struct crypto_alg cmll_algs[10] = { {
	.cra_name		= "__lrw-camellia-aesni",
	.cra_driver_name	= "__driver-lrw-camellia-aesni",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAMELLIA_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct camellia_lrw_ctx),
.cra_alignmask  = 0,
@@ -416,7 +420,8 @@ static struct crypto_alg cmll_algs[10] = { {
	.cra_name		= "__xts-camellia-aesni",
	.cra_driver_name	= "__driver-xts-camellia-aesni",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAMELLIA_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct camellia_xts_ctx),
.cra_alignmask  = 0,
-- 
2.1.0




[PATCH 00/16] crypto: restrict usage of helper ciphers

2015-03-19 Thread Stephan Mueller
Hi,

Based on the discussion in the thread [1], a flag is added to the
kernel crypto API to allow ciphers to be marked as internal.

The patch set is tested in FIPS and non-FIPS mode. In addition, the
enforcement is tested that the helper cipher __driver-gcm-aes-aesni
cannot be loaded while the rfc4106-gcm-aesni wrapper around it remains
usable, demonstrating that the patch works. The testing also shows
that __driver-gcm-aes-aesni is still subject to the testmgr self test
and can therefore be used in FIPS mode.

All cipher implementations whose definitions have a cra_priority of 0
are marked as internal ciphers to prevent them from being called by
users.

The testing also includes the invocation of normal crypto operations
from user space via AF_ALG and libkcapi, showing that all of them work
unaffected.

[1] http://comments.gmane.org/gmane.linux.kernel.cryptoapi/13705

Stephan Mueller (16):
  crypto: prevent helper ciphers from being used
  crypto: /proc/crypto: identify internal ciphers
  crypto: mark AES-NI helper ciphers
  crypto: mark AES-NI Camellia helper ciphers
  crypto: mark CAST5 helper ciphers
  crypto: mark AVX Camellia helper ciphers
  crypto: mark CAST6 helper ciphers
  crypto: mark ghash clmulni helper ciphers
  crypto: mark Serpent AVX2 helper ciphers
  crypto: mark Serpent AVX helper ciphers
  crypto: mark Serpent SSE2 helper ciphers
  crypto: mark Twofish AVX helper ciphers
  crypto: mark NEON bit sliced AES helper ciphers
  crypto: mark ARMv8 AES helper ciphers
  crypto: mark GHASH ARMv8 vmull.p64 helper ciphers
  crypto: mark 64 bit ARMv8 AES helper ciphers

 arch/arm/crypto/aes-ce-glue.c  | 12 
 arch/arm/crypto/aesbs-glue.c   |  9 ++---
 arch/arm/crypto/ghash-ce-glue.c|  2 +-
 arch/arm64/crypto/aes-glue.c   | 12 
 arch/x86/crypto/aesni-intel_glue.c | 19 ---
 arch/x86/crypto/camellia_aesni_avx2_glue.c | 15 ++-
 arch/x86/crypto/camellia_aesni_avx_glue.c  | 15 ++-
 arch/x86/crypto/cast5_avx_glue.c   |  9 ++---
 arch/x86/crypto/cast6_avx_glue.c   | 15 ++-
 arch/x86/crypto/ghash-clmulni-intel_glue.c |  3 ++-
 arch/x86/crypto/serpent_avx2_glue.c| 15 ++-
 arch/x86/crypto/serpent_avx_glue.c | 15 ++-
 arch/x86/crypto/serpent_sse2_glue.c| 15 ++-
 arch/x86/crypto/twofish_avx_glue.c | 15 ++-
 crypto/ablkcipher.c|  2 +-
 crypto/aead.c  |  2 +-
 crypto/api.c   | 21 -
 crypto/internal.h  |  2 ++
 crypto/proc.c  |  3 +++
 include/linux/crypto.h |  6 ++
 20 files changed, 146 insertions(+), 61 deletions(-)

-- 
2.1.0




[PATCH 11/16] crypto: mark Serpent SSE2 helper ciphers

2015-03-19 Thread Stephan Mueller
Flag all Serpent SSE2 helper ciphers as internal ciphers to prevent
them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/serpent_sse2_glue.c | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/crypto/serpent_sse2_glue.c b/arch/x86/crypto/serpent_sse2_glue.c
index bf025ad..3643dd5 100644
--- a/arch/x86/crypto/serpent_sse2_glue.c
+++ b/arch/x86/crypto/serpent_sse2_glue.c
@@ -387,7 +387,8 @@ static struct crypto_alg serpent_algs[10] = { {
	.cra_name		= "__ecb-serpent-sse2",
	.cra_driver_name	= "__driver-ecb-serpent-sse2",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_ctx),
.cra_alignmask  = 0,
@@ -406,7 +407,8 @@ static struct crypto_alg serpent_algs[10] = { {
	.cra_name		= "__cbc-serpent-sse2",
	.cra_driver_name	= "__driver-cbc-serpent-sse2",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_ctx),
.cra_alignmask  = 0,
@@ -425,7 +427,8 @@ static struct crypto_alg serpent_algs[10] = { {
	.cra_name		= "__ctr-serpent-sse2",
	.cra_driver_name	= "__driver-ctr-serpent-sse2",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct serpent_ctx),
.cra_alignmask  = 0,
@@ -445,7 +448,8 @@ static struct crypto_alg serpent_algs[10] = { {
	.cra_name		= "__lrw-serpent-sse2",
	.cra_driver_name	= "__driver-lrw-serpent-sse2",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_lrw_ctx),
.cra_alignmask  = 0,
@@ -468,7 +472,8 @@ static struct crypto_alg serpent_algs[10] = { {
	.cra_name		= "__xts-serpent-sse2",
	.cra_driver_name	= "__driver-xts-serpent-sse2",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = SERPENT_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct serpent_xts_ctx),
.cra_alignmask  = 0,
-- 
2.1.0




[PATCH 03/16] crypto: mark AES-NI helper ciphers

2015-03-19 Thread Stephan Mueller
Flag all AES-NI helper ciphers as internal ciphers to prevent them from
being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/aesni-intel_glue.c | 19 ---
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 6893f49..3d10a84 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -1262,7 +1262,7 @@ static struct crypto_alg aesni_algs[] = { {
	.cra_name		= "__aes-aesni",
	.cra_driver_name	= "__driver-aes-aesni",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_CIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_CIPHER | CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct crypto_aes_ctx) +
  AESNI_ALIGN - 1,
@@ -1281,7 +1281,8 @@ static struct crypto_alg aesni_algs[] = { {
	.cra_name		= "__ecb-aes-aesni",
	.cra_driver_name	= "__driver-ecb-aes-aesni",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct crypto_aes_ctx) +
  AESNI_ALIGN - 1,
@@ -1301,7 +1302,8 @@ static struct crypto_alg aesni_algs[] = { {
	.cra_name		= "__cbc-aes-aesni",
	.cra_driver_name	= "__driver-cbc-aes-aesni",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct crypto_aes_ctx) +
  AESNI_ALIGN - 1,
@@ -1365,7 +1367,8 @@ static struct crypto_alg aesni_algs[] = { {
	.cra_name		= "__ctr-aes-aesni",
	.cra_driver_name	= "__driver-ctr-aes-aesni",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct crypto_aes_ctx) +
  AESNI_ALIGN - 1,
@@ -1409,7 +1412,7 @@ static struct crypto_alg aesni_algs[] = { {
	.cra_name		= "__gcm-aes-aesni",
	.cra_driver_name	= "__driver-gcm-aes-aesni",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_AEAD,
+   .cra_flags  = CRYPTO_ALG_TYPE_AEAD | CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct aesni_rfc4106_gcm_ctx) +
  AESNI_ALIGN,
@@ -1479,7 +1482,8 @@ static struct crypto_alg aesni_algs[] = { {
	.cra_name		= "__lrw-aes-aesni",
	.cra_driver_name	= "__driver-lrw-aes-aesni",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct aesni_lrw_ctx),
.cra_alignmask  = 0,
@@ -1500,7 +1504,8 @@ static struct crypto_alg aesni_algs[] = { {
	.cra_name		= "__xts-aes-aesni",
	.cra_driver_name	= "__driver-xts-aes-aesni",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct aesni_xts_ctx),
.cra_alignmask  = 0,
-- 
2.1.0




[PATCH 07/16] crypto: mark CAST6 helper ciphers

2015-03-19 Thread Stephan Mueller
Flag all CAST6 helper ciphers as internal ciphers to prevent them
from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/cast6_avx_glue.c | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/crypto/cast6_avx_glue.c b/arch/x86/crypto/cast6_avx_glue.c
index 0160f68..f448810 100644
--- a/arch/x86/crypto/cast6_avx_glue.c
+++ b/arch/x86/crypto/cast6_avx_glue.c
@@ -372,7 +372,8 @@ static struct crypto_alg cast6_algs[10] = { {
	.cra_name		= "__ecb-cast6-avx",
	.cra_driver_name	= "__driver-ecb-cast6-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAST6_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct cast6_ctx),
.cra_alignmask  = 0,
@@ -391,7 +392,8 @@ static struct crypto_alg cast6_algs[10] = { {
	.cra_name		= "__cbc-cast6-avx",
	.cra_driver_name	= "__driver-cbc-cast6-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAST6_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct cast6_ctx),
.cra_alignmask  = 0,
@@ -410,7 +412,8 @@ static struct crypto_alg cast6_algs[10] = { {
	.cra_name		= "__ctr-cast6-avx",
	.cra_driver_name	= "__driver-ctr-cast6-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct cast6_ctx),
.cra_alignmask  = 0,
@@ -430,7 +433,8 @@ static struct crypto_alg cast6_algs[10] = { {
	.cra_name		= "__lrw-cast6-avx",
	.cra_driver_name	= "__driver-lrw-cast6-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAST6_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct cast6_lrw_ctx),
.cra_alignmask  = 0,
@@ -453,7 +457,8 @@ static struct crypto_alg cast6_algs[10] = { {
	.cra_name		= "__xts-cast6-avx",
	.cra_driver_name	= "__driver-xts-cast6-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAST6_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct cast6_xts_ctx),
.cra_alignmask  = 0,
-- 
2.1.0




[PATCH 14/16] crypto: mark ARMv8 AES helper ciphers

2015-03-19 Thread Stephan Mueller
Flag all ARMv8 AES helper ciphers as internal ciphers to prevent
them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/arm/crypto/aes-ce-glue.c | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/arm/crypto/aes-ce-glue.c b/arch/arm/crypto/aes-ce-glue.c
index d2ee591..b445a5d 100644
--- a/arch/arm/crypto/aes-ce-glue.c
+++ b/arch/arm/crypto/aes-ce-glue.c
@@ -354,7 +354,8 @@ static struct crypto_alg aes_algs[] = { {
	.cra_name		= "__ecb-aes-ce",
	.cra_driver_name	= "__driver-ecb-aes-ce",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct crypto_aes_ctx),
.cra_alignmask  = 7,
@@ -372,7 +373,8 @@ static struct crypto_alg aes_algs[] = { {
	.cra_name		= "__cbc-aes-ce",
	.cra_driver_name	= "__driver-cbc-aes-ce",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct crypto_aes_ctx),
.cra_alignmask  = 7,
@@ -390,7 +392,8 @@ static struct crypto_alg aes_algs[] = { {
	.cra_name		= "__ctr-aes-ce",
	.cra_driver_name	= "__driver-ctr-aes-ce",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct crypto_aes_ctx),
.cra_alignmask  = 7,
@@ -408,7 +411,8 @@ static struct crypto_alg aes_algs[] = { {
	.cra_name		= "__xts-aes-ce",
	.cra_driver_name	= "__driver-xts-aes-ce",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct crypto_aes_xts_ctx),
.cra_alignmask  = 7,
-- 
2.1.0




[PATCH 05/16] crypto: mark CAST5 helper ciphers

2015-03-19 Thread Stephan Mueller
Flag all CAST5 helper ciphers as internal ciphers to prevent them
from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/cast5_avx_glue.c | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/crypto/cast5_avx_glue.c b/arch/x86/crypto/cast5_avx_glue.c
index 60ada67..236c809 100644
--- a/arch/x86/crypto/cast5_avx_glue.c
+++ b/arch/x86/crypto/cast5_avx_glue.c
@@ -341,7 +341,8 @@ static struct crypto_alg cast5_algs[6] = { {
	.cra_name		= "__ecb-cast5-avx",
	.cra_driver_name	= "__driver-ecb-cast5-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAST5_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct cast5_ctx),
.cra_alignmask  = 0,
@@ -360,7 +361,8 @@ static struct crypto_alg cast5_algs[6] = { {
	.cra_name		= "__cbc-cast5-avx",
	.cra_driver_name	= "__driver-cbc-cast5-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = CAST5_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct cast5_ctx),
.cra_alignmask  = 0,
@@ -379,7 +381,8 @@ static struct crypto_alg cast5_algs[6] = { {
	.cra_name		= "__ctr-cast5-avx",
	.cra_driver_name	= "__driver-ctr-cast5-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct cast5_ctx),
.cra_alignmask  = 0,
-- 
2.1.0




[PATCH 12/16] crypto: mark Twofish AVX helper ciphers

2015-03-19 Thread Stephan Mueller
Flag all Twofish AVX helper ciphers as internal ciphers to prevent
them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/x86/crypto/twofish_avx_glue.c | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/crypto/twofish_avx_glue.c b/arch/x86/crypto/twofish_avx_glue.c
index 1ac531e..b5e2d56 100644
--- a/arch/x86/crypto/twofish_avx_glue.c
+++ b/arch/x86/crypto/twofish_avx_glue.c
@@ -340,7 +340,8 @@ static struct crypto_alg twofish_algs[10] = { {
	.cra_name		= "__ecb-twofish-avx",
	.cra_driver_name	= "__driver-ecb-twofish-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = TF_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct twofish_ctx),
.cra_alignmask  = 0,
@@ -359,7 +360,8 @@ static struct crypto_alg twofish_algs[10] = { {
	.cra_name		= "__cbc-twofish-avx",
	.cra_driver_name	= "__driver-cbc-twofish-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = TF_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct twofish_ctx),
.cra_alignmask  = 0,
@@ -378,7 +380,8 @@ static struct crypto_alg twofish_algs[10] = { {
	.cra_name		= "__ctr-twofish-avx",
	.cra_driver_name	= "__driver-ctr-twofish-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct twofish_ctx),
.cra_alignmask  = 0,
@@ -398,7 +401,8 @@ static struct crypto_alg twofish_algs[10] = { {
	.cra_name		= "__lrw-twofish-avx",
	.cra_driver_name	= "__driver-lrw-twofish-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = TF_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct twofish_lrw_ctx),
.cra_alignmask  = 0,
@@ -421,7 +425,8 @@ static struct crypto_alg twofish_algs[10] = { {
	.cra_name		= "__xts-twofish-avx",
	.cra_driver_name	= "__driver-xts-twofish-avx",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = TF_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct twofish_xts_ctx),
.cra_alignmask  = 0,
-- 
2.1.0




[PATCH 16/16] crypto: mark 64 bit ARMv8 AES helper ciphers

2015-03-19 Thread Stephan Mueller
Flag all 64 bit ARMv8 AES helper ciphers as internal ciphers to
prevent them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/arm64/crypto/aes-glue.c | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index b1b5b89..05d9e16 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64/crypto/aes-glue.c
@@ -284,7 +284,8 @@ static struct crypto_alg aes_algs[] = { {
	.cra_name		= "__ecb-aes-" MODE,
	.cra_driver_name	= "__driver-ecb-aes-" MODE,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct crypto_aes_ctx),
.cra_alignmask  = 7,
@@ -302,7 +303,8 @@ static struct crypto_alg aes_algs[] = { {
	.cra_name		= "__cbc-aes-" MODE,
	.cra_driver_name	= "__driver-cbc-aes-" MODE,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct crypto_aes_ctx),
.cra_alignmask  = 7,
@@ -320,7 +322,8 @@ static struct crypto_alg aes_algs[] = { {
	.cra_name		= "__ctr-aes-" MODE,
	.cra_driver_name	= "__driver-ctr-aes-" MODE,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct crypto_aes_ctx),
.cra_alignmask  = 7,
@@ -338,7 +341,8 @@ static struct crypto_alg aes_algs[] = { {
	.cra_name		= "__xts-aes-" MODE,
	.cra_driver_name	= "__driver-xts-aes-" MODE,
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct crypto_aes_xts_ctx),
.cra_alignmask  = 7,
-- 
2.1.0




[PATCH 15/16] crypto: mark GHASH ARMv8 vmull.p64 helper ciphers

2015-03-19 Thread Stephan Mueller
Flag all GHASH ARMv8 vmull.p64 helper ciphers as internal ciphers
to prevent them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/arm/crypto/ghash-ce-glue.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/crypto/ghash-ce-glue.c b/arch/arm/crypto/ghash-ce-glue.c
index 8c959d1..9960aed 100644
--- a/arch/arm/crypto/ghash-ce-glue.c
+++ b/arch/arm/crypto/ghash-ce-glue.c
@@ -141,7 +141,7 @@ static struct shash_alg ghash_alg = {
	.cra_name		= "ghash",
	.cra_driver_name	= "__driver-ghash-ce",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_SHASH,
+   .cra_flags  = CRYPTO_ALG_TYPE_SHASH | CRYPTO_ALG_INTERNAL,
.cra_blocksize  = GHASH_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct ghash_key),
.cra_module = THIS_MODULE,
-- 
2.1.0




Re: [PATCH 01/16] crypto: prevent helper ciphers from being used

2015-03-19 Thread Stephan Mueller
On Thursday, 19 March 2015, 18:16:30, Herbert Xu wrote:

Hi Herbert,

> On Thu, Mar 19, 2015 at 07:57:36AM +0100, Stephan Mueller wrote:
> > diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c
> > index db201bca..2cd83ad 100644
> > --- a/crypto/ablkcipher.c
> > +++ b/crypto/ablkcipher.c
> > @@ -688,7 +688,7 @@ struct crypto_ablkcipher *crypto_alloc_ablkcipher(const char *alg_name,
> >  		goto err;
> >  	}
> > 
> > -	tfm = __crypto_alloc_tfm(alg, type, mask);
> > +	tfm = __crypto_alloc_tfm_safe(alg, type, mask);
> 
> Rather than changing every algorithm type, I'd rather suggest
> that you modify crypto_alg_mod_lookup so that it's kept in one
> spot.  Just copy what we currently do for CRYPTO_ALG_TESTED.

How can you distinguish between calls coming from crypto_*_spawn (which
we need to allow) and calls that come from the normal API calls (which
we should block)?


Thanks,


Ciao
Stephan


Re: [PATCH 01/16] crypto: prevent helper ciphers from being used

2015-03-19 Thread Herbert Xu
On Thu, Mar 19, 2015 at 08:23:58AM +0100, Stephan Mueller wrote:
> How can you distinguish between calls coming from crypto_*_spawn (which
> we need to allow) and calls that come from the normal API calls (which
> we should block)?

crypto_*_spawn should not be the place where you make the call on
whether internals are allowed.  You should put that information
into places such as ablk_init_common or wherever these internals
are allocated.

So in ablk_init_common you would do

cryptd_tfm = cryptd_alloc_ablkcipher(drv_name, CRYPTO_ALG_INTERNAL,
 CRYPTO_ALG_INTERNAL);

IOW internals are disallowed if you don't specify it in the mask,
but you can get them if you do specify it in the mask (and the
corresponding bit in the type).

Cheers,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 01/16] crypto: prevent helper ciphers from being used

2015-03-19 Thread Herbert Xu
On Thu, Mar 19, 2015 at 07:57:36AM +0100, Stephan Mueller wrote:
>
> diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c
> index db201bca..2cd83ad 100644
> --- a/crypto/ablkcipher.c
> +++ b/crypto/ablkcipher.c
> @@ -688,7 +688,7 @@ struct crypto_ablkcipher *crypto_alloc_ablkcipher(const char *alg_name,
>  		goto err;
>  	}
> 
> -	tfm = __crypto_alloc_tfm(alg, type, mask);
> +	tfm = __crypto_alloc_tfm_safe(alg, type, mask);

Rather than changing every algorithm type, I'd rather suggest
that you modify crypto_alg_mod_lookup so that it's kept in one
spot.  Just copy what we currently do for CRYPTO_ALG_TESTED.

Thanks,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[PATCH 13/16] crypto: mark NEON bit sliced AES helper ciphers

2015-03-19 Thread Stephan Mueller
Flag all NEON bit sliced AES helper ciphers as internal ciphers to
prevent them from being called by normal users.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 arch/arm/crypto/aesbs-glue.c | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/arm/crypto/aesbs-glue.c b/arch/arm/crypto/aesbs-glue.c
index 15468fb..6d68529 100644
--- a/arch/arm/crypto/aesbs-glue.c
+++ b/arch/arm/crypto/aesbs-glue.c
@@ -301,7 +301,8 @@ static struct crypto_alg aesbs_algs[] = { {
	.cra_name		= "__cbc-aes-neonbs",
	.cra_driver_name	= "__driver-cbc-aes-neonbs",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct aesbs_cbc_ctx),
.cra_alignmask  = 7,
@@ -319,7 +320,8 @@ static struct crypto_alg aesbs_algs[] = { {
	.cra_name		= "__ctr-aes-neonbs",
	.cra_driver_name	= "__driver-ctr-aes-neonbs",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = 1,
.cra_ctxsize= sizeof(struct aesbs_ctr_ctx),
.cra_alignmask  = 7,
@@ -337,7 +339,8 @@ static struct crypto_alg aesbs_algs[] = { {
	.cra_name		= "__xts-aes-neonbs",
	.cra_driver_name	= "__driver-xts-aes-neonbs",
.cra_priority   = 0,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+ CRYPTO_ALG_INTERNAL,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct aesbs_xts_ctx),
.cra_alignmask  = 7,
-- 
2.1.0




[PATCH] crypto: qat - make error and info log messages more descriptive

2015-03-19 Thread Bruce Allan
Convert pr_info() and pr_err() log messages to dev_info() and dev_err(),
respectively, where possible.  This adds the module name and PCI B:D:F
to indicate which QAT device generated the log message.  The "QAT:"
prefix is removed from these log messages as it is now unnecessary.  A
few of these log messages get additional spelling/contextual fixes.

Signed-off-by: Bruce Allan bruce.w.al...@intel.com
---
 drivers/crypto/qat/qat_common/adf_accel_engine.c   |   21 +++--
 drivers/crypto/qat/qat_common/adf_aer.c|   19 +++--
 drivers/crypto/qat/qat_common/adf_cfg.c|5 +
 drivers/crypto/qat/qat_common/adf_ctl_drv.c|   38 ++---
 drivers/crypto/qat/qat_common/adf_dev_mgr.c|3 -
 drivers/crypto/qat/qat_common/adf_init.c   |   83 
 drivers/crypto/qat/qat_common/adf_transport.c  |   31 ---
 drivers/crypto/qat/qat_common/qat_crypto.c |5 +
 drivers/crypto/qat/qat_common/qat_hal.c|2 
 drivers/crypto/qat/qat_dh895xcc/adf_admin.c|3 -
 .../crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c |3 -
 drivers/crypto/qat/qat_dh895xcc/adf_isr.c  |   15 ++--
 12 files changed, 137 insertions(+), 91 deletions(-)

diff --git a/drivers/crypto/qat/qat_common/adf_accel_engine.c b/drivers/crypto/qat/qat_common/adf_accel_engine.c
index c77453b..97e8ea5 100644
--- a/drivers/crypto/qat/qat_common/adf_accel_engine.c
+++ b/drivers/crypto/qat/qat_common/adf_accel_engine.c
@@ -60,18 +60,19 @@ int adf_ae_fw_load(struct adf_accel_dev *accel_dev)
 
	if (request_firmware(&loader_data->uof_fw, hw_device->fw_name,
			     &accel_dev->accel_pci_dev.pci_dev->dev)) {
-		pr_err("QAT: Failed to load firmware %s\n", hw_device->fw_name);
+		dev_err(&GET_DEV(accel_dev), "Failed to load firmware %s\n",
+			hw_device->fw_name);
		return -EFAULT;
	}

	uof_size = loader_data->uof_fw->size;
	uof_addr = (void *)loader_data->uof_fw->data;
	if (qat_uclo_map_uof_obj(loader_data->fw_loader, uof_addr, uof_size)) {
-		pr_err("QAT: Failed to map UOF\n");
+		dev_err(&GET_DEV(accel_dev), "Failed to map UOF\n");
		goto out_err;
	}
	if (qat_uclo_wr_all_uimage(loader_data->fw_loader)) {
-		pr_err("QAT: Failed to map UOF\n");
+		dev_err(&GET_DEV(accel_dev), "Failed to map UOF\n");
		goto out_err;
	}
	return 0;
return 0;
@@ -104,8 +105,9 @@ int adf_ae_start(struct adf_accel_dev *accel_dev)
ae_ctr++;
}
}
-	pr_info("QAT: qat_dev%d started %d acceleration engines\n",
-		accel_dev->accel_id, ae_ctr);
+	dev_info(&GET_DEV(accel_dev),
+		 "qat_dev%d started %d acceleration engines\n",
+		 accel_dev->accel_id, ae_ctr);
return 0;
 }
 
@@ -121,8 +123,9 @@ int adf_ae_stop(struct adf_accel_dev *accel_dev)
ae_ctr++;
}
}
-	pr_info("QAT: qat_dev%d stopped %d acceleration engines\n",
-		accel_dev->accel_id, ae_ctr);
+	dev_info(&GET_DEV(accel_dev),
+		 "qat_dev%d stopped %d acceleration engines\n",
+		 accel_dev->accel_id, ae_ctr);
return 0;
 }
 
@@ -147,12 +150,12 @@ int adf_ae_init(struct adf_accel_dev *accel_dev)
 
	accel_dev->fw_loader = loader_data;
	if (qat_hal_init(accel_dev)) {
-		pr_err("QAT: Failed to init the AEs\n");
+		dev_err(&GET_DEV(accel_dev), "Failed to init the AEs\n");
		kfree(loader_data);
		return -EFAULT;
	}
	if (adf_ae_reset(accel_dev, 0)) {
-		pr_err("QAT: Failed to reset the AEs\n");
+		dev_err(&GET_DEV(accel_dev), "Failed to reset the AEs\n");
		qat_hal_deinit(loader_data->fw_loader);
		kfree(loader_data);
return -EFAULT;
diff --git a/drivers/crypto/qat/qat_common/adf_aer.c b/drivers/crypto/qat/qat_common/adf_aer.c
index fa1fef8..82e23b8 100644
--- a/drivers/crypto/qat/qat_common/adf_aer.c
+++ b/drivers/crypto/qat/qat_common/adf_aer.c
@@ -60,14 +60,14 @@ static pci_ers_result_t adf_error_detected(struct pci_dev *pdev,
 {
struct adf_accel_dev *accel_dev = adf_devmgr_pci_to_accel_dev(pdev);
 
-	pr_info("QAT: Acceleration driver hardware error detected.\n");
+	dev_info(&pdev->dev, "Acceleration driver hardware error detected.\n");
	if (!accel_dev) {
-		pr_err("QAT: Can't find acceleration device\n");
+		dev_err(&pdev->dev, "Can't find acceleration device\n");
		return PCI_ERS_RESULT_DISCONNECT;
	}

	if (state == pci_channel_io_perm_failure) {
-		pr_err("QAT: Can't recover from device error\n");
+		dev_err(&pdev->dev, "Can't recover from device error\n");
		return PCI_ERS_RESULT_DISCONNECT;
}
 
@@ -88,10 +88,12 @@ static 

[PATCH] crypto: qat - remove duplicate definition of Intel PCI vendor id

2015-03-19 Thread Bruce Allan
This define is a duplicate of the one in ./include/linux/pci_ids.h

Signed-off-by: Bruce Allan bruce.w.al...@intel.com
---
 drivers/crypto/qat/qat_common/adf_accel_devices.h |1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/crypto/qat/qat_common/adf_accel_devices.h b/drivers/crypto/qat/qat_common/adf_accel_devices.h
index 19c0efa..f22ce71 100644
--- a/drivers/crypto/qat/qat_common/adf_accel_devices.h
+++ b/drivers/crypto/qat/qat_common/adf_accel_devices.h
@@ -52,7 +52,6 @@
 #include <linux/io.h>
 #include "adf_cfg_common.h"
 
-#define PCI_VENDOR_ID_INTEL 0x8086
 #define ADF_DH895XCC_DEVICE_NAME "dh895xcc"
 #define ADF_DH895XCC_PCI_DEVICE_ID 0x435
 #define ADF_PCI_MAX_BARS 3



[PATCH] crypto: qat - fix typo in string

2015-03-19 Thread Bruce Allan
Signed-off-by: Bruce Allan bruce.w.al...@intel.com
---
 drivers/crypto/qat/qat_common/adf_cfg_strings.h |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/qat/qat_common/adf_cfg_strings.h b/drivers/crypto/qat/qat_common/adf_cfg_strings.h
index c7ac758..0d1a77e 100644
--- a/drivers/crypto/qat/qat_common/adf_cfg_strings.h
+++ b/drivers/crypto/qat/qat_common/adf_cfg_strings.h
@@ -59,7 +59,7 @@
 #define ADF_RING_SYM_TX "RingSymTx"
 #define ADF_RING_RND_TX "RingNrbgTx"
 #define ADF_RING_ASYM_RX "RingAsymRx"
-#define ADF_RING_SYM_RX "RinSymRx"
+#define ADF_RING_SYM_RX "RingSymRx"
 #define ADF_RING_RND_RX "RingNrbgRx"
 #define ADF_RING_DC_TX "RingTx"
 #define ADF_RING_DC_RX "RingRx"



Re: [PATCH v2 net-next 1/4] net: socket: add support for async operations

2015-03-19 Thread Al Viro
On Thu, Mar 19, 2015 at 10:43:16AM -0700, Tadeusz Struk wrote:
> On 03/19/2015 09:20 AM, Al Viro wrote:
> > is completely pointless.  Just have sock_read_iter() and sock_write_iter()
> > check if your new methods are present and use those if those are.
> 
> Ok, that will work for me too.
> 
> > What's more, I'm not at all sure that you want to pass iocb that way -
> > kernel-side msghdr isn't tied to userland one anymore, so we might as well
> > stash a pointer to iocb into it.  Voila - no new methods needed at all.
> 
> Good point, so what do you prefer - to add the iocb to msghdr or to call
> the new methods from sock_read_iter() and sock_write_iter()?
> Either way is good for me.

I'd probably add msg_iocb to the end of struct msghdr and explicitly zero it in
copy_msghdr_from_user() and get_compat_msghdr(), but you are asking the wrong
guy - that sort of choices in net/* falls on davem, not me.


[PATCH v3 net-next 0/3] Add support for async socket operations

2015-03-19 Thread Tadeusz Struk
After the iocb parameter was removed from the sendmsg() and recvmsg()
ops, the socket layer and the network stack no longer support async
operations. This patch set adds support for asynchronous operations on
sockets back.

Changes in v3:
* As suggested by Al Viro, instead of adding new aio_sendmsg and
  aio_recvmsg functions, a pointer to the iocb is added to the
  kernel-side msghdr structure. This way no change to aio.c is
  required.

Changes in v2:
* removed redundant total_size param from aio_sendmsg and aio_recvmsg functions

--
Tadeusz Struk (3):
  net: socket: add support for async operations
  crypto: af_alg - Allow to link sgl
  crypto: algif - change algif_skcipher to be asynchronous


 crypto/af_alg.c |   18 +++-
 crypto/algif_skcipher.c |  233 ++-
 include/crypto/if_alg.h |4 +
 include/linux/socket.h  |1 
 net/compat.c|2 
 net/socket.c|8 +-
 6 files changed, 251 insertions(+), 15 deletions(-)

-- 


[PATCH v3 net-next 1/3] net: socket: add support for async operations

2015-03-19 Thread Tadeusz Struk
Add support for async operations on sockets by carrying the caller's
iocb in the kernel-side struct msghdr.

Signed-off-by: Tadeusz Struk tadeusz.st...@intel.com
---
 include/linux/socket.h |1 +
 net/compat.c   |2 ++
 net/socket.c   |8 ++--
 3 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/include/linux/socket.h b/include/linux/socket.h
index fab4d0d..c9852ef 100644
--- a/include/linux/socket.h
+++ b/include/linux/socket.h
@@ -51,6 +51,7 @@ struct msghdr {
	void		*msg_control;	/* ancillary data */
	__kernel_size_t	msg_controllen;	/* ancillary data buffer length */
	unsigned int	msg_flags;	/* flags on received message */
+	struct kiocb	*msg_iocb;	/* ptr to iocb for async requests */
 };
  
 struct user_msghdr {
diff --git a/net/compat.c b/net/compat.c
index 4784431..7fb7ad1 100644
--- a/net/compat.c
+++ b/net/compat.c
@@ -72,6 +72,8 @@ ssize_t get_compat_msghdr(struct msghdr *kmsg,
	if (nr_segs > UIO_MAXIOV)
		return -EMSGSIZE;

+	kmsg->msg_iocb = NULL;
+
err = compat_rw_copy_check_uvector(save_addr ? READ : WRITE,
   compat_ptr(uiov), nr_segs,
   UIO_FASTIOV, *iov, iov);
diff --git a/net/socket.c b/net/socket.c
index 95d3085..fafc50a 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -798,7 +798,8 @@ static ssize_t sock_read_iter(struct kiocb *iocb, struct iov_iter *to)
 {
	struct file *file = iocb->ki_filp;
	struct socket *sock = file->private_data;
-	struct msghdr msg = {.msg_iter = *to};
+	struct msghdr msg = {.msg_iter = *to,
+			     .msg_iocb = iocb};
	ssize_t res;

	if (file->f_flags & O_NONBLOCK)
@@ -819,7 +820,8 @@ static ssize_t sock_write_iter(struct kiocb *iocb, struct iov_iter *from)
 {
	struct file *file = iocb->ki_filp;
	struct socket *sock = file->private_data;
-	struct msghdr msg = {.msg_iter = *from};
+	struct msghdr msg = {.msg_iter = *from,
+			     .msg_iocb = iocb};
	ssize_t res;

	if (iocb->ki_pos != 0)
@@ -1890,6 +1892,8 @@ static ssize_t copy_msghdr_from_user(struct msghdr *kmsg,
	if (nr_segs > UIO_MAXIOV)
		return -EMSGSIZE;

+	kmsg->msg_iocb = NULL;
+
err = rw_copy_check_uvector(save_addr ? READ : WRITE,
uiov, nr_segs,
UIO_FASTIOV, *iov, iov);



[PATCH v3 net-next 3/3] crypto: algif - change algif_skcipher to be asynchronous

2015-03-19 Thread Tadeusz Struk
From: Tadeusz Struk tadeusz.st...@intel.com

The way algif_skcipher currently works is that on sendmsg/sendpage it
builds an sgl for the input data, and then on read/recvmsg it submits
the job for encryption, putting the user to sleep until the data is
processed. This way it can only handle one job at a given time.
This patch changes it to be asynchronous by adding AIO support.

Signed-off-by: Tadeusz Struk tadeusz.st...@intel.com
---
 crypto/algif_skcipher.c |  233 ++-
 1 file changed, 226 insertions(+), 7 deletions(-)

diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index b9743dc..8276f21 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -39,6 +39,7 @@ struct skcipher_ctx {
 
struct af_alg_completion completion;
 
+   atomic_t inflight;
unsigned used;
 
unsigned int len;
@@ -49,9 +50,65 @@ struct skcipher_ctx {
struct ablkcipher_request req;
 };
 
+struct skcipher_async_rsgl {
+   struct af_alg_sgl sgl;
+   struct list_head list;
+};
+
+struct skcipher_async_req {
+   struct kiocb *iocb;
+   struct skcipher_async_rsgl first_sgl;
+   struct list_head list;
+   struct scatterlist *tsg;
+   char iv[];
+};
+
+#define GET_SREQ(areq, ctx) (struct skcipher_async_req *)((char *)areq + \
+	crypto_ablkcipher_reqsize(crypto_ablkcipher_reqtfm(&ctx->req)))
+
+#define GET_REQ_SIZE(ctx) \
+	crypto_ablkcipher_reqsize(crypto_ablkcipher_reqtfm(&ctx->req))
+
+#define GET_IV_SIZE(ctx) \
+	crypto_ablkcipher_ivsize(crypto_ablkcipher_reqtfm(&ctx->req))
+
 #define MAX_SGL_ENTS ((4096 - sizeof(struct skcipher_sg_list)) / \
  sizeof(struct scatterlist) - 1)
 
+static void skcipher_free_async_sgls(struct skcipher_async_req *sreq)
+{
+   struct skcipher_async_rsgl *rsgl, *tmp;
+   struct scatterlist *sgl;
+   struct scatterlist *sg;
+   int i, n;
+
+	list_for_each_entry_safe(rsgl, tmp, &sreq->list, list) {
+		af_alg_free_sg(&rsgl->sgl);
+		if (rsgl != &sreq->first_sgl)
+			kfree(rsgl);
+	}
+	sgl = sreq->tsg;
+	n = sg_nents(sgl);
+	for_each_sg(sgl, sg, n, i)
+		put_page(sg_page(sg));
+
+	kfree(sreq->tsg);
+}
+
+static void skcipher_async_cb(struct crypto_async_request *req, int err)
+{
+	struct sock *sk = req->data;
+	struct alg_sock *ask = alg_sk(sk);
+	struct skcipher_ctx *ctx = ask->private;
+	struct skcipher_async_req *sreq = GET_SREQ(req, ctx);
+	struct kiocb *iocb = sreq->iocb;
+
+	atomic_dec(&ctx->inflight);
+	skcipher_free_async_sgls(sreq);
+	kfree(req);
+	aio_complete(iocb, err, err);
+}
+
 static inline int skcipher_sndbuf(struct sock *sk)
 {
struct alg_sock *ask = alg_sk(sk);
@@ -96,7 +153,7 @@ static int skcipher_alloc_sgl(struct sock *sk)
return 0;
 }
 
-static void skcipher_pull_sgl(struct sock *sk, int used)
+static void skcipher_pull_sgl(struct sock *sk, int used, int put)
 {
struct alg_sock *ask = alg_sk(sk);
	struct skcipher_ctx *ctx = ask->private;
@@ -123,8 +180,8 @@ static void skcipher_pull_sgl(struct sock *sk, int used)
 
if (sg[i].length)
return;
-
-   put_page(sg_page(sg + i));
+   if (put)
+   put_page(sg_page(sg + i));
sg_assign_page(sg + i, NULL);
}
 
@@ -143,7 +200,7 @@ static void skcipher_free_sgl(struct sock *sk)
struct alg_sock *ask = alg_sk(sk);
struct skcipher_ctx *ctx = ask-private;
 
-   skcipher_pull_sgl(sk, ctx->used);
+   skcipher_pull_sgl(sk, ctx->used, 1);
 }
 
 static int skcipher_wait_for_wmem(struct sock *sk, unsigned flags)
@@ -424,8 +481,149 @@ unlock:
return err ?: size;
 }
 
-static int skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
-   size_t ignored, int flags)
+static int skcipher_all_sg_nents(struct skcipher_ctx *ctx)
+{
+   struct skcipher_sg_list *sgl;
+   struct scatterlist *sg;
+   int nents = 0;
+
+   list_for_each_entry(sgl, &ctx->tsgl, list) {
+   sg = sgl->sg;
+
+   while (!sg->length)
+   sg++;
+
+   nents += sg_nents(sg);
+   }
+   return nents;
+}
+
+static int skcipher_recvmsg_async(struct socket *sock, struct msghdr *msg,
+ int flags)
+{
+   struct sock *sk = sock-sk;
+   struct alg_sock *ask = alg_sk(sk);
+   struct skcipher_ctx *ctx = ask->private;
+   struct skcipher_sg_list *sgl;
+   struct scatterlist *sg;
+   struct skcipher_async_req *sreq;
+   struct ablkcipher_request *req;
+   struct skcipher_async_rsgl *last_rsgl = NULL;
+   unsigned int len = 0, tx_nents = skcipher_all_sg_nents(ctx);
+   unsigned int reqlen = sizeof(struct 

[PATCH v3 net-next 2/3] crypto: af_alg - Allow to link sgl

2015-03-19 Thread Tadeusz Struk
From: Tadeusz Struk tadeusz.st...@intel.com

Allow af_alg sgls to be linked together.
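
For readers new to the mechanism, a minimal kernel-side sketch of
scatterlist chaining, which the hunks below build on (the tables a and b
are hypothetical; illustrative only, not part of the patch):

#include <linux/scatterlist.h>

static void chain_example(struct page *p0, struct page *p1, struct page *p2)
{
	/* two data slots plus one spare slot reserved for a chain link */
	struct scatterlist a[3], b[1];

	sg_init_table(a, 3);
	sg_init_table(b, 1);
	sg_set_page(&a[0], p0, PAGE_SIZE, 0);
	sg_set_page(&a[1], p1, PAGE_SIZE, 0);
	sg_set_page(&b[0], p2, PAGE_SIZE, 0);
	sg_mark_end(&a[1]);	/* data ends at a[1]; a[2] stays spare */

	/* what af_alg_link_sg() below does: drop the end mark and turn
	 * the spare slot into a link entry pointing at the next table */
	sg_unmark_end(&a[1]);
	sg_chain(a, 3, b);
}

This is why af_alg_make_sg() now sizes the table at npages + 1: the extra
entry is the slot that sg_chain() can later consume.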

Signed-off-by: Tadeusz Struk tadeusz.st...@intel.com
---
 crypto/af_alg.c |   18 +-
 include/crypto/if_alg.h |4 +++-
 2 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 7f8b7edc..26089d1 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -358,8 +358,8 @@ int af_alg_make_sg(struct af_alg_sgl *sgl, struct iov_iter *iter, int len)
npages = (off + n + PAGE_SIZE - 1) >> PAGE_SHIFT;
if (WARN_ON(npages == 0))
return -EINVAL;
-
-   sg_init_table(sgl->sg, npages);
+   /* Add one extra for linking */
+   sg_init_table(sgl->sg, npages + 1);
 
for (i = 0, len = n; i < npages; i++) {
int plen = min_t(int, len, PAGE_SIZE - off);
@@ -369,18 +369,26 @@ int af_alg_make_sg(struct af_alg_sgl *sgl, struct iov_iter *iter, int len)
off = 0;
len -= plen;
}
+   sg_mark_end(sgl->sg + npages - 1);
+   sgl->npages = npages;
+
return n;
 }
 EXPORT_SYMBOL_GPL(af_alg_make_sg);
 
+void af_alg_link_sg(struct af_alg_sgl *sgl_prev, struct af_alg_sgl *sgl_new)
+{
+   sg_unmark_end(sgl_prev->sg + sgl_prev->npages - 1);
+   sg_chain(sgl_prev->sg, sgl_prev->npages + 1, sgl_new->sg);
+}
+EXPORT_SYMBOL(af_alg_link_sg);
+
 void af_alg_free_sg(struct af_alg_sgl *sgl)
 {
int i;
 
-   i = 0;
-   do {
+   for (i = 0; i < sgl->npages; i++)
put_page(sgl->pages[i]);
-   } while (!sg_is_last(sgl->sg + (i++)));
 }
 EXPORT_SYMBOL_GPL(af_alg_free_sg);
 
diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
index 178525e..018afb2 100644
--- a/include/crypto/if_alg.h
+++ b/include/crypto/if_alg.h
@@ -58,8 +58,9 @@ struct af_alg_type {
 };
 
 struct af_alg_sgl {
-   struct scatterlist sg[ALG_MAX_PAGES];
+   struct scatterlist sg[ALG_MAX_PAGES + 1];
struct page *pages[ALG_MAX_PAGES];
+   unsigned int npages;
 };
 
 int af_alg_register_type(const struct af_alg_type *type);
@@ -70,6 +71,7 @@ int af_alg_accept(struct sock *sk, struct socket *newsock);
 
 int af_alg_make_sg(struct af_alg_sgl *sgl, struct iov_iter *iter, int len);
 void af_alg_free_sg(struct af_alg_sgl *sgl);
+void af_alg_link_sg(struct af_alg_sgl *sgl_prev, struct af_alg_sgl *sgl_new);
 
 int af_alg_cmsg_send(struct msghdr *msg, struct af_alg_control *con);
 



Re: [PATCH v3 net-next 0/3] Add support for async socket operations

2015-03-19 Thread Al Viro
On Thu, Mar 19, 2015 at 12:31:19PM -0700, Tadeusz Struk wrote:
 After the iocb parameter has been removed from the sendmsg() and recvmsg() ops,
 the socket layer and the network stack no longer support async operations.
 This patch set adds support for asynchronous operations on sockets back.
 
 Changes in v3:
 * As suggested by Al Viro, instead of adding new functions aio_sendmsg
   and aio_recvmsg, added a ptr to iocb into the kernel-side msghdr structure.
   This way no change to aio.c is required.
 
 Changes in v2:
 * removed redundant total_size param from aio_sendmsg and aio_recvmsg functions

I think I can live with that.  Christoph, do you have any objections to
that series?


Re: [PATCH v2 net-next 1/4] net: socket: add support for async operations

2015-03-19 Thread Tadeusz Struk
On 03/19/2015 09:20 AM, Al Viro wrote:
 is completely pointless.  Just have sock_read_iter() and sock_write_iter()
 check if your new methods are present and use those if those are.
 

Ok, that will work for me too.

 What's more, I'm not at all sure that you want to pass iocb that way -
 kernel-side msghdr isn't tied to userland one anymore, so we might as well
 stash a pointer to iocb into it.  Voila - no new methods needed at all.

Good point, so what do you prefer - to add iocb to msghdr or to call the new
methods from sock_read_iter() and sock_write_iter()?
Either way is good for me.



Re: [PATCH v2 5/5] crypto: talitos: Add software backlog queue handling

2015-03-19 Thread Kim Phillips
On Thu, 19 Mar 2015 17:56:57 +0200
Horia Geantă horia.gea...@freescale.com wrote:

 On 3/18/2015 12:03 AM, Kim Phillips wrote:
  On Tue, 17 Mar 2015 19:58:55 +0200
  Horia Geantă horia.gea...@freescale.com wrote:
  
  On 3/17/2015 2:19 AM, Kim Phillips wrote:
  On Mon, 16 Mar 2015 12:02:51 +0200
  Horia Geantă horia.gea...@freescale.com wrote:
 
  On 3/4/2015 2:23 AM, Kim Phillips wrote:
  Only potential problem is getting the crypto API to set the GFP_DMA
  flag in the allocation request, but presumably a
  CRYPTO_TFM_REQ_DMA crt_flag can be made to handle that.
 
  Seems there are quite a few places that do not use the
  {aead,ablkcipher,ahash}_request_alloc() API to allocate crypto requests.
  Among them, IPsec and dm-crypt.
  I've looked at the code and I don't think it can be converted to use
  crypto API.
 
  why not?
 
  It would imply having 2 memory allocations, one for crypto request and
  the other for the rest of the data bundled with the request (for IPsec
  that would be ESN + space for IV + sg entries for authenticated-only
  data and sk_buff extension, if needed).
 
  Trying to have a single allocation by making ESN, IV etc. part of the
  request private context requires modifying tfm.reqsize on the fly.
  This won't work without adding some kind of locking for the tfm.
  
  can't a common minimum tfm.reqsize be co-established up front, at
  least for the fast path?
 
 Indeed, for IPsec at tfm allocation time - esp_init_state() -
 tfm.reqsize could be increased to account for what is known for a given
 flow: ESN, IV and asg (S/G entries for authenticated-only data).
 The layout would be:
 aead request (fixed part)
 private ctx of backend algorithm
 seq_no_hi (if ESN)
 IV
 asg
 sg <-- S/G table for skb_to_sgvec; how many entries is the question
 
 Do you have a suggestion for how many S/G entries to preallocate for
 representing the sk_buff data to be encrypted?
 An ancient esp4.c used ESP_NUM_FAST_SG, set to 4.
 Btw, currently maximum number of fragments supported by the net stack
 (MAX_SKB_FRAGS) is 16 or more.
 
  This means that the CRYPTO_TFM_REQ_DMA would be visible to all of these
  places. Some of the maintainers do not agree, as you've seen.
 
  would modifying the crypto API to either have a different
  *_request_alloc() API, and/or adding calls to negotiate the GFP mask
  between crypto users and drivers, e.g., get/set_gfp_mask, work?
 
  I think what DaveM asked for was the change to be transparent.
 
  Besides converting to *_request_alloc(), seems that all other options
  require some extra awareness from the user.
  Could you elaborate on the idea above?
  
  was merely suggesting communicating GFP flags anonymously across the
  API, i.e., GFP_DMA wouldn't appear in user code.
 
 Meaning user would have to get_gfp_mask before allocating a crypto
 request - i.e. instead of kmalloc(..., GFP_ATOMIC) to have
 kmalloc(GFP_ATOMIC | get_gfp_mask(aead))?
 
  An alternative would be for talitos to use the page allocator to get 1 /
  2 pages at probe time (4 channels x 32 entries/channel x 64B/descriptor
  = 8 kB), dma_map_page the area and manage it internally for talitos_desc
  hw descriptors.
  What do you think?
 
  There's a comment in esp_alloc_tmp(): "Use spare space in skb for
  this where possible", which is ideally where we'd want to be (esp.
 
  Ok, I'll check that. But note the "where possible" - finding room in the
  skb to avoid the allocation won't always be the case, and then we're
  back to square one.
 
 So the skb cb is out of the question, being too small (48B).
 Any idea what was the intention of the TODO - maybe to use the
 tailroom in the skb data area?
 
  because that memory could already be DMA-able).  Your above
  suggestion would be in the opposite direction of that.
 
  The proposal:
  -removes dma (un)mapping on the fast path
  
  sure, but at the expense of additional complexity.
 
 Right, there's no free lunch. But it's cheaper.
 
  -avoids requesting dma mappable memory for more than it's actually
  needed (CRYPTO_TFM_REQ_DMA forces entire request to be mappable, not
  only its private context)
  
  compared to the payload?  Plus, we have plenty of DMA space these
  days.
  
  -for caam it has the added benefit of speeding the below search for the
  offending descriptor in the SW ring from O(n) to O(1):
  for (i = 0; CIRC_CNT(head, tail + i, JOBR_DEPTH) >= 1; i++) {
 sw_idx = (tail + i) & (JOBR_DEPTH - 1);
 
 if (jrp->outring[hw_idx].desc ==
 jrp->entinfo[sw_idx].desc_addr_dma)
 break; /* found */
  }
  (drivers/crypto/caam/jr.c - caam_dequeue)
  
  how?  The job ring h/w will still be spitting things out
  out-of-order.
 
 jrp->outring[hw_idx].desc bus address can be used to find the sw_idx in
 O(1):
 
 dma_addr_t desc_base = dma_map_page(alloc_page(GFP_DMA),...);
 [...]
 sw_idx = (jrp->outring[hw_idx].desc - desc_base) / JD_SIZE;
 
 JD_SIZE would be 16 words (64B) - 13 words used for the h/w job
 descriptor, 3 words can be used for smth. else.

Re: [PATCH -crypto] lib: memzero_explicit: use barrier instead of OPTIMIZER_HIDE_VAR

2015-03-19 Thread Herbert Xu
On Wed, Mar 18, 2015 at 06:47:25PM +0100, Daniel Borkmann wrote:
 From: mancha security manc...@zoho.com
 
 OPTIMIZER_HIDE_VAR(), as defined when using gcc, is insufficient to
 ensure protection from dead store optimization.
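
For context, the change being applied boils down to the following (a
sketch mirroring the shape of lib/string.c's memzero_explicit(), not the
verbatim patch):

#include <linux/compiler.h>
#include <linux/string.h>

void memzero_explicit(void *s, size_t count)
{
	memset(s, 0, count);
	barrier();	/* full memory clobber: the compiler must assume the
			 * zeroed buffer may still be read, so the store
			 * cannot be eliminated as dead */
}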

Patch applied.  Thanks!
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v2 net-next 1/4] net: socket: add support for async operations

2015-03-19 Thread Al Viro
On Mon, Mar 16, 2015 at 09:15:14AM -0700, Tadeusz Struk wrote:
 Add support for async operations.

NAK.  For the same reason as the last time - 

 +static ssize_t sock_aio_read(struct kiocb *iocb, const struct iovec *iov,
 +  unsigned long nr_segs, loff_t loff);
 +static ssize_t sock_aio_write(struct kiocb *iocb, const struct iovec *iov,
 +   unsigned long nr_segs, loff_t loff);
 +

is completely pointless.  Just have sock_read_iter() and sock_write_iter()
check if your new methods are present and use those if those are.

What's more, I'm not at all sure that you want to pass iocb that way -
kernel-side msghdr isn't tied to userland one anymore, so we might as well
stash a pointer to iocb into it.  Voila - no new methods needed at all.
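
A hedged sketch of that suggestion (the msg_iocb field matches the hunk
quoted at the top of this digest; the rest is illustrative, not Al's
actual code):

#include <linux/fs.h>
#include <linux/net.h>
#include <linux/uio.h>

static ssize_t sock_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
	struct file *file = iocb->ki_filp;
	struct socket *sock = file->private_data;
	struct msghdr msg = { .msg_iter = *to };

	/* async callers leave a breadcrumb; sync callers keep msg_iocb NULL */
	if (!is_sync_kiocb(iocb))
		msg.msg_iocb = iocb;

	return sock_recvmsg(sock, &msg, iov_iter_count(to),
			    (file->f_flags & O_NONBLOCK) ? MSG_DONTWAIT : 0);
}

A protocol that finds msg_iocb set can return -EIOCBQUEUED and complete
the iocb later; everyone else falls through to the synchronous path.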


Re: [PATCH v2 5/5] crypto: talitos: Add software backlog queue handling

2015-03-19 Thread Horia Geantă
On 3/18/2015 12:03 AM, Kim Phillips wrote:
 On Tue, 17 Mar 2015 19:58:55 +0200
 Horia Geantă horia.gea...@freescale.com wrote:
 
 On 3/17/2015 2:19 AM, Kim Phillips wrote:
 On Mon, 16 Mar 2015 12:02:51 +0200
 Horia Geantă horia.gea...@freescale.com wrote:

 On 3/4/2015 2:23 AM, Kim Phillips wrote:
 Only potential problem is getting the crypto API to set the GFP_DMA
 flag in the allocation request, but presumably a
 CRYPTO_TFM_REQ_DMA crt_flag can be made to handle that.

 Seems there are quite a few places that do not use the
 {aead,ablkcipher,ahash}_request_alloc() API to allocate crypto requests.
 Among them, IPsec and dm-crypt.
 I've looked at the code and I don't think it can be converted to use
 crypto API.

 why not?

 It would imply having 2 memory allocations, one for crypto request and
 the other for the rest of the data bundled with the request (for IPsec
 that would be ESN + space for IV + sg entries for authenticated-only
 data and sk_buff extension, if needed).

 Trying to have a single allocation by making ESN, IV etc. part of the
 request private context requires modifying tfm.reqsize on the fly.
 This won't work without adding some kind of locking for the tfm.
 
 can't a common minimum tfm.reqsize be co-established up front, at
 least for the fast path?

Indeed, for IPsec at tfm allocation time - esp_init_state() -
tfm.reqsize could be increased to account for what is known for a given
flow: ESN, IV and asg (S/G entries for authenticated-only data).
The layout would be:
aead request (fixed part)
private ctx of backend algorithm
seq_no_hi (if ESN)
IV
asg
sg <-- S/G table for skb_to_sgvec; how many entries is the question

Do you have a suggestion for how many S/G entries to preallocate for
representing the sk_buff data to be encrypted?
An ancient esp4.c used ESP_NUM_FAST_SG, set to 4.
Btw, currently maximum number of fragments supported by the net stack
(MAX_SKB_FRAGS) is 16 or more.
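
A hedged sketch of what that reqsize bump could look like at
esp_init_state() time (x->data holding the aead tfm matches how
esp_init_aead() leaves things; sg_ents/asg_ents are made-up parameters;
crypto_aead_set_reqsize() is a driver-internal helper today, and calling
it from IPsec as done here is precisely the contentious step under
discussion):

#include <crypto/internal/aead.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>
#include <net/xfrm.h>

static void esp_bump_reqsize(struct xfrm_state *x, int sg_ents, int asg_ents)
{
	struct crypto_aead *aead = x->data;
	unsigned int extra = 0;

	if (x->props.flags & XFRM_STATE_ESN)
		extra += sizeof(__be32);		/* seq_no_hi */
	extra += crypto_aead_ivsize(aead);		/* IV */
	extra += sizeof(struct scatterlist) * (asg_ents + sg_ents);

	/* hypothetical use of a driver-side helper; there is no agreed
	 * user-facing way to grow tfm.reqsize yet */
	crypto_aead_set_reqsize(aead, crypto_aead_reqsize(aead) + extra);
}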

 This means that the CRYPTO_TFM_REQ_DMA would be visible to all of these
 places. Some of the maintainers do not agree, as you've seen.

 would modifying the crypto API to either have a different
 *_request_alloc() API, and/or adding calls to negotiate the GFP mask
 between crypto users and drivers, e.g., get/set_gfp_mask, work?

 I think what DaveM asked for was the change to be transparent.

 Besides converting to *_request_alloc(), seems that all other options
 require some extra awareness from the user.
 Could you elaborate on the idea above?
 
 was merely suggesting communicating GFP flags anonymously across the
 API, i.e., GFP_DMA wouldn't appear in user code.

Meaning user would have to get_gfp_mask before allocating a crypto
request - i.e. instead of kmalloc(..., GFP_ATOMIC) to have
kmalloc(GFP_ATOMIC | get_gfp_mask(aead))?

 An alternative would be for talitos to use the page allocator to get 1 /
 2 pages at probe time (4 channels x 32 entries/channel x 64B/descriptor
 = 8 kB), dma_map_page the area and manage it internally for talitos_desc
 hw descriptors.
 What do you think?

 There's a comment in esp_alloc_tmp(): "Use spare space in skb for
 this where possible", which is ideally where we'd want to be (esp.

 Ok, I'll check that. But note the "where possible" - finding room in the
 skb to avoid the allocation won't always be the case, and then we're
 back to square one.

So the skb cb is out of the question, being too small (48B).
Any idea what was the intention of the TODO - maybe to use the
tailroom in the skb data area?

 because that memory could already be DMA-able).  Your above
 suggestion would be in the opposite direction of that.

 The proposal:
 -removes dma (un)mapping on the fast path
 
 sure, but at the expense of additional complexity.

Right, there's no free lunch. But it's cheaper.

 -avoids requesting dma mappable memory for more than it's actually
 needed (CRYPTO_TFM_REQ_DMA forces entire request to be mappable, not
 only its private context)
 
 compared to the payload?  Plus, we have plenty of DMA space these
 days.
 
 -for caam it has the added benefit of speeding the below search for the
 offending descriptor in the SW ring from O(n) to O(1):
 for (i = 0; CIRC_CNT(head, tail + i, JOBR_DEPTH) >= 1; i++) {
  sw_idx = (tail + i) & (JOBR_DEPTH - 1);

  if (jrp->outring[hw_idx].desc ==
  jrp->entinfo[sw_idx].desc_addr_dma)
  break; /* found */
 }
 (drivers/crypto/caam/jr.c - caam_dequeue)
 
 how?  The job ring h/w will still be spitting things out
 out-of-order.

jrp->outring[hw_idx].desc bus address can be used to find the sw_idx in
O(1):

dma_addr_t desc_base = dma_map_page(alloc_page(GFP_DMA),...);
[...]
sw_idx = (jrp->outring[hw_idx].desc - desc_base) / JD_SIZE;

JD_SIZE would be 16 words (64B) - 13 words used for the h/w job
descriptor, 3 words can be used for smth. else.
Basically all JDs would be filled at a 64B-aligned offset in the memory
page.

 Plus, like I said, it's taking the problem in the wrong direction:
 we need to strive to merge the allocation 

Re: [PATCH 07/10] ARM: dts: n9/n950: Enable omap crypto support

2015-03-19 Thread Tony Lindgren
* Pavel Machek pa...@ucw.cz [150228 08:45]:
 On Thu 2015-02-26 14:49:57, Pali Rohár wrote:
  The Harmattan system on Nokia N9 and N950 devices uses omap crypto support.
  The bootloader on those devices is known to enable HW crypto support.
  This patch just includes omap36xx.dtsi directly, so aes and sham are enabled.
  
  Signed-off-by: Pali Rohár pali.ro...@gmail.com
 
 Acked-by: Pavel Machek pa...@ucw.cz

I'm picking patches 7-10 into the omap-for-v4.1/dt branch, thanks.

Tony


[PATCH] crypto: img-hash: Fix Kconfig selections

2015-03-19 Thread James Hartley
The Kconfig entry for CRYPTO_DEV_IMGTEC_HASH incorrectly selects
CRYPTO_SHA224, which does not exist (SHA-224 is provided by CRYPTO_SHA256,
which covers both the 224- and 256-bit variants). Remove it.

Also correct the typo CRYPTO_ALG_API to CRYPTO_ALGAPI.

Reported-by: Valentin Rothberg valentinrothb...@gmail.com
Signed-off-by: James Hartley james.hart...@imgtec.com
---
 drivers/crypto/Kconfig |3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 8b18b66..800bf41 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -448,10 +448,9 @@ source "drivers/crypto/vmx/Kconfig"
 config CRYPTO_DEV_IMGTEC_HASH
depends on MIPS || COMPILE_TEST
tristate "Imagination Technologies hardware hash accelerator"
-   select CRYPTO_ALG_API
+   select CRYPTO_ALGAPI
select CRYPTO_MD5
select CRYPTO_SHA1
-   select CRYPTO_SHA224
select CRYPTO_SHA256
select CRYPTO_HASH
help
-- 
1.7.9.5
