Re: [PATCH v5 05/19] crypto: introduce crypto wait for async op

2017-08-14 Thread Jonathan Cameron
On Mon, 14 Aug 2017 18:21:15 +0300
Gilad Ben-Yossef  wrote:

> Invoking a possibly async. crypto op and waiting for completion
> while correctly handling backlog processing is a common task
> both in the crypto API implementation and for outside users of it.
> 
> This patch adds a generic implementation for doing so in
> preparation for using it across the board instead of hand
> rolled versions.
> 
> Signed-off-by: Gilad Ben-Yossef 
> CC: Eric Biggers 

One really trivial point inline I happened to notice.

> ---
>  crypto/api.c   | 13 +
>  include/linux/crypto.h | 41 +
>  2 files changed, 54 insertions(+)
> 
> diff --git a/crypto/api.c b/crypto/api.c
> index 941cd4c..2a2479d 100644
> --- a/crypto/api.c
> +++ b/crypto/api.c
> @@ -24,6 +24,7 @@
>  #include 
>  #include 
>  #include 
> +#include <linux/completion.h>
>  #include "internal.h"
>  
>  LIST_HEAD(crypto_alg_list);
> @@ -595,5 +596,17 @@ int crypto_has_alg(const char *name, u32 type, u32 mask)
>  }
>  EXPORT_SYMBOL_GPL(crypto_has_alg);
>  
> +void crypto_req_done(struct crypto_async_request *req, int err)
> +{
> + struct crypto_wait *wait = req->data;
> +
> + if (err == -EINPROGRESS)
> + return;
> +
> + wait->err = err;
> + complete(&wait->completion);
> +}
> +EXPORT_SYMBOL_GPL(crypto_req_done);
> +
>  MODULE_DESCRIPTION("Cryptographic core API");
>  MODULE_LICENSE("GPL");
> diff --git a/include/linux/crypto.h b/include/linux/crypto.h
> index 84da997..bb00186 100644
> --- a/include/linux/crypto.h
> +++ b/include/linux/crypto.h
> @@ -24,6 +24,7 @@
>  #include 
>  #include 
>  #include 
> +#include <linux/completion.h>
>  
>  /*
>   * Autoloaded crypto modules should only use a prefixed name to avoid allowing
> @@ -468,6 +469,45 @@ struct crypto_alg {
>  } CRYPTO_MINALIGN_ATTR;
>  
>  /*
> + * A helper struct for waiting for completion of async crypto ops
> + */
> +struct crypto_wait {
> + struct completion completion;
> + int err;
> +};
> +
> +/*
> + * Macro for declaring a crypto op async wait object on stack
> + */
> +#define DECLARE_CRYPTO_WAIT(_wait) \
> + struct crypto_wait _wait = { \
> + COMPLETION_INITIALIZER_ONSTACK((_wait).completion), 0 }
> +
> +/*
> + * Async ops completion helper functions
> + */
> +void crypto_req_done(struct crypto_async_request *req, int err);
> +
> +static inline int crypto_wait_req(int err, struct crypto_wait *wait)
> +{
> + switch (err) {
> + case -EINPROGRESS:
> + case -EBUSY:
> + wait_for_completion(&wait->completion);
> + reinit_completion(&wait->completion);
> + err = wait->err;
> + break;
> + };
> +
> + return err;
> +}
> +
> +static inline void crypto_init_wait(struct crypto_wait *wait)
> +{
> + init_completion(&wait->completion);
> +}
> +
> +/*
>   * Algorithm registration interface.
>   */
>  int crypto_register_alg(struct crypto_alg *alg);
> @@ -1604,5 +1644,6 @@ static inline int crypto_comp_decompress(struct crypto_comp *tfm,
>   src, slen, dst, dlen);
>  }
>  
> +

Trivial observation.  Shouldn't have this bonus blank line inserted here.

>  #endif   /* _LINUX_CRYPTO_H */
>  



Re: random.c: LFSR polynomials are not irreducible/primitive

2017-08-14 Thread Theodore Ts'o
On Mon, Aug 14, 2017 at 10:20:18AM +0200, Stephan Mueller wrote:
> Hi Ted,
> 
> drivers/char/random.c contains the following comment:
> 
> """
>  * Our mixing functions were analyzed by Lacharme, Roeck, Strubel, and
>  * Videau in their paper, "The Linux Pseudorandom Number Generator
>  * Revisited" (see: http://eprint.iacr.org/2012/251.pdf).  In their
>  * paper, they point out that we are not using a true Twisted GFSR,
>  * since Matsumoto & Kurita used a trinomial feedback polynomial (that
>  * is, with only three taps, instead of the six that we are using).
>  * As a result, the resulting polynomial is neither primitive nor
>  * irreducible, and hence does not have a maximal period over
>  * GF(2**32).  They suggest a slight change to the generator
>  * polynomial which improves the resulting TGFSR polynomial to be
>  * irreducible, which we have made here.
> """
> 
> This comment leads me to believe that the current polynomial is primitive (and 
> irreducible).
> 
> Strangely, this is not the case as seen with the following code that can be 
> used with the mathematical tool called magma. There is a free online version 
> of magma available to recheck it: http://magma.maths.usyd.edu.au/calc/
> 
> Note, the polynomials used up till 3.12 were primitive and irreducible.
> 
> Could you please help me understand why the current polynomials are better 
> than the old ones?

Have you looked at section 3.1.1 of the above cited paper?

http://eprint.iacr.org/2012/251.pdf

- Ted


Re: [PATCH v5 02/19] crypto: ccp: use -EAGAIN for transient busy indication

2017-08-14 Thread Gary R Hook

On 08/14/2017 10:21 AM, Gilad Ben-Yossef wrote:

Replace -EBUSY with -EAGAIN when reporting transient busy
indication in the absence of backlog.

Signed-off-by: Gilad Ben-Yossef 


Reviewed-by: Gary R Hook 


---
 drivers/crypto/ccp/ccp-crypto-main.c | 8 +++-
 drivers/crypto/ccp/ccp-dev.c | 7 +--
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-main.c b/drivers/crypto/ccp/ccp-crypto-main.c
index 35a9de7..403ff0a 100644
--- a/drivers/crypto/ccp/ccp-crypto-main.c
+++ b/drivers/crypto/ccp/ccp-crypto-main.c
@@ -222,9 +222,10 @@ static int ccp_crypto_enqueue_cmd(struct ccp_crypto_cmd *crypto_cmd)

/* Check if the cmd can/should be queued */
if (req_queue.cmd_count >= CCP_CRYPTO_MAX_QLEN) {
-   ret = -EBUSY;
-   if (!(crypto_cmd->cmd->flags & CCP_CMD_MAY_BACKLOG))
+   if (!(crypto_cmd->cmd->flags & CCP_CMD_MAY_BACKLOG)) {
+   ret = -EAGAIN;
goto e_lock;
+   }
}

/* Look for an entry with the same tfm.  If there is a cmd
@@ -243,9 +244,6 @@ static int ccp_crypto_enqueue_cmd(struct ccp_crypto_cmd *crypto_cmd)
ret = ccp_enqueue_cmd(crypto_cmd->cmd);
if (!ccp_crypto_success(ret))
goto e_lock;/* Error, don't queue it */
-   if ((ret == -EBUSY) &&
-   !(crypto_cmd->cmd->flags & CCP_CMD_MAY_BACKLOG))
-   goto e_lock;/* Not backlogging, don't queue it */
}

if (req_queue.cmd_count >= CCP_CRYPTO_MAX_QLEN) {
diff --git a/drivers/crypto/ccp/ccp-dev.c b/drivers/crypto/ccp/ccp-dev.c
index 4e029b1..3d637e3 100644
--- a/drivers/crypto/ccp/ccp-dev.c
+++ b/drivers/crypto/ccp/ccp-dev.c
@@ -292,9 +292,12 @@ int ccp_enqueue_cmd(struct ccp_cmd *cmd)
i = ccp->cmd_q_count;

if (ccp->cmd_count >= MAX_CMD_QLEN) {
-   ret = -EBUSY;
-   if (cmd->flags & CCP_CMD_MAY_BACKLOG)
+   if (cmd->flags & CCP_CMD_MAY_BACKLOG) {
+   ret = -EBUSY;
list_add_tail(&cmd->entry, &ccp->backlog);
+   } else {
+   ret = -EAGAIN;
+   }
} else {
ret = -EINPROGRESS;
ccp->cmd_count++;



[PATCH v5 01/19] crypto: change transient busy return code to -EAGAIN

2017-08-14 Thread Gilad Ben-Yossef
The crypto API was using the -EBUSY return value to indicate
both a hard failure to submit a crypto operation into a
transformation provider when the latter was busy and the backlog
mechanism was not enabled as well as a notification that the
operation was queued into the backlog when the backlog mechanism
was enabled.

Having the same return code indicate two very different conditions
depending on a flag is both error prone and requires extra runtime
check like the following to discern between the cases:

if (err == -EINPROGRESS ||
(err == -EBUSY && (ahash_request_flags(req) &
   CRYPTO_TFM_REQ_MAY_BACKLOG)))

This patch changes the return code used to indicate a crypto op
failed due to the transformation provider being transiently busy
to -EAGAIN.
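
To make the change concrete, here is an illustrative before/after
sketch of the caller-side check (the simplified form is what the
follow-up patches in this series switch callers to):

    err = crypto_ahash_digest(req);

    /* before: -EBUSY was ambiguous without inspecting the backlog flag */
    if (err == -EINPROGRESS ||
        (err == -EBUSY && (ahash_request_flags(req) &
                           CRYPTO_TFM_REQ_MAY_BACKLOG)))
            ; /* request was queued, wait for its completion */

    /* after: -EBUSY always means "queued to backlog" and -EAGAIN
     * means "provider busy, request not queued" */
    if (err == -EINPROGRESS || err == -EBUSY)
            ; /* request was queued, wait for its completion */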

Signed-off-by: Gilad Ben-Yossef 
---
 crypto/algapi.c |  6 --
 crypto/algif_hash.c | 20 +---
 crypto/cryptd.c |  4 +---
 3 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/crypto/algapi.c b/crypto/algapi.c
index aa699ff..916bee3 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -897,9 +897,11 @@ int crypto_enqueue_request(struct crypto_queue *queue,
int err = -EINPROGRESS;
 
if (unlikely(queue->qlen >= queue->max_qlen)) {
-   err = -EBUSY;
-   if (!(request->flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+   if (!(request->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) {
+   err = -EAGAIN;
goto out;
+   }
+   err = -EBUSY;
if (queue->backlog == &queue->list)
queue->backlog = &request->list;
}
diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
index 5e92bd2..3b3c154 100644
--- a/crypto/algif_hash.c
+++ b/crypto/algif_hash.c
@@ -39,6 +39,20 @@ struct algif_hash_tfm {
bool has_key;
 };
 
+/* Previous versions of crypto_* ops used to return -EBUSY
+ * rather than -EAGAIN to indicate being tied up. The in
+ * kernel API changed but we don't want to break the user
+ * space API. As only the hash user interface exposed this
+ * error ever to the user, do the translation here.
+ */
+static inline int crypto_user_err(int err)
+{
+   if (err == -EAGAIN)
+   return -EBUSY;
+
+   return err;
+}
+
 static int hash_alloc_result(struct sock *sk, struct hash_ctx *ctx)
 {
unsigned ds;
@@ -136,7 +150,7 @@ static int hash_sendmsg(struct socket *sock, struct msghdr *msg,
 unlock:
release_sock(sk);
 
-   return err ?: copied;
+   return err ? crypto_user_err(err) : copied;
 }
 
 static ssize_t hash_sendpage(struct socket *sock, struct page *page,
@@ -188,7 +202,7 @@ static ssize_t hash_sendpage(struct socket *sock, struct page *page,
 unlock:
release_sock(sk);
 
-   return err ?: size;
+   return err ? crypto_user_err(err) : size;
 }
 
 static int hash_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
@@ -236,7 +250,7 @@ static int hash_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
hash_free_result(sk, ctx);
release_sock(sk);
 
-   return err ?: len;
+   return err ? crypto_user_err(err) : len;
 }
 
 static int hash_accept(struct socket *sock, struct socket *newsock, int flags,
diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index 0508c48..d1dbdce 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -137,16 +137,14 @@ static int cryptd_enqueue_request(struct cryptd_queue *queue,
int cpu, err;
struct cryptd_cpu_queue *cpu_queue;
atomic_t *refcnt;
-   bool may_backlog;
 
cpu = get_cpu();
cpu_queue = this_cpu_ptr(queue->cpu_queue);
err = crypto_enqueue_request(&cpu_queue->queue, request);
 
refcnt = crypto_tfm_ctx(request->tfm);
-   may_backlog = request->flags & CRYPTO_TFM_REQ_MAY_BACKLOG;
 
-   if (err == -EBUSY && !may_backlog)
+   if (err == -EAGAIN)
goto out_put_cpu;
 
queue_work_on(cpu, kcrypto_wq, &cpu_queue->work);
-- 
2.1.4



[PATCH v5 02/19] crypto: ccp: use -EAGAIN for transient busy indication

2017-08-14 Thread Gilad Ben-Yossef
Replace -EBUSY with -EAGAIN when reporting transient busy
indication in the absence of backlog.
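
For illustration only, a condensed sketch of the enqueue convention
the series converges on (queue_full, backlog_enqueue() and enqueue()
are hypothetical stand-ins for the driver's own queue handling):

    if (queue_full) {
            if (!(cmd->flags & CCP_CMD_MAY_BACKLOG))
                    return -EAGAIN;   /* transiently busy, not queued */
            backlog_enqueue(cmd);     /* hypothetical helper */
            return -EBUSY;            /* queued to the backlog */
    }
    enqueue(cmd);                     /* hypothetical helper */
    return -EINPROGRESS;              /* accepted for async processing */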

Signed-off-by: Gilad Ben-Yossef 
---
 drivers/crypto/ccp/ccp-crypto-main.c | 8 +++-
 drivers/crypto/ccp/ccp-dev.c | 7 +--
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-main.c b/drivers/crypto/ccp/ccp-crypto-main.c
index 35a9de7..403ff0a 100644
--- a/drivers/crypto/ccp/ccp-crypto-main.c
+++ b/drivers/crypto/ccp/ccp-crypto-main.c
@@ -222,9 +222,10 @@ static int ccp_crypto_enqueue_cmd(struct ccp_crypto_cmd *crypto_cmd)
 
/* Check if the cmd can/should be queued */
if (req_queue.cmd_count >= CCP_CRYPTO_MAX_QLEN) {
-   ret = -EBUSY;
-   if (!(crypto_cmd->cmd->flags & CCP_CMD_MAY_BACKLOG))
+   if (!(crypto_cmd->cmd->flags & CCP_CMD_MAY_BACKLOG)) {
+   ret = -EAGAIN;
goto e_lock;
+   }
}
 
/* Look for an entry with the same tfm.  If there is a cmd
@@ -243,9 +244,6 @@ static int ccp_crypto_enqueue_cmd(struct ccp_crypto_cmd *crypto_cmd)
ret = ccp_enqueue_cmd(crypto_cmd->cmd);
if (!ccp_crypto_success(ret))
goto e_lock;/* Error, don't queue it */
-   if ((ret == -EBUSY) &&
-   !(crypto_cmd->cmd->flags & CCP_CMD_MAY_BACKLOG))
-   goto e_lock;/* Not backlogging, don't queue it */
}
 
if (req_queue.cmd_count >= CCP_CRYPTO_MAX_QLEN) {
diff --git a/drivers/crypto/ccp/ccp-dev.c b/drivers/crypto/ccp/ccp-dev.c
index 4e029b1..3d637e3 100644
--- a/drivers/crypto/ccp/ccp-dev.c
+++ b/drivers/crypto/ccp/ccp-dev.c
@@ -292,9 +292,12 @@ int ccp_enqueue_cmd(struct ccp_cmd *cmd)
i = ccp->cmd_q_count;
 
if (ccp->cmd_count >= MAX_CMD_QLEN) {
-   ret = -EBUSY;
-   if (cmd->flags & CCP_CMD_MAY_BACKLOG)
+   if (cmd->flags & CCP_CMD_MAY_BACKLOG) {
+   ret = -EBUSY;
list_add_tail(&cmd->entry, &ccp->backlog);
+   } else {
+   ret = -EAGAIN;
+   }
} else {
ret = -EINPROGRESS;
ccp->cmd_count++;
-- 
2.1.4



[PATCH v5 04/19] crypto: marvell/cesa: remove redundant backlog checks on EBUSY

2017-08-14 Thread Gilad Ben-Yossef
Now that -EBUSY return code only indicates backlog queueing
we can safely remove the now redundant check for the
CRYPTO_TFM_REQ_MAY_BACKLOG flag when -EBUSY is returned.

Signed-off-by: Gilad Ben-Yossef 
---
 drivers/crypto/marvell/cesa.c | 3 +--
 drivers/crypto/marvell/cesa.h | 2 +-
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
index 6e7a5c7..269737f 100644
--- a/drivers/crypto/marvell/cesa.c
+++ b/drivers/crypto/marvell/cesa.c
@@ -183,8 +183,7 @@ int mv_cesa_queue_req(struct crypto_async_request *req,
spin_lock_bh(&engine->lock);
ret = crypto_enqueue_request(&engine->queue, req);
if ((mv_cesa_req_get_type(creq) == CESA_DMA_REQ) &&
-   (ret == -EINPROGRESS ||
-   (ret == -EBUSY && req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)))
+   (ret == -EINPROGRESS || ret == -EBUSY))
mv_cesa_tdma_chain(engine, creq);
spin_unlock_bh(&engine->lock);
 
diff --git a/drivers/crypto/marvell/cesa.h b/drivers/crypto/marvell/cesa.h
index b7872f6..63c8457 100644
--- a/drivers/crypto/marvell/cesa.h
+++ b/drivers/crypto/marvell/cesa.h
@@ -763,7 +763,7 @@ static inline int mv_cesa_req_needs_cleanup(struct crypto_async_request *req,
 * the backlog and will be processed later. There's no need to
 * clean it up.
 */
-   if (ret == -EBUSY && req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
+   if (ret == -EBUSY)
return false;
 
/* Request wasn't queued, we need to clean it up */
-- 
2.1.4



[PATCH v5 03/19] crypto: remove redundant backlog checks on EBUSY

2017-08-14 Thread Gilad Ben-Yossef
Now that -EBUSY return code only indicates backlog queueing
we can safely remove the now redundant check for the
CRYPTO_TFM_REQ_MAY_BACKLOG flag when -EBUSY is returned.

Signed-off-by: Gilad Ben-Yossef 
---
 crypto/ahash.c| 12 +++-
 crypto/cts.c  |  6 ++
 crypto/lrw.c  |  8 ++--
 crypto/rsa-pkcs1pad.c | 16 
 crypto/xts.c  |  8 ++--
 5 files changed, 13 insertions(+), 37 deletions(-)

diff --git a/crypto/ahash.c b/crypto/ahash.c
index 826cd7a..d63eeef 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -334,9 +334,7 @@ static int ahash_op_unaligned(struct ahash_request *req,
return err;
 
err = op(req);
-   if (err == -EINPROGRESS ||
-   (err == -EBUSY && (ahash_request_flags(req) &
-  CRYPTO_TFM_REQ_MAY_BACKLOG)))
+   if (err == -EINPROGRESS || err == -EBUSY)
return err;
 
ahash_restore_req(req, err);
@@ -394,9 +392,7 @@ static int ahash_def_finup_finish1(struct ahash_request *req, int err)
req->base.complete = ahash_def_finup_done2;
 
err = crypto_ahash_reqtfm(req)->final(req);
-   if (err == -EINPROGRESS ||
-   (err == -EBUSY && (ahash_request_flags(req) &
-  CRYPTO_TFM_REQ_MAY_BACKLOG)))
+   if (err == -EINPROGRESS || err == -EBUSY)
return err;
 
 out:
@@ -432,9 +428,7 @@ static int ahash_def_finup(struct ahash_request *req)
return err;
 
err = tfm->update(req);
-   if (err == -EINPROGRESS ||
-   (err == -EBUSY && (ahash_request_flags(req) &
-  CRYPTO_TFM_REQ_MAY_BACKLOG)))
+   if (err == -EINPROGRESS || err == -EBUSY)
return err;
 
return ahash_def_finup_finish1(req, err);
diff --git a/crypto/cts.c b/crypto/cts.c
index 243f591..4773c18 100644
--- a/crypto/cts.c
+++ b/crypto/cts.c
@@ -136,8 +136,7 @@ static void crypto_cts_encrypt_done(struct crypto_async_request *areq, int err)
goto out;
 
err = cts_cbc_encrypt(req);
-   if (err == -EINPROGRESS ||
-   (err == -EBUSY && req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+   if (err == -EINPROGRESS || err == -EBUSY)
return;
 
 out:
@@ -229,8 +228,7 @@ static void crypto_cts_decrypt_done(struct crypto_async_request *areq, int err)
goto out;
 
err = cts_cbc_decrypt(req);
-   if (err == -EINPROGRESS ||
-   (err == -EBUSY && req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+   if (err == -EINPROGRESS || err == -EBUSY)
return;
 
 out:
diff --git a/crypto/lrw.c b/crypto/lrw.c
index a8bfae4..695cea9 100644
--- a/crypto/lrw.c
+++ b/crypto/lrw.c
@@ -328,9 +328,7 @@ static int do_encrypt(struct skcipher_request *req, int err)
  crypto_skcipher_encrypt(subreq) ?:
  post_crypt(req);
 
-   if (err == -EINPROGRESS ||
-   (err == -EBUSY &&
-req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+   if (err == -EINPROGRESS || err == -EBUSY)
return err;
}
 
@@ -380,9 +378,7 @@ static int do_decrypt(struct skcipher_request *req, int err)
  crypto_skcipher_decrypt(subreq) ?:
  post_crypt(req);
 
-   if (err == -EINPROGRESS ||
-   (err == -EBUSY &&
-req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+   if (err == -EINPROGRESS || err == -EBUSY)
return err;
}
 
diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
index 407c64b..2908f93 100644
--- a/crypto/rsa-pkcs1pad.c
+++ b/crypto/rsa-pkcs1pad.c
@@ -279,9 +279,7 @@ static int pkcs1pad_encrypt(struct akcipher_request *req)
   req->dst, ctx->key_size - 1, req->dst_len);
 
err = crypto_akcipher_encrypt(&req_ctx->child_req);
-   if (err != -EINPROGRESS &&
-   (err != -EBUSY ||
-!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)))
+   if (err != -EINPROGRESS && err != -EBUSY)
return pkcs1pad_encrypt_sign_complete(req, err);
 
return err;
@@ -383,9 +381,7 @@ static int pkcs1pad_decrypt(struct akcipher_request *req)
   ctx->key_size);
 
err = crypto_akcipher_decrypt(&req_ctx->child_req);
-   if (err != -EINPROGRESS &&
-   (err != -EBUSY ||
-!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)))
+   if (err != -EINPROGRESS && err != -EBUSY)
return pkcs1pad_decrypt_complete(req, err);
 
return err;
@@ -440,9 +436,7 @@ static int pkcs1pad_sign(struct akcipher_request *req)
   req->dst, ctx->key_size - 1, req->dst_len);
 
err = crypto_akcipher_sign(&req_ctx->child_req);
-   if 

[PATCH v5 05/19] crypto: introduce crypto wait for async op

2017-08-14 Thread Gilad Ben-Yossef
Invoking a possibly async. crypto op and waiting for completion
while correctly handling backlog processing is a common task
both in the crypto API implementation and for outside users of it.

This patch adds a generic implementation for doing so in
preparation for using it across the board instead of hand
rolled versions.
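
By way of illustration, the caller-side pattern this enables (as used
by the conversions later in this series) looks like:

    DECLARE_CRYPTO_WAIT(wait);

    skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                  crypto_req_done, &wait);
    /* crypto_wait_req() absorbs both the async (-EINPROGRESS) and the
     * backlogged (-EBUSY) cases and returns the op's final status */
    err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);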

Signed-off-by: Gilad Ben-Yossef 
CC: Eric Biggers 
---
 crypto/api.c   | 13 +
 include/linux/crypto.h | 41 +
 2 files changed, 54 insertions(+)

diff --git a/crypto/api.c b/crypto/api.c
index 941cd4c..2a2479d 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -24,6 +24,7 @@
 #include 
 #include 
 #include 
+#include <linux/completion.h>
 #include "internal.h"
 
 LIST_HEAD(crypto_alg_list);
@@ -595,5 +596,17 @@ int crypto_has_alg(const char *name, u32 type, u32 mask)
 }
 EXPORT_SYMBOL_GPL(crypto_has_alg);
 
+void crypto_req_done(struct crypto_async_request *req, int err)
+{
+   struct crypto_wait *wait = req->data;
+
+   if (err == -EINPROGRESS)
+   return;
+
+   wait->err = err;
+   complete(&wait->completion);
+}
+EXPORT_SYMBOL_GPL(crypto_req_done);
+
 MODULE_DESCRIPTION("Cryptographic core API");
 MODULE_LICENSE("GPL");
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 84da997..bb00186 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -24,6 +24,7 @@
 #include 
 #include 
 #include 
+#include <linux/completion.h>
 
 /*
  * Autoloaded crypto modules should only use a prefixed name to avoid allowing
@@ -468,6 +469,45 @@ struct crypto_alg {
 } CRYPTO_MINALIGN_ATTR;
 
 /*
+ * A helper struct for waiting for completion of async crypto ops
+ */
+struct crypto_wait {
+   struct completion completion;
+   int err;
+};
+
+/*
+ * Macro for declaring a crypto op async wait object on stack
+ */
+#define DECLARE_CRYPTO_WAIT(_wait) \
+   struct crypto_wait _wait = { \
+   COMPLETION_INITIALIZER_ONSTACK((_wait).completion), 0 }
+
+/*
+ * Async ops completion helper functions
+ */
+void crypto_req_done(struct crypto_async_request *req, int err);
+
+static inline int crypto_wait_req(int err, struct crypto_wait *wait)
+{
+   switch (err) {
+   case -EINPROGRESS:
+   case -EBUSY:
+   wait_for_completion(&wait->completion);
+   reinit_completion(&wait->completion);
+   err = wait->err;
+   break;
+   };
+
+   return err;
+}
+
+static inline void crypto_init_wait(struct crypto_wait *wait)
+{
+   init_completion(&wait->completion);
+}
+
+/*
  * Algorithm registration interface.
  */
 int crypto_register_alg(struct crypto_alg *alg);
@@ -1604,5 +1644,6 @@ static inline int crypto_comp_decompress(struct crypto_comp *tfm,
src, slen, dst, dlen);
 }
 
+
 #endif /* _LINUX_CRYPTO_H */
 
-- 
2.1.4



[PATCH v5 06/19] crypto: move algif to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
algif starts several async crypto ops and waits for their completion.
Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef 
---
 crypto/af_alg.c | 27 ---
 crypto/algif_aead.c |  8 
 crypto/algif_hash.c | 30 ++
 crypto/algif_skcipher.c |  9 -
 include/crypto/if_alg.h | 15 +--
 5 files changed, 23 insertions(+), 66 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index d6936c0..f8917e7 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -481,33 +481,6 @@ int af_alg_cmsg_send(struct msghdr *msg, struct af_alg_control *con)
 }
 EXPORT_SYMBOL_GPL(af_alg_cmsg_send);
 
-int af_alg_wait_for_completion(int err, struct af_alg_completion *completion)
-{
-   switch (err) {
-   case -EINPROGRESS:
-   case -EBUSY:
-   wait_for_completion(&completion->completion);
-   reinit_completion(&completion->completion);
-   err = completion->err;
-   break;
-   };
-
-   return err;
-}
-EXPORT_SYMBOL_GPL(af_alg_wait_for_completion);
-
-void af_alg_complete(struct crypto_async_request *req, int err)
-{
-   struct af_alg_completion *completion = req->data;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   completion->err = err;
-   complete(&completion->completion);
-}
-EXPORT_SYMBOL_GPL(af_alg_complete);
-
 /**
  * af_alg_alloc_tsgl - allocate the TX SGL
  *
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 48d46e7..abbac8a 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -278,11 +278,11 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
/* Synchronous operation */
aead_request_set_callback(&ctx->cra_u.aead_req,
  CRYPTO_TFM_REQ_MAY_BACKLOG,
- af_alg_complete, &ctx->completion);
-   err = af_alg_wait_for_completion(ctx->enc ?
+ crypto_req_done, &ctx->wait);
+   err = crypto_wait_req(ctx->enc ?
crypto_aead_encrypt(&ctx->cra_u.aead_req) :
crypto_aead_decrypt(&ctx->cra_u.aead_req),
-   &ctx->completion);
+   &ctx->wait);
}
 
/* AIO operation in progress */
@@ -554,7 +554,7 @@ static int aead_accept_parent_nokey(void *private, struct sock *sk)
ctx->merge = 0;
ctx->enc = 0;
ctx->aead_assoclen = 0;
-   af_alg_init_completion(&ctx->completion);
+   crypto_init_wait(&ctx->wait);
 
ask->private = ctx;
 
diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
index 3b3c154..d2ab8de 100644
--- a/crypto/algif_hash.c
+++ b/crypto/algif_hash.c
@@ -26,7 +26,7 @@ struct hash_ctx {
 
u8 *result;
 
-   struct af_alg_completion completion;
+   struct crypto_wait wait;
 
unsigned int len;
bool more;
@@ -102,8 +102,7 @@ static int hash_sendmsg(struct socket *sock, struct msghdr *msg,
if ((msg->msg_flags & MSG_MORE))
hash_free_result(sk, ctx);
 
-   err = af_alg_wait_for_completion(crypto_ahash_init(&ctx->req),
-   &ctx->completion);
+   err = crypto_wait_req(crypto_ahash_init(&ctx->req), &ctx->wait);
if (err)
goto unlock;
}
@@ -124,8 +123,8 @@ static int hash_sendmsg(struct socket *sock, struct msghdr *msg,
 
ahash_request_set_crypt(&ctx->req, ctx->sgl.sg, NULL, len);

-   err = af_alg_wait_for_completion(crypto_ahash_update(&ctx->req),
-   &ctx->completion);
+   err = crypto_wait_req(crypto_ahash_update(&ctx->req),
+ &ctx->wait);
af_alg_free_sg(&ctx->sgl);
if (err)
goto unlock;
@@ -143,8 +142,8 @@ static int hash_sendmsg(struct socket *sock, struct msghdr *msg,
goto unlock;
 
ahash_request_set_crypt(&ctx->req, NULL, ctx->result, 0);
-   err = af_alg_wait_for_completion(crypto_ahash_final(&ctx->req),
-   &ctx->completion);
+   err = crypto_wait_req(crypto_ahash_final(&ctx->req),
+ &ctx->wait);
}
 
 unlock:
@@ -185,7 +184,7 @@ static ssize_t hash_sendpage(struct socket *sock, struct page *page,
} else {
if (!ctx->more) {
err = crypto_ahash_init(&ctx->req);
-   err = af_alg_wait_for_completion(err, &ctx->completion);
+   err = crypto_wait_req(err, &ctx->wait);
if (err)
goto unlock;
}
@@ -193,7 +192,7 @@ static ssize_t hash_sendpage(struct socket *sock, struct page *page,
err = 

[PATCH v5 08/19] crypto: move drbg to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
DRBG is starting an async. crypto op and waiting for it to complete.
Move it over to generic code doing the same.

The code now also passes the CRYPTO_TFM_REQ_MAY_SLEEP flag, indicating
that crypto request memory allocation may use GFP_KERNEL. This should
be perfectly fine, as the code is obviously sleeping for the
completion of the request anyway.

Signed-off-by: Gilad Ben-Yossef 
---
 crypto/drbg.c | 36 +---
 include/crypto/drbg.h |  3 +--
 2 files changed, 10 insertions(+), 29 deletions(-)

diff --git a/crypto/drbg.c b/crypto/drbg.c
index 633a88e..c522251 100644
--- a/crypto/drbg.c
+++ b/crypto/drbg.c
@@ -1651,16 +1651,6 @@ static int drbg_fini_sym_kernel(struct drbg_state *drbg)
return 0;
 }
 
-static void drbg_skcipher_cb(struct crypto_async_request *req, int error)
-{
-   struct drbg_state *drbg = req->data;
-
-   if (error == -EINPROGRESS)
-   return;
-   drbg->ctr_async_err = error;
-   complete(&drbg->ctr_completion);
-}
-
 static int drbg_init_sym_kernel(struct drbg_state *drbg)
 {
struct crypto_cipher *tfm;
@@ -1691,7 +1681,7 @@ static int drbg_init_sym_kernel(struct drbg_state *drbg)
return PTR_ERR(sk_tfm);
}
drbg->ctr_handle = sk_tfm;
-   init_completion(&drbg->ctr_completion);
+   crypto_init_wait(&drbg->ctr_wait);
 
req = skcipher_request_alloc(sk_tfm, GFP_KERNEL);
if (!req) {
@@ -1700,8 +1690,9 @@ static int drbg_init_sym_kernel(struct drbg_state *drbg)
return -ENOMEM;
}
drbg->ctr_req = req;
-   skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-   drbg_skcipher_cb, drbg);
+   skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
+   CRYPTO_TFM_REQ_MAY_SLEEP,
+   crypto_req_done, &drbg->ctr_wait);
 
alignmask = crypto_skcipher_alignmask(sk_tfm);
drbg->ctr_null_value_buf = kzalloc(DRBG_CTR_NULL_LEN + alignmask,
@@ -1762,21 +1753,12 @@ static int drbg_kcapi_sym_ctr(struct drbg_state *drbg,
/* Output buffer may not be valid for SGL, use scratchpad */
skcipher_request_set_crypt(drbg->ctr_req, &sg_in, &sg_out,
   cryptlen, drbg->V);
-   ret = crypto_skcipher_encrypt(drbg->ctr_req);
-   switch (ret) {
-   case 0:
-   break;
-   case -EINPROGRESS:
-   case -EBUSY:
-   wait_for_completion(&drbg->ctr_completion);
-   if (!drbg->ctr_async_err) {
-   reinit_completion(&drbg->ctr_completion);
-   break;
-   }
-   default:
+   ret = crypto_wait_req(crypto_skcipher_encrypt(drbg->ctr_req),
+   &drbg->ctr_wait);
+   if (ret)
goto out;
-   }
-   init_completion(&drbg->ctr_completion);
+
+   crypto_init_wait(&drbg->ctr_wait);
 
memcpy(outbuf, drbg->outscratchpad, cryptlen);
 
diff --git a/include/crypto/drbg.h b/include/crypto/drbg.h
index 22f884c..8f94110 100644
--- a/include/crypto/drbg.h
+++ b/include/crypto/drbg.h
@@ -126,8 +126,7 @@ struct drbg_state {
__u8 *ctr_null_value;   /* CTR mode aligned zero buf */
__u8 *outscratchpadbuf; /* CTR mode output scratchpad */
 __u8 *outscratchpad;   /* CTR mode aligned outbuf */
-   struct completion ctr_completion;   /* CTR mode async handler */
-   int ctr_async_err;  /* CTR mode async error */
+   struct crypto_wait ctr_wait;/* CTR mode async wait obj */
 
bool seeded;/* DRBG fully seeded? */
bool pr;/* Prediction resistance enabled? */
-- 
2.1.4



[PATCH v5 10/19] crypto: move testmgr to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
testmgr is starting async. crypto ops and waiting for them to complete.
Move it over to generic code doing the same.

This also provides a test of the generic crypto async. wait code.

Signed-off-by: Gilad Ben-Yossef 
---
 crypto/testmgr.c | 204 ++-
 1 file changed, 66 insertions(+), 138 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 7125ba3..a65b4d5 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -76,11 +76,6 @@ int alg_test(const char *driver, const char *alg, u32 type, u32 mask)
 #define ENCRYPT 1
 #define DECRYPT 0
 
-struct tcrypt_result {
-   struct completion completion;
-   int err;
-};
-
 struct aead_test_suite {
struct {
const struct aead_testvec *vecs;
@@ -155,17 +150,6 @@ static void hexdump(unsigned char *buf, unsigned int len)
buf, len, false);
 }
 
-static void tcrypt_complete(struct crypto_async_request *req, int err)
-{
-   struct tcrypt_result *res = req->data;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   res->err = err;
-   complete(&res->completion);
-}
-
 static int testmgr_alloc_buf(char *buf[XBUFSIZE])
 {
int i;
@@ -193,20 +177,10 @@ static void testmgr_free_buf(char *buf[XBUFSIZE])
free_page((unsigned long)buf[i]);
 }
 
-static int wait_async_op(struct tcrypt_result *tr, int ret)
-{
-   if (ret == -EINPROGRESS || ret == -EBUSY) {
-   wait_for_completion(&tr->completion);
-   reinit_completion(&tr->completion);
-   ret = tr->err;
-   }
-   return ret;
-}
-
 static int ahash_partial_update(struct ahash_request **preq,
struct crypto_ahash *tfm, const struct hash_testvec *template,
void *hash_buff, int k, int temp, struct scatterlist *sg,
-   const char *algo, char *result, struct tcrypt_result *tresult)
+   const char *algo, char *result, struct crypto_wait *wait)
 {
char *state;
struct ahash_request *req;
@@ -236,7 +210,7 @@ static int ahash_partial_update(struct ahash_request **preq,
}
ahash_request_set_callback(req,
CRYPTO_TFM_REQ_MAY_BACKLOG,
-   tcrypt_complete, tresult);
+   crypto_req_done, wait);
 
memcpy(hash_buff, template->plaintext + temp,
template->tap[k]);
@@ -247,7 +221,7 @@ static int ahash_partial_update(struct ahash_request **preq,
pr_err("alg: hash: Failed to import() for %s\n", algo);
goto out;
}
-   ret = wait_async_op(tresult, crypto_ahash_update(req));
+   ret = crypto_wait_req(crypto_ahash_update(req), wait);
if (ret)
goto out;
*preq = req;
@@ -272,7 +246,7 @@ static int __test_hash(struct crypto_ahash *tfm,
char *result;
char *key;
struct ahash_request *req;
-   struct tcrypt_result tresult;
+   struct crypto_wait wait;
void *hash_buff;
char *xbuf[XBUFSIZE];
int ret = -ENOMEM;
@@ -286,7 +260,7 @@ static int __test_hash(struct crypto_ahash *tfm,
if (testmgr_alloc_buf(xbuf))
goto out_nobuf;
 
-   init_completion(&tresult.completion);
+   crypto_init_wait(&wait);
 
req = ahash_request_alloc(tfm, GFP_KERNEL);
if (!req) {
@@ -295,7 +269,7 @@ static int __test_hash(struct crypto_ahash *tfm,
goto out_noreq;
}
ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-  tcrypt_complete, &tresult);
+  crypto_req_done, &wait);
 
j = 0;
for (i = 0; i < tcount; i++) {
@@ -335,26 +309,26 @@ static int __test_hash(struct crypto_ahash *tfm,
 
ahash_request_set_crypt(req, sg, result, template[i].psize);
if (use_digest) {
-   ret = wait_async_op(&tresult, crypto_ahash_digest(req));
+   ret = crypto_wait_req(crypto_ahash_digest(req), &wait);
if (ret) {
pr_err("alg: hash: digest failed on test %d "
   "for %s: ret=%d\n", j, algo, -ret);
goto out;
}
} else {
-   ret = wait_async_op(&tresult, crypto_ahash_init(req));
+   ret = crypto_wait_req(crypto_ahash_init(req), &wait);
if (ret) {
pr_err("alg: hash: init failed on test %d "
   "for %s: ret=%d\n", j, algo, -ret);
goto out;
}
-   ret = wait_async_op(&tresult, crypto_ahash_update(req));
+   ret = crypto_wait_req(crypto_ahash_update(req), &wait);
if (ret) {
pr_err("alg: hash: update failed on test %d "
  

[PATCH v5 11/19] fscrypt: move to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
fscrypt starts several async. crypto ops and waits for them to
complete. Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef 
---
 fs/crypto/crypto.c  | 28 
 fs/crypto/fname.c   | 36 ++--
 fs/crypto/fscrypt_private.h | 10 --
 fs/crypto/keyinfo.c | 21 +++--
 4 files changed, 13 insertions(+), 82 deletions(-)

diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
index c7835df..80a3cad 100644
--- a/fs/crypto/crypto.c
+++ b/fs/crypto/crypto.c
@@ -126,21 +126,6 @@ struct fscrypt_ctx *fscrypt_get_ctx(const struct inode *inode, gfp_t gfp_flags)
 }
 EXPORT_SYMBOL(fscrypt_get_ctx);
 
-/**
- * page_crypt_complete() - completion callback for page crypto
- * @req: The asynchronous cipher request context
- * @res: The result of the cipher operation
- */
-static void page_crypt_complete(struct crypto_async_request *req, int res)
-{
-   struct fscrypt_completion_result *ecr = req->data;
-
-   if (res == -EINPROGRESS)
-   return;
-   ecr->res = res;
-   complete(&ecr->completion);
-}
-
 int fscrypt_do_page_crypto(const struct inode *inode, fscrypt_direction_t rw,
   u64 lblk_num, struct page *src_page,
   struct page *dest_page, unsigned int len,
@@ -151,7 +136,7 @@ int fscrypt_do_page_crypto(const struct inode *inode, fscrypt_direction_t rw,
u8 padding[FS_IV_SIZE - sizeof(__le64)];
} iv;
struct skcipher_request *req = NULL;
-   DECLARE_FS_COMPLETION_RESULT(ecr);
+   DECLARE_CRYPTO_WAIT(wait);
struct scatterlist dst, src;
struct fscrypt_info *ci = inode->i_crypt_info;
struct crypto_skcipher *tfm = ci->ci_ctfm;
@@ -179,7 +164,7 @@ int fscrypt_do_page_crypto(const struct inode *inode, fscrypt_direction_t rw,
 
skcipher_request_set_callback(
req, CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
-   page_crypt_complete, &ecr);
+   crypto_req_done, &wait);
 
sg_init_table(&dst, 1);
sg_set_page(&dst, dest_page, len, offs);
@@ -187,14 +172,9 @@ int fscrypt_do_page_crypto(const struct inode *inode, fscrypt_direction_t rw,
sg_set_page(&src, src_page, len, offs);
skcipher_request_set_crypt(req, &src, &dst, len, &iv);
if (rw == FS_DECRYPT)
-   res = crypto_skcipher_decrypt(req);
+   res = crypto_wait_req(crypto_skcipher_decrypt(req), &wait);
else
-   res = crypto_skcipher_encrypt(req);
-   if (res == -EINPROGRESS || res == -EBUSY) {
-   BUG_ON(req->base.data != &ecr);
-   wait_for_completion(&ecr.completion);
-   res = ecr.res;
-   }
+   res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
skcipher_request_free(req);
if (res) {
printk_ratelimited(KERN_ERR
diff --git a/fs/crypto/fname.c b/fs/crypto/fname.c
index ad9f814..a80a0d3 100644
--- a/fs/crypto/fname.c
+++ b/fs/crypto/fname.c
@@ -15,21 +15,6 @@
 #include "fscrypt_private.h"
 
 /**
- * fname_crypt_complete() - completion callback for filename crypto
- * @req: The asynchronous cipher request context
- * @res: The result of the cipher operation
- */
-static void fname_crypt_complete(struct crypto_async_request *req, int res)
-{
-   struct fscrypt_completion_result *ecr = req->data;
-
-   if (res == -EINPROGRESS)
-   return;
-   ecr->res = res;
-   complete(&ecr->completion);
-}
-
-/**
  * fname_encrypt() - encrypt a filename
  *
  * The caller must have allocated sufficient memory for the @oname string.
@@ -40,7 +25,7 @@ static int fname_encrypt(struct inode *inode,
const struct qstr *iname, struct fscrypt_str *oname)
 {
struct skcipher_request *req = NULL;
-   DECLARE_FS_COMPLETION_RESULT(ecr);
+   DECLARE_CRYPTO_WAIT(wait);
struct fscrypt_info *ci = inode->i_crypt_info;
struct crypto_skcipher *tfm = ci->ci_ctfm;
int res = 0;
@@ -76,17 +61,12 @@ static int fname_encrypt(struct inode *inode,
}
skcipher_request_set_callback(req,
CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
-   fname_crypt_complete, &ecr);
+   crypto_req_done, &wait);
sg_init_one(&sg, oname->name, cryptlen);
skcipher_request_set_crypt(req, &sg, &sg, cryptlen, iv);
 
/* Do the encryption */
-   res = crypto_skcipher_encrypt(req);
-   if (res == -EINPROGRESS || res == -EBUSY) {
-   /* Request is being completed asynchronously; wait for it */
-   wait_for_completion(&ecr.completion);
-   res = ecr.res;
-   }
+   res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
skcipher_request_free(req);
if (res < 0) {
printk_ratelimited(KERN_ERR
@@ -110,7 +90,7 @@ static int fname_decrypt(struct inode *inode,

[PATCH v5 12/19] dm: move dm-verity to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
dm-verity is starting async. crypto ops and waiting for them to complete.
Move it over to generic code doing the same.

This also fixes a possible data corruption bug created by the
use of wait_for_completion_interruptible() without dealing
correctly with an interrupt aborting the wait prior to the
async op finishing.
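
For illustration (not the exact driver code), the unsafe pattern and
its replacement:

    /* old: a signal aborts the wait with -ERESTARTSYS while the async
     * op may still be running and writing into buffers we then reuse */
    ret = wait_for_completion_interruptible(&res->completion);

    /* new: crypto_wait_req() waits non-interruptibly, so control only
     * returns once the request has actually completed */
    wait_for_completion(&wait->completion);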

Signed-off-by: Gilad Ben-Yossef 
---
 drivers/md/dm-verity-target.c | 81 +++
 drivers/md/dm-verity.h|  5 ---
 2 files changed, 20 insertions(+), 66 deletions(-)

diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
index 79f18d4..8df08a8 100644
--- a/drivers/md/dm-verity-target.c
+++ b/drivers/md/dm-verity-target.c
@@ -92,74 +92,33 @@ static sector_t verity_position_at_level(struct dm_verity *v, sector_t block,
return block >> (level * v->hash_per_block_bits);
 }
 
-/*
- * Callback function for asynchrnous crypto API completion notification
- */
-static void verity_op_done(struct crypto_async_request *base, int err)
-{
-   struct verity_result *res = (struct verity_result *)base->data;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   res->err = err;
-   complete(&res->completion);
-}
-
-/*
- * Wait for async crypto API callback
- */
-static inline int verity_complete_op(struct verity_result *res, int ret)
-{
-   switch (ret) {
-   case 0:
-   break;
-
-   case -EINPROGRESS:
-   case -EBUSY:
-   ret = wait_for_completion_interruptible(&res->completion);
-   if (!ret)
-   ret = res->err;
-   reinit_completion(&res->completion);
-   break;
-
-   default:
-   DMERR("verity_wait_hash: crypto op submission failed: %d", ret);
-   }
-
-   if (unlikely(ret < 0))
-   DMERR("verity_wait_hash: crypto op failed: %d", ret);
-
-   return ret;
-}
-
 static int verity_hash_update(struct dm_verity *v, struct ahash_request *req,
const u8 *data, size_t len,
-   struct verity_result *res)
+   struct crypto_wait *wait)
 {
struct scatterlist sg;
 
sg_init_one(&sg, data, len);
ahash_request_set_crypt(req, &sg, NULL, len);
 
-   return verity_complete_op(res, crypto_ahash_update(req));
+   return crypto_wait_req(crypto_ahash_update(req), wait);
 }
 
 /*
  * Wrapper for crypto_ahash_init, which handles verity salting.
  */
 static int verity_hash_init(struct dm_verity *v, struct ahash_request *req,
-   struct verity_result *res)
+   struct crypto_wait *wait)
 {
int r;
 
ahash_request_set_tfm(req, v->tfm);
ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP |
CRYPTO_TFM_REQ_MAY_BACKLOG,
-   verity_op_done, (void *)res);
-   init_completion(&res->completion);
+   crypto_req_done, (void *)wait);
+   crypto_init_wait(wait);
 
-   r = verity_complete_op(res, crypto_ahash_init(req));
+   r = crypto_wait_req(crypto_ahash_init(req), wait);
 
if (unlikely(r < 0)) {
DMERR("crypto_ahash_init failed: %d", r);
@@ -167,18 +126,18 @@ static int verity_hash_init(struct dm_verity *v, struct ahash_request *req,
}
 
if (likely(v->salt_size && (v->version >= 1)))
-   r = verity_hash_update(v, req, v->salt, v->salt_size, res);
+   r = verity_hash_update(v, req, v->salt, v->salt_size, wait);
 
return r;
 }
 
 static int verity_hash_final(struct dm_verity *v, struct ahash_request *req,
-u8 *digest, struct verity_result *res)
+u8 *digest, struct crypto_wait *wait)
 {
int r;
 
if (unlikely(v->salt_size && (!v->version))) {
-   r = verity_hash_update(v, req, v->salt, v->salt_size, res);
+   r = verity_hash_update(v, req, v->salt, v->salt_size, wait);
 
if (r < 0) {
DMERR("verity_hash_final failed updating salt: %d", r);
@@ -187,7 +146,7 @@ static int verity_hash_final(struct dm_verity *v, struct ahash_request *req,
}
 
ahash_request_set_crypt(req, NULL, digest, 0);
-   r = verity_complete_op(res, crypto_ahash_final(req));
+   r = crypto_wait_req(crypto_ahash_final(req), wait);
 out:
return r;
 }
@@ -196,17 +155,17 @@ int verity_hash(struct dm_verity *v, struct ahash_request *req,
const u8 *data, size_t len, u8 *digest)
 {
int r;
-   struct verity_result res;
+   struct crypto_wait wait;
 
-   r = verity_hash_init(v, req, &res);
+   r = verity_hash_init(v, req, &wait);
if (unlikely(r < 0))
goto out;
 
-   r = verity_hash_update(v, req, data, len, &res);
+   r = verity_hash_update(v, req, 

[PATCH v5 14/19] ima: move to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
ima starts several async crypto ops and waits for their completions.
Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef 
Acked-by: Mimi Zohar 
---
 security/integrity/ima/ima_crypto.c | 56 +++--
 1 file changed, 17 insertions(+), 39 deletions(-)

diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c
index 802d5d2..0e4db1fe 100644
--- a/security/integrity/ima/ima_crypto.c
+++ b/security/integrity/ima/ima_crypto.c
@@ -27,11 +27,6 @@
 
 #include "ima.h"
 
-struct ahash_completion {
-   struct completion completion;
-   int err;
-};
-
 /* minimum file size for ahash use */
 static unsigned long ima_ahash_minsize;
 module_param_named(ahash_minsize, ima_ahash_minsize, ulong, 0644);
@@ -196,30 +191,13 @@ static void ima_free_atfm(struct crypto_ahash *tfm)
crypto_free_ahash(tfm);
 }
 
-static void ahash_complete(struct crypto_async_request *req, int err)
+static inline int ahash_wait(int err, struct crypto_wait *wait)
 {
-   struct ahash_completion *res = req->data;
 
-   if (err == -EINPROGRESS)
-   return;
-   res->err = err;
-   complete(&res->completion);
-}
+   err = crypto_wait_req(err, wait);
 
-static int ahash_wait(int err, struct ahash_completion *res)
-{
-   switch (err) {
-   case 0:
-   break;
-   case -EINPROGRESS:
-   case -EBUSY:
-   wait_for_completion(&res->completion);
-   reinit_completion(&res->completion);
-   err = res->err;
-   /* fall through */
-   default:
+   if (err)
pr_crit_ratelimited("ahash calculation failed: err: %d\n", err);
-   }
 
return err;
 }
@@ -233,7 +211,7 @@ static int ima_calc_file_hash_atfm(struct file *file,
int rc, read = 0, rbuf_len, active = 0, ahash_rc = 0;
struct ahash_request *req;
struct scatterlist sg[1];
-   struct ahash_completion res;
+   struct crypto_wait wait;
size_t rbuf_size[2];
 
hash->length = crypto_ahash_digestsize(tfm);
@@ -242,12 +220,12 @@ static int ima_calc_file_hash_atfm(struct file *file,
if (!req)
return -ENOMEM;
 
-   init_completion(&res.completion);
+   crypto_init_wait(&wait);
ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
   CRYPTO_TFM_REQ_MAY_SLEEP,
-  ahash_complete, &res);
+  crypto_req_done, &wait);
 
-   rc = ahash_wait(crypto_ahash_init(req), &res);
+   rc = ahash_wait(crypto_ahash_init(req), &wait);
if (rc)
goto out1;
 
@@ -288,7 +266,7 @@ static int ima_calc_file_hash_atfm(struct file *file,
 * read/request, wait for the completion of the
 * previous ahash_update() request.
 */
-   rc = ahash_wait(ahash_rc, &res);
+   rc = ahash_wait(ahash_rc, &wait);
if (rc)
goto out3;
}
@@ -304,7 +282,7 @@ static int ima_calc_file_hash_atfm(struct file *file,
 * read/request, wait for the completion of the
 * previous ahash_update() request.
 */
-   rc = ahash_wait(ahash_rc, &res);
+   rc = ahash_wait(ahash_rc, &wait);
if (rc)
goto out3;
}
@@ -318,7 +296,7 @@ static int ima_calc_file_hash_atfm(struct file *file,
active = !active; /* swap buffers, if we use two */
}
/* wait for the last update request to complete */
-   rc = ahash_wait(ahash_rc, &res);
+   rc = ahash_wait(ahash_rc, &wait);
 out3:
if (read)
file->f_mode &= ~FMODE_READ;
@@ -327,7 +305,7 @@ static int ima_calc_file_hash_atfm(struct file *file,
 out2:
if (!rc) {
ahash_request_set_crypt(req, NULL, hash->digest, 0);
-   rc = ahash_wait(crypto_ahash_final(req), &res);
+   rc = ahash_wait(crypto_ahash_final(req), &wait);
}
 out1:
ahash_request_free(req);
@@ -527,7 +505,7 @@ static int calc_buffer_ahash_atfm(const void *buf, loff_t len,
 {
struct ahash_request *req;
struct scatterlist sg;
-   struct ahash_completion res;
+   struct crypto_wait wait;
int rc, ahash_rc = 0;
 
hash->length = crypto_ahash_digestsize(tfm);
@@ -536,12 +514,12 @@ static int calc_buffer_ahash_atfm(const void *buf, loff_t len,
if (!req)
return -ENOMEM;
 
-   init_completion(&res.completion);
+   crypto_init_wait(&wait);
ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
   CRYPTO_TFM_REQ_MAY_SLEEP,
-  ahash_complete, &res);
+  

[PATCH v5 13/19] cifs: move to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
cifs starts an async. crypto op and waits for its completion.
Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef 
Acked-by: Pavel Shilovsky 
---
 fs/cifs/smb2ops.c | 30 --
 1 file changed, 4 insertions(+), 26 deletions(-)

diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
index cfacf2c..16fb041 100644
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -1878,22 +1878,6 @@ init_sg(struct smb_rqst *rqst, u8 *sign)
return sg;
 }
 
-struct cifs_crypt_result {
-   int err;
-   struct completion completion;
-};
-
-static void cifs_crypt_complete(struct crypto_async_request *req, int err)
-{
-   struct cifs_crypt_result *res = req->data;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   res->err = err;
-   complete(&res->completion);
-}
-
 static int
 smb2_get_enc_key(struct TCP_Server_Info *server, __u64 ses_id, int enc, u8 *key)
 {
@@ -1934,12 +1918,10 @@ crypt_message(struct TCP_Server_Info *server, struct smb_rqst *rqst, int enc)
struct aead_request *req;
char *iv;
unsigned int iv_len;
-   struct cifs_crypt_result result = {0, };
+   DECLARE_CRYPTO_WAIT(wait);
struct crypto_aead *tfm;
unsigned int crypt_len = le32_to_cpu(tr_hdr->OriginalMessageSize);
 
-   init_completion(&result.completion);
-
rc = smb2_get_enc_key(server, tr_hdr->SessionId, enc, key);
if (rc) {
cifs_dbg(VFS, "%s: Could not get %scryption key\n", __func__,
@@ -1999,14 +1981,10 @@ crypt_message(struct TCP_Server_Info *server, struct smb_rqst *rqst, int enc)
aead_request_set_ad(req, assoc_data_len);
 
aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
- cifs_crypt_complete, &result);
+ crypto_req_done, &wait);
 
-   rc = enc ? crypto_aead_encrypt(req) : crypto_aead_decrypt(req);
-
-   if (rc == -EINPROGRESS || rc == -EBUSY) {
-   wait_for_completion(&result.completion);
-   rc = result.err;
-   }
+   rc = crypto_wait_req(enc ? crypto_aead_encrypt(req)
+   : crypto_aead_decrypt(req), &wait);
 
if (!rc && enc)
memcpy(&tr_hdr->Signature, sign, SMB2_SIGNATURE_SIZE);
-- 
2.1.4



[PATCH v5 15/19] crypto: tcrypt: move to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
tcrypt starts several async crypto ops and waits for their completions.
Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef 
---
 crypto/tcrypt.c | 84 +
 1 file changed, 25 insertions(+), 59 deletions(-)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 0022a18..802aa81 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -79,34 +79,11 @@ static char *check[] = {
NULL
 };
 
-struct tcrypt_result {
-   struct completion completion;
-   int err;
-};
-
-static void tcrypt_complete(struct crypto_async_request *req, int err)
-{
-   struct tcrypt_result *res = req->data;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   res->err = err;
-   complete(&res->completion);
-}
-
 static inline int do_one_aead_op(struct aead_request *req, int ret)
 {
-   if (ret == -EINPROGRESS || ret == -EBUSY) {
-   struct tcrypt_result *tr = req->base.data;
+   struct crypto_wait *wait = req->base.data;
 
-   ret = wait_for_completion_interruptible(&tr->completion);
-   if (!ret)
-   ret = tr->err;
-   reinit_completion(&tr->completion);
-   }
-
-   return ret;
+   return crypto_wait_req(ret, wait);
 }
 
 static int test_aead_jiffies(struct aead_request *req, int enc,
@@ -248,7 +225,7 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
char *axbuf[XBUFSIZE];
unsigned int *b_size;
unsigned int iv_len;
-   struct tcrypt_result result;
+   struct crypto_wait wait;
 
iv = kzalloc(MAX_IVLEN, GFP_KERNEL);
if (!iv)
@@ -284,7 +261,7 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
goto out_notfm;
}
 
-   init_completion(&result.completion);
+   crypto_init_wait(&wait);
printk(KERN_INFO "\ntesting speed of %s (%s) %s\n", algo,
get_driver_name(crypto_aead, tfm), e);
 
@@ -296,7 +273,7 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
}
 
aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
- tcrypt_complete, &result);
+ crypto_req_done, &wait);
 
i = 0;
do {
@@ -397,21 +374,16 @@ static void test_hash_sg_init(struct scatterlist *sg)
 
 static inline int do_one_ahash_op(struct ahash_request *req, int ret)
 {
-   if (ret == -EINPROGRESS || ret == -EBUSY) {
-   struct tcrypt_result *tr = req->base.data;
+   struct crypto_wait *wait = req->base.data;
 
-   wait_for_completion(&tr->completion);
-   reinit_completion(&tr->completion);
-   ret = tr->err;
-   }
-   return ret;
+   return crypto_wait_req(ret, wait);
 }
 
 struct test_mb_ahash_data {
struct scatterlist sg[TVMEMSIZE];
char result[64];
struct ahash_request *req;
-   struct tcrypt_result tresult;
+   struct crypto_wait wait;
char *xbuf[XBUFSIZE];
 };
 
@@ -440,7 +412,7 @@ static void test_mb_ahash_speed(const char *algo, unsigned int sec,
if (testmgr_alloc_buf(data[i].xbuf))
goto out;
 
-   init_completion(&data[i].tresult.completion);
+   crypto_init_wait(&data[i].wait);
 
data[i].req = ahash_request_alloc(tfm, GFP_KERNEL);
if (!data[i].req) {
@@ -449,8 +421,8 @@ static void test_mb_ahash_speed(const char *algo, unsigned int sec,
goto out;
}
 
-   ahash_request_set_callback(data[i].req, 0,
-  tcrypt_complete, &data[i].tresult);
+   ahash_request_set_callback(data[i].req, 0, crypto_req_done,
+  &data[i].wait);
test_hash_sg_init(data[i].sg);
}
 
@@ -492,16 +464,16 @@ static void test_mb_ahash_speed(const char *algo, unsigned int sec,
if (ret)
break;
 
-   complete(&data[k].tresult.completion);
-   data[k].tresult.err = 0;
+   crypto_req_done(&data[k].req->base, 0);
}
 
for (j = 0; j < k; j++) {
-   struct tcrypt_result *tr = &data[j].tresult;
+   struct crypto_wait *wait = &data[j].wait;
+   int wait_ret;
 
-   wait_for_completion(&tr->completion);
-   if (tr->err)
-   ret = tr->err;
+   wait_ret = crypto_wait_req(-EINPROGRESS, wait);
+   if (wait_ret)
+   ret = wait_ret;
}
 
end = get_cycles();
@@ -679,7 +651,7 @@ static void test_ahash_speed_common(const char *algo, unsigned int secs,

[PATCH v5 16/19] crypto: talitos: move to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
The talitos driver starts several async crypto ops and waits for their
completions. Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef 
---
 drivers/crypto/talitos.c | 38 +-
 1 file changed, 5 insertions(+), 33 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index 79791c6..194a307 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -2037,22 +2037,6 @@ static int ahash_import(struct ahash_request *areq, const void *in)
return 0;
 }
 
-struct keyhash_result {
-   struct completion completion;
-   int err;
-};
-
-static void keyhash_complete(struct crypto_async_request *req, int err)
-{
-   struct keyhash_result *res = req->data;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   res->err = err;
-   complete(&res->completion);
-}
-
 static int keyhash(struct crypto_ahash *tfm, const u8 *key, unsigned int keylen,
   u8 *hash)
 {
@@ -2060,10 +2044,10 @@ static int keyhash(struct crypto_ahash *tfm, const u8 *key, unsigned int keylen,
 
struct scatterlist sg[1];
struct ahash_request *req;
-   struct keyhash_result hresult;
+   struct crypto_wait wait;
int ret;
 
-   init_completion(&hresult.completion);
+   crypto_init_wait(&wait);
 
req = ahash_request_alloc(tfm, GFP_KERNEL);
if (!req)
@@ -2072,25 +2056,13 @@ static int keyhash(struct crypto_ahash *tfm, const u8 *key, unsigned int keylen,
/* Keep tfm keylen == 0 during hash of the long key */
ctx->keylen = 0;
ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-  keyhash_complete, &hresult);
+  crypto_req_done, &wait);
 
sg_init_one(&sg[0], key, keylen);
 
ahash_request_set_crypt(req, sg, hash, keylen);
-   ret = crypto_ahash_digest(req);
-   switch (ret) {
-   case 0:
-   break;
-   case -EINPROGRESS:
-   case -EBUSY:
-   ret = wait_for_completion_interruptible(
-   &hresult.completion);
-   if (!ret)
-   ret = hresult.err;
-   break;
-   default:
-   break;
-   }
+   ret = crypto_wait_req(crypto_ahash_digest(req), &wait);
+
ahash_request_free(req);
 
return ret;
-- 
2.1.4



[PATCH v5 18/19] crypto: mediatek: move to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
The mediatek driver starts several async crypto ops and waits for their
completions. Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef 
---
 drivers/crypto/mediatek/mtk-aes.c | 31 +--
 1 file changed, 5 insertions(+), 26 deletions(-)

diff --git a/drivers/crypto/mediatek/mtk-aes.c b/drivers/crypto/mediatek/mtk-aes.c
index 9e845e8..e2c7c95 100644
--- a/drivers/crypto/mediatek/mtk-aes.c
+++ b/drivers/crypto/mediatek/mtk-aes.c
@@ -137,11 +137,6 @@ struct mtk_aes_gcm_ctx {
struct crypto_skcipher *ctr;
 };
 
-struct mtk_aes_gcm_setkey_result {
-   int err;
-   struct completion completion;
-};
-
 struct mtk_aes_drv {
struct list_head dev_list;
/* Device list lock */
@@ -936,17 +931,6 @@ static int mtk_aes_gcm_crypt(struct aead_request *req, u64 mode)
&req->base);
 }
 
-static void mtk_gcm_setkey_done(struct crypto_async_request *req, int err)
-{
-   struct mtk_aes_gcm_setkey_result *result = req->data;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   result->err = err;
-   complete(&result->completion);
-}
-
 /*
  * Because of the hardware limitation, we need to pre-calculate key(H)
  * for the GHASH operation. The result of the encryption operation
@@ -962,7 +946,7 @@ static int mtk_aes_gcm_setkey(struct crypto_aead *aead, const u8 *key,
u32 hash[4];
u8 iv[8];
 
-   struct mtk_aes_gcm_setkey_result result;
+   struct crypto_wait wait;
 
struct scatterlist sg[1];
struct skcipher_request req;
@@ -1002,22 +986,17 @@ static int mtk_aes_gcm_setkey(struct crypto_aead *aead, const u8 *key,
if (!data)
return -ENOMEM;
 
-   init_completion(&data->result.completion);
+   crypto_init_wait(&data->wait);
sg_init_one(data->sg, &data->hash, AES_BLOCK_SIZE);
skcipher_request_set_tfm(&data->req, ctr);
skcipher_request_set_callback(&data->req, CRYPTO_TFM_REQ_MAY_SLEEP |
  CRYPTO_TFM_REQ_MAY_BACKLOG,
- mtk_gcm_setkey_done, &data->result);
+ crypto_req_done, &data->wait);
skcipher_request_set_crypt(&data->req, data->sg, data->sg,
   AES_BLOCK_SIZE, data->iv);
 
-   err = crypto_skcipher_encrypt(&data->req);
-   if (err == -EINPROGRESS || err == -EBUSY) {
-   err = wait_for_completion_interruptible(
-   &data->result.completion);
-   if (!err)
-   err = data->result.err;
-   }
+   err = crypto_wait_req(crypto_skcipher_encrypt(&data->req),
+ &data->wait);
if (err)
goto out;
 
-- 
2.1.4



[PATCH v5 17/19] crypto: qce: move to generic async completion

2017-08-14 Thread Gilad Ben-Yossef
The qce driver starts several async crypto ops and waits for their
completions. Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef 
---
 drivers/crypto/qce/sha.c | 30 --
 1 file changed, 4 insertions(+), 26 deletions(-)

diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
index 47e114a..53227d7 100644
--- a/drivers/crypto/qce/sha.c
+++ b/drivers/crypto/qce/sha.c
@@ -349,28 +349,12 @@ static int qce_ahash_digest(struct ahash_request *req)
return qce->async_req_enqueue(tmpl->qce, &req->base);
 }
 
-struct qce_ahash_result {
-   struct completion completion;
-   int error;
-};
-
-static void qce_digest_complete(struct crypto_async_request *req, int error)
-{
-   struct qce_ahash_result *result = req->data;
-
-   if (error == -EINPROGRESS)
-   return;
-
-   result->error = error;
-   complete(&result->completion);
-}
-
 static int qce_ahash_hmac_setkey(struct crypto_ahash *tfm, const u8 *key,
 unsigned int keylen)
 {
unsigned int digestsize = crypto_ahash_digestsize(tfm);
struct qce_sha_ctx *ctx = crypto_tfm_ctx(&tfm->base);
-   struct qce_ahash_result result;
+   struct crypto_wait wait;
struct ahash_request *req;
struct scatterlist sg;
unsigned int blocksize;
@@ -405,9 +389,9 @@ static int qce_ahash_hmac_setkey(struct crypto_ahash *tfm, const u8 *key,
goto err_free_ahash;
}
 
-   init_completion(&result.completion);
+   crypto_init_wait(&wait);
ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-  qce_digest_complete, &result);
+  crypto_req_done, &wait);
crypto_ahash_clear_flags(ahash_tfm, ~0);
 
buf = kzalloc(keylen + QCE_MAX_ALIGN_SIZE, GFP_KERNEL);
@@ -420,13 +404,7 @@ static int qce_ahash_hmac_setkey(struct crypto_ahash *tfm, const u8 *key,
sg_init_one(&sg, buf, keylen);
ahash_request_set_crypt(req, &sg, ctx->authkey, keylen);
 
-   ret = crypto_ahash_digest(req);
-   if (ret == -EINPROGRESS || ret == -EBUSY) {
-   ret = wait_for_completion_interruptible(&result.completion);
-   if (!ret)
-   ret = result.error;
-   }
-
+   ret = crypto_wait_req(crypto_ahash_digest(req), &wait);
if (ret)
crypto_ahash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
 
-- 
2.1.4



[PATCH v5 19/19] crypto: adapt api sample to use async. op wait

2017-08-14 Thread Gilad Ben-Yossef
The code sample is waiting for an async. crypto op completion.
Adapt sample to use the new generic infrastructure to do the same.

This also fixes a possible data corruption bug created by the
use of wait_for_completion_interruptible() without dealing
correctly with an interrupt aborting the wait prior to the
async op finishing.

Signed-off-by: Gilad Ben-Yossef 
---
 Documentation/crypto/api-samples.rst | 52 +++-
 1 file changed, 10 insertions(+), 42 deletions(-)

diff --git a/Documentation/crypto/api-samples.rst 
b/Documentation/crypto/api-samples.rst
index 2531948..006827e 100644
--- a/Documentation/crypto/api-samples.rst
+++ b/Documentation/crypto/api-samples.rst
@@ -7,59 +7,27 @@ Code Example For Symmetric Key Cipher Operation
 ::
 
 
-struct tcrypt_result {
-struct completion completion;
-int err;
-};
-
 /* tie all data structures together */
 struct skcipher_def {
 struct scatterlist sg;
 struct crypto_skcipher *tfm;
 struct skcipher_request *req;
-struct tcrypt_result result;
+struct crypto_wait wait;
 };
 
-/* Callback function */
-static void test_skcipher_cb(struct crypto_async_request *req, int error)
-{
-struct tcrypt_result *result = req->data;
-
-if (error == -EINPROGRESS)
-return;
-result->err = error;
-complete(&result->completion);
-pr_info("Encryption finished successfully\n");
-}
-
 /* Perform cipher operation */
 static unsigned int test_skcipher_encdec(struct skcipher_def *sk,
  int enc)
 {
-int rc = 0;
+int rc;
 
 if (enc)
-rc = crypto_skcipher_encrypt(sk->req);
+rc = crypto_wait_req(crypto_skcipher_encrypt(sk->req), &sk->wait);
 else
-rc = crypto_skcipher_decrypt(sk->req);
-
-switch (rc) {
-case 0:
-break;
-case -EINPROGRESS:
-case -EBUSY:
-rc = wait_for_completion_interruptible(
-    &sk->result.completion);
-if (!rc && !sk->result.err) {
-    reinit_completion(&sk->result.completion);
-    break;
-}
-default:
-    pr_info("skcipher encrypt returned with %d result %d\n",
-        rc, sk->result.err);
-    break;
-}
-init_completion(&sk->result.completion);
+rc = crypto_wait_req(crypto_skcipher_decrypt(sk->req), &sk->wait);
+
+    if (rc)
+        pr_info("skcipher encrypt returned with result %d\n", rc);
 
 return rc;
 }
@@ -89,8 +57,8 @@ Code Example For Symmetric Key Cipher Operation
 }
 
 skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-  test_skcipher_cb,
-  &sk.result);
+  crypto_req_done,
+  &sk.wait);
 
 /* AES 256 with random key */
    get_random_bytes(&key, 32);
@@ -122,7 +90,7 @@ Code Example For Symmetric Key Cipher Operation
 /* We encrypt one block */
    sg_init_one(&sk.sg, scratchpad, 16);
    skcipher_request_set_crypt(req, &sk.sg, &sk.sg, 16, ivdata);
-    init_completion(&sk.result.completion);
+    crypto_init_wait(&sk.wait);
 
 /* encrypt data */
 ret = test_skcipher_encdec(, 1);
-- 
2.1.4



[PATCH v5 00/19] simplify crypto wait for async op

2017-08-14 Thread Gilad Ben-Yossef
Many users of kernel async. crypto services have a pattern of
starting an async. crypto op and then using a completion
to wait for it to end.

This patch set simplifies this common use case in two ways:

First, by separating the return codes of the case where a
request is queued to a backlog due to the provider being
busy (-EBUSY) from the case the request has failed due
to the provider being busy and backlogging is not enabled
(-EAGAIN).

Next, this change is then built upon to create a generic API
to wait for an async. crypto operation to complete.

The end result is a smaller code base and an API that is
easier to use and more difficult to get wrong.
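
For illustration, the resulting call pattern looks roughly like this (a
minimal sketch built on the helpers this series introduces; request
allocation and error handling are elided, and example_encrypt_sync() is a
hypothetical name):

#include <linux/crypto.h>
#include <crypto/skcipher.h>

/* Run one skcipher request synchronously, backlog included (sketch). */
static int example_encrypt_sync(struct skcipher_request *req)
{
    DECLARE_CRYPTO_WAIT(wait);

    skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                  crypto_req_done, &wait);

    /* Sleeps on -EINPROGRESS/-EBUSY and returns the op's final status. */
    return crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
}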

The patch set was boot tested on x86_64 and arm64 which
at the very least tests the crypto users via testmgr and
tcrypt but I do note that I do not have access to some
of the HW whose drivers are modified nor do I claim I was
able to test all of the corner cases.

The patch set is based upon linux-next release tagged
next-20170811.

Changes from v4:
- Rebase on top of latest algif changes from Stephan
  Mueller.
- Fix typo in ccp patch title.

Changes from v3:
- Instead of changing the return code to indicate
  backlog queueing, change the return code to indicate
  transient busy state, as suggested by Herbert Xu.

Changes from v2:
- Patch title changed from "introduce crypto wait for
  async op" to better reflect the current state.
- Rebase on top of latest linux-next.
- Add a new return code of -EIOCBQUEUED for backlog
  queueing, as suggested by Herbert Xu.
- Transform more users to the new API.
- Update the drbg change to account for new init as
  indicated by Stephan Muller.

Changes from v1:
- Address review comments from Eric Biggers.
- Separated out bug fixes of existing code and rebase
  on top of that patch set.
- Rename 'ecr' to 'wait' in fscrypto code.
- Split patch introducing the new API from the change
  moving over the algif code which it originated from
  to the new API.
- Inline crypto_wait_req().
- Some code indentation fixes.

Gilad Ben-Yossef (19):
  crypto: change transient busy return code to -EAGAIN
  crypto: ccp: use -EAGAIN for transient busy indication
  crypto: remove redundant backlog checks on EBUSY
  crypto: marvell/cesa: remove redundant backlog checks on EBUSY
  crypto: introduce crypto wait for async op
  crypto: move algif to generic async completion
  crypto: move pub key to generic async completion
  crypto: move drbg to generic async completion
  crypto: move gcm to generic async completion
  crypto: move testmgr to generic async completion
  fscrypt: move to generic async completion
  dm: move dm-verity to generic async completion
  cifs: move to generic async completion
  ima: move to generic async completion
  crypto: tcrypt: move to generic async completion
  crypto: talitos: move to generic async completion
  crypto: qce: move to generic async completion
  crypto: mediatek: move to generic async completion
  crypto: adapt api sample to use async. op wait

 Documentation/crypto/api-samples.rst |  52 ++---
 crypto/af_alg.c  |  27 -
 crypto/ahash.c   |  12 +--
 crypto/algapi.c  |   6 +-
 crypto/algif_aead.c  |   8 +-
 crypto/algif_hash.c  |  50 +
 crypto/algif_skcipher.c  |   9 +-
 crypto/api.c |  13 +++
 crypto/asymmetric_keys/public_key.c  |  28 +
 crypto/cryptd.c  |   4 +-
 crypto/cts.c |   6 +-
 crypto/drbg.c|  36 ++-
 crypto/gcm.c |  32 ++
 crypto/lrw.c |   8 +-
 crypto/rsa-pkcs1pad.c|  16 +--
 crypto/tcrypt.c  |  84 +--
 crypto/testmgr.c | 204 ---
 crypto/xts.c |   8 +-
 drivers/crypto/ccp/ccp-crypto-main.c |   8 +-
 drivers/crypto/ccp/ccp-dev.c |   7 +-
 drivers/crypto/marvell/cesa.c|   3 +-
 drivers/crypto/marvell/cesa.h|   2 +-
 drivers/crypto/mediatek/mtk-aes.c|  31 +-
 drivers/crypto/qce/sha.c |  30 +-
 drivers/crypto/talitos.c |  38 +--
 drivers/md/dm-verity-target.c|  81 --
 drivers/md/dm-verity.h   |   5 -
 fs/cifs/smb2ops.c|  30 +-
 fs/crypto/crypto.c   |  28 +
 fs/crypto/fname.c|  36 ++-
 fs/crypto/fscrypt_private.h  |  10 --
 fs/crypto/keyinfo.c  |  21 +---
 include/crypto/drbg.h|   3 +-
 include/crypto/if_alg.h  |  15 +--
 include/linux/crypto.h   |  41 +++
 security/integrity/ima/ima_crypto.c  |  56 +++---
 36 files changed, 311 insertions(+), 737 deletions(-)

-- 
2.1.4



Re: [PATCH v5 2/5] lib: Add zstd modules

2017-08-14 Thread David Sterba
On Fri, Aug 11, 2017 at 09:20:10AM -0400, Chris Mason wrote:
> 
> 
> On 08/10/2017 03:25 PM, Hugo Mills wrote:
> > On Thu, Aug 10, 2017 at 01:41:21PM -0400, Chris Mason wrote:
> >> On 08/10/2017 04:30 AM, Eric Biggers wrote:
> >>>
> >>> Theses benchmarks are misleading because they compress the whole file as a
> >>> single stream without resetting the dictionary, which isn't how data will
> >>> typically be compressed in kernel mode.  With filesystem compression the 
> >>> data
> >>> has to be divided into small chunks that can each be decompressed 
> >>> independently.
> >>> That eliminates one of the primary advantages of Zstandard (support for 
> >>> large
> >>> dictionary sizes).
> >>
> >> I did btrfs benchmarks of kernel trees and other normal data sets as
> >> well.  The numbers were in line with what Nick is posting here.
> >> zstd is a big win over both lzo and zlib from a btrfs point of view.
> >>
> >> It's true Nick's patches only support a single compression level in
> >> btrfs, but that's because btrfs doesn't have a way to pass in the
> >> compression ratio.  It could easily be a mount option, it was just
> >> outside the scope of Nick's initial work.
> > 
> > Could we please not add more mount options? I get that they're easy
> > to implement, but it's a very blunt instrument. What we tend to see
> > (with both nodatacow and compress) is people using the mount options,
> > then asking for exceptions, discovering that they can't do that, and
> > then falling back to doing it with attributes or btrfs properties.
> > Could we just start with btrfs properties this time round, and cut out
> > the mount option part of this cycle.
> > 
> > In the long run, it'd be great to see most of the btrfs-specific
> > mount options get deprecated and ultimately removed entirely, in
> > favour of attributes/properties, where feasible.
> > 
> 
> It's a good point, and as was commented later down I'd just do mount -o 
> compress=zstd:3 or something.

We've solved that already, here's a proposed patch that extends current
mount options,

https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg66248.html

and a patch that could be backported to older kernels so the new mount
options do not break mounts on older kernels with the new syntax

https://patchwork.kernel.org/patch/9845697/



[PATCH 1/2] crypto/chacha20: fix handling of chunked input

2017-08-14 Thread Ard Biesheuvel
Commit 9ae433bc79f9 ("crypto: chacha20 - convert generic and x86 versions
to skcipher") ported the existing chacha20 code to use the new skcipher
API, and introduced a bug along the way. Unfortunately, the tcrypt tests
did not catch the error, and it was only found recently by Tobias.

Stefan kindly diagnosed the error, and proposed a fix which is similar
to the one below, with the exception that 'walk.stride' is used rather
than the hardcoded block size. This does not actually matter in this
case, but it's a better example of how to use the skcipher walk API.

Fixes: 9ae433bc79f9 ("crypto: chacha20 - convert generic and x86 ...")
Cc:  # v4.11+
Cc: Steffen Klassert 
Reported-by: Tobias Brunner 
Signed-off-by: Ard Biesheuvel 
---
 crypto/chacha20_generic.c | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/crypto/chacha20_generic.c b/crypto/chacha20_generic.c
index 8b3c04d625c3..4a45fa4890c0 100644
--- a/crypto/chacha20_generic.c
+++ b/crypto/chacha20_generic.c
@@ -91,9 +91,14 @@ int crypto_chacha20_crypt(struct skcipher_request *req)
crypto_chacha20_init(state, ctx, walk.iv);
 
while (walk.nbytes > 0) {
+   unsigned int nbytes = walk.nbytes;
+
+   if (nbytes < walk.total)
+   nbytes = round_down(nbytes, walk.stride);
+
chacha20_docrypt(state, walk.dst.virt.addr, walk.src.virt.addr,
-walk.nbytes);
-   err = skcipher_walk_done(&walk, 0);
+nbytes);
+   err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
}
 
return err;
-- 
2.11.0



[PATCH 2/2] crypto: testmgr - add chunked test cases for chacha20

2017-08-14 Thread Ard Biesheuvel
We failed to catch a bug in the chacha20 code after porting it to the
skcipher API. We would have caught it if any chunked tests had been
defined, so define some now so we will catch future regressions.
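
For context, this is roughly how a chunked vector is declared (a sketch,
not a full entry; the remaining testvec fields are elided, and the .tap
sizes must sum to the vector length):

/* Sketch: testmgr feeds the 375-byte vector through a 3-entry
 * scatterlist sized by .tap (355 + 4 + 16 == 375). */
static const struct cipher_testvec example_tv = {
    /* .key, .input, .result and the length fields elided for brevity */
    .rlen        = 375,
    .also_non_np = 1,   /* additionally run the single-segment variant */
    .np          = 3,   /* number of scatterlist chunks */
    .tap         = { 375 - 20, 4, 16 },
};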

Signed-off-by: Ard Biesheuvel 
---
 crypto/testmgr.h | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 6ceb0e2758bb..d54971d2d1c8 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -32675,6 +32675,10 @@ static const struct cipher_testvec chacha20_enc_tv_template[] = {
  "\x5b\x86\x2f\x37\x30\xe3\x7c\xfd"
  "\xc4\xfd\x80\x6c\x22\xf2\x21",
.rlen   = 375,
+   .also_non_np = 1,
+   .np = 3,
+   .tap= { 375 - 20, 4, 16 },
+
}, { /* RFC7539 A.2. Test Vector #3 */
.key= "\x1c\x92\x40\xa5\xeb\x55\xd3\x8a"
  "\xf3\x33\x88\x86\x04\xf6\xb5\xf0"
@@ -33049,6 +33053,9 @@ static const struct cipher_testvec chacha20_enc_tv_template[] = {
  "\xa1\xed\xad\xd5\x76\xfa\x24\x8f"
  "\x98",
.rlen   = 1281,
+   .also_non_np = 1,
+   .np = 3,
+   .tap= { 1200, 1, 80 },
},
 };
 
-- 
2.11.0



[PATCH 1/3] crypto: skcipher - export crypto_skcipher_type2

2017-08-14 Thread Corentin Labbe
This patch exports crypto_skcipher_type2 like the other cra_type structures.

Signed-off-by: Corentin Labbe 
---
 crypto/skcipher.c   | 3 ++-
 include/crypto/algapi.h | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 4faa0fd53b0c..c6523826890f 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -893,7 +893,7 @@ static int crypto_skcipher_report(struct sk_buff *skb, struct crypto_alg *alg)
 }
 #endif
 
-static const struct crypto_type crypto_skcipher_type2 = {
+const struct crypto_type crypto_skcipher_type2 = {
.extsize = crypto_skcipher_extsize,
.init_tfm = crypto_skcipher_init_tfm,
.free = crypto_skcipher_free_instance,
@@ -906,6 +906,7 @@ static const struct crypto_type crypto_skcipher_type2 = {
.type = CRYPTO_ALG_TYPE_SKCIPHER,
.tfmsize = offsetof(struct crypto_skcipher, base),
 };
+EXPORT_SYMBOL_GPL(crypto_skcipher_type2);
 
 int crypto_grab_skcipher(struct crypto_skcipher_spawn *spawn,
  const char *name, u32 type, u32 mask)
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index e3cebf640c00..c6a4090ed2ed 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -130,6 +130,7 @@ struct ablkcipher_walk {
 
 extern const struct crypto_type crypto_ablkcipher_type;
 extern const struct crypto_type crypto_blkcipher_type;
+extern const struct crypto_type crypto_skcipher_type2;
 
 void crypto_mod_put(struct crypto_alg *alg);
 
-- 
2.13.0



[PATCH 2/3] crypto: engine - find request type with cra_type

2017-08-14 Thread Corentin Labbe
The current method for finding the request type is based on crypto_tfm_alg_type.

But in the case of skcipher, it is the same as for ablkcipher.
Using cra_type for this instead makes it possible to distinguish between the two.

Signed-off-by: Corentin Labbe 
---
 crypto/crypto_engine.c | 19 ---
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
index 61e7c4e02fd2..74b840749074 100644
--- a/crypto/crypto_engine.c
+++ b/crypto/crypto_engine.c
@@ -38,7 +38,8 @@ static void crypto_pump_requests(struct crypto_engine *engine,
struct ablkcipher_request *breq;
unsigned long flags;
bool was_busy = false;
-   int ret, rtype;
+   int ret;
+   const struct crypto_type *cratype;
 
	spin_lock_irqsave(&engine->queue_lock, flags);
 
@@ -94,7 +95,7 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 
	spin_unlock_irqrestore(&engine->queue_lock, flags);
 
-   rtype = crypto_tfm_alg_type(engine->cur_req->tfm);
+   cratype = engine->cur_req->tfm->__crt_alg->cra_type;
/* Until here we get the request need to be encrypted successfully */
if (!was_busy && engine->prepare_crypt_hardware) {
ret = engine->prepare_crypt_hardware(engine);
@@ -104,8 +105,7 @@ static void crypto_pump_requests(struct crypto_engine *engine,
}
}
 
-   switch (rtype) {
-   case CRYPTO_ALG_TYPE_AHASH:
+   if (cratype == &crypto_ahash_type) {
hreq = ahash_request_cast(engine->cur_req);
if (engine->prepare_hash_request) {
ret = engine->prepare_hash_request(engine, hreq);
@@ -122,7 +122,7 @@ static void crypto_pump_requests(struct crypto_engine *engine,
goto req_err;
}
return;
-   case CRYPTO_ALG_TYPE_ABLKCIPHER:
+   } else if (cratype == &crypto_ablkcipher_type) {
breq = ablkcipher_request_cast(engine->cur_req);
if (engine->prepare_cipher_request) {
ret = engine->prepare_cipher_request(engine, breq);
@@ -139,21 +139,18 @@ static void crypto_pump_requests(struct crypto_engine *engine,
goto req_err;
}
return;
-   default:
+   } else {
	dev_err(engine->dev, "failed to prepare request of unknown type\n");
return;
}
 
 req_err:
-   switch (rtype) {
-   case CRYPTO_ALG_TYPE_AHASH:
+   if (cratype == &crypto_ahash_type) {
hreq = ahash_request_cast(engine->cur_req);
crypto_finalize_hash_request(engine, hreq, ret);
-   break;
-   case CRYPTO_ALG_TYPE_ABLKCIPHER:
+   } else if (cratype == &crypto_ablkcipher_type) {
breq = ablkcipher_request_cast(engine->cur_req);
crypto_finalize_cipher_request(engine, breq, ret);
-   break;
}
return;
 
-- 
2.13.0



[PATCH 3/3] crypto: engine - Permit to enqueue skcipher request

2017-08-14 Thread Corentin Labbe
The crypto engine can currently only enqueue hash and ablkcipher requests.
This patch permits it to enqueue skcipher requests by adding all the necessary
functions.

Signed-off-by: Corentin Labbe 
---
 crypto/crypto_engine.c  | 114 
 include/crypto/engine.h |  14 ++
 2 files changed, 128 insertions(+)

diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
index 74b840749074..8567224d7609 100644
--- a/crypto/crypto_engine.c
+++ b/crypto/crypto_engine.c
@@ -36,6 +36,7 @@ static void crypto_pump_requests(struct crypto_engine *engine,
struct crypto_async_request *async_req, *backlog;
struct ahash_request *hreq;
struct ablkcipher_request *breq;
+   struct skcipher_request *skreq;
unsigned long flags;
bool was_busy = false;
int ret;
@@ -139,6 +140,23 @@ static void crypto_pump_requests(struct crypto_engine *engine,
goto req_err;
}
return;
+   } else if (cratype == &crypto_skcipher_type2) {
+   skreq = skcipher_request_cast(engine->cur_req);
+   if (engine->prepare_skcipher_request) {
+   ret = engine->prepare_skcipher_request(engine, skreq);
+   if (ret) {
+   dev_err(engine->dev, "failed to prepare 
request: %d\n",
+   ret);
+   goto req_err;
+   }
+   engine->cur_req_prepared = true;
+   }
+   ret = engine->skcipher_one_request(engine, skreq);
+   if (ret) {
+   dev_err(engine->dev, "failed to cipher one request from 
queue\n");
+   goto req_err;
+   }
+   return;
} else {
dev_err(engine->dev, "failed to prepare request of unknown 
type\n");
return;
@@ -151,6 +169,9 @@ static void crypto_pump_requests(struct crypto_engine *engine,
} else if (cratype == _ablkcipher_type) {
breq = ablkcipher_request_cast(engine->cur_req);
crypto_finalize_cipher_request(engine, breq, ret);
+   } else if (cratype == &crypto_skcipher_type2) {
+   skreq = skcipher_request_cast(engine->cur_req);
+   crypto_finalize_skcipher_request(engine, skreq, ret);
}
return;
 
@@ -210,6 +231,49 @@ int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
 EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request_to_engine);
 
 /**
+ * crypto_transfer_skcipher_request - transfer the new request into the
+ * enginequeue
+ * @engine: the hardware engine
+ * @req: the request need to be listed into the engine queue
+ */
+int crypto_transfer_skcipher_request(struct crypto_engine *engine,
+struct skcipher_request *req,
+bool need_pump)
+{
+   unsigned long flags;
+   int ret;
+
+   spin_lock_irqsave(&engine->queue_lock, flags);
+
+   if (!engine->running) {
+       spin_unlock_irqrestore(&engine->queue_lock, flags);
+       return -ESHUTDOWN;
+   }
+
+   ret = crypto_enqueue_request(&engine->queue, &req->base);
+
+   if (!engine->busy && need_pump)
+       kthread_queue_work(engine->kworker, &engine->pump_requests);
+
+   spin_unlock_irqrestore(&engine->queue_lock, flags);
+   return ret;
+}
+EXPORT_SYMBOL_GPL(crypto_transfer_skcipher_request);
+
+/**
+ * crypto_transfer_skcipher_request_to_engine - transfer one request to list
+ * into the engine queue
+ * @engine: the hardware engine
+ * @req: the request need to be listed into the engine queue
+ */
+int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
+  struct skcipher_request *req)
+{
+   return crypto_transfer_skcipher_request(engine, req, true);
+}
+EXPORT_SYMBOL_GPL(crypto_transfer_skcipher_request_to_engine);
+
+/**
  * crypto_transfer_hash_request - transfer the new request into the
  * enginequeue
  * @engine: the hardware engine
@@ -289,6 +353,43 @@ void crypto_finalize_cipher_request(struct crypto_engine *engine,
 EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
 
 /**
+ * crypto_finalize_skcipher_request - finalize one request if the request is done
+ * @engine: the hardware engine
+ * @req: the request need to be finalized
+ * @err: error number
+ */
+void crypto_finalize_skcipher_request(struct crypto_engine *engine,
+ struct skcipher_request *req, int err)
+{
+   unsigned long flags;
+   bool finalize_cur_req = false;
+   int ret;
+
+   spin_lock_irqsave(&engine->queue_lock, flags);
+   if (engine->cur_req == &req->base)
+       finalize_cur_req = true;
+   spin_unlock_irqrestore(&engine->queue_lock, flags);
+
+   if (finalize_cur_req) {
+   if 

[PATCH 0/3 v3] crypto: engine - Permit to enqueue skcipher request

2017-08-14 Thread Corentin Labbe
Hello

The crypto engine can currently only enqueue hash and ablkcipher requests.
This patch series permits it to enqueue skcipher requests by adding all the
necessary functions.
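
For illustration, a driver would consume the new helpers roughly like this
(a minimal sketch; struct my_dev and my_get_dev() are hypothetical driver
details, not part of this series):

#include <crypto/engine.h>
#include <crypto/skcipher.h>

struct my_dev {
    struct crypto_engine *engine;
    struct skcipher_request *req;
};

static struct my_dev *my_get_dev(struct skcipher_request *req); /* assumed */

/* .encrypt entry point: hand the request over to the engine queue. */
static int my_skcipher_encrypt(struct skcipher_request *req)
{
    struct my_dev *dd = my_get_dev(req);

    return crypto_transfer_skcipher_request_to_engine(dd->engine, req);
}

/* Completion path (e.g. the IRQ handler): report the result back. */
static void my_skcipher_done(struct my_dev *dd, int err)
{
    crypto_finalize_skcipher_request(dd->engine, dd->req, err);
}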

Changes since v2
- added two patch for finding request type according to its cra_type

Changes since v1
- Aligned to column struct *dev in include
- Added missing mutex_unlock in crypto_engine_start()

Corentin Labbe (3):
  crypto: skcipher - export crypto_skcipher_type2
  crypto: engine - find request type with cra_type
  crypto: engine - Permit to enqueue skcipher request

 crypto/crypto_engine.c  | 133 
 crypto/skcipher.c   |   3 +-
 include/crypto/algapi.h |   1 +
 include/crypto/engine.h |  14 +
 4 files changed, 139 insertions(+), 12 deletions(-)

-- 
2.13.0



Re: scatterwalk.c: Nullpointer dereference

2017-08-14 Thread Stephan Mueller
On Monday, 14 August 2017 at 14:25:49 CEST, Plauth, Max wrote:

Hi Max,

> Dear linux-crypto community,
> 
> I think I might have run into a bug in crypto/scatterwalk.c:
> - at the end of scatterwalk_pagedone, sg_next(walk->sg) is fed as an
> argument to scatterwalk_start(...) - sg_next (lib/scatterlist.c) returns
> NULL in the case of sg_is_last(sg) - In this case, NULL is being fed into
> scatterwalk_start
> - there, a NULL value of *sg leads to a NULL pointer dereference:
> walk->sg = sg;
> walk->offset = sg->offset;
> 
> I stumbled across this issue when I tried to extend the cryptodev-linux
> Kernel module with support for compression algorithms
> (https://github.com/plauth/cryptodev-linux).

You are quite right that this looks like a NULL pointer dereference. But you 
should never run into this problem, because the scatterwalk length definition 
should ensure that this never happens. I.e. the scatterwalk length should not 
be longer than the underlying SGL.

Thus, the bug you report is rather a bug in the scatterlist / scatterwalk 
length definition.
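
To illustrate the invariant (a hypothetical sketch, not cryptodev-linux
code; example_copy_out() is an invented name):

#include <crypto/scatterwalk.h>

/* The walk must never be asked for more bytes than the scatterlist
 * holds, otherwise scatterwalk_pagedone() steps past the last entry
 * and scatterwalk_start() dereferences a NULL sg. */
static void example_copy_out(struct scatterlist *sg,
                             unsigned int sg_total, /* bytes in sg */
                             void *buf, unsigned int nbytes)
{
    if (WARN_ON(nbytes > sg_total))
        return;

    scatterwalk_map_and_copy(buf, sg, 0, nbytes, 0);
}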

Ciao
Stephan


scatterwalk.c: Nullpointer dereference

2017-08-14 Thread Plauth, Max
Dear linux-crypto community,

I think I might have run into a bug in crypto/scatterwalk.c:
- at the end of scatterwalk_pagedone, sg_next(walk->sg) is fed as an argument 
to scatterwalk_start(...)
- sg_next (lib/scatterlist.c) returns NULL in the case of sg_is_last(sg)
- In this case, NULL is being fed into scatterwalk_start
- there, a NULL value of *sg leads to a NULL pointer dereference:
walk->sg = sg;
walk->offset = sg->offset;

I stumbled across this issue when I tried to extend the cryptodev-linux Kernel 
module with support for compression algorithms 
(https://github.com/plauth/cryptodev-linux).

Best regards,
Max

---
Test environment:
Ubuntu Linux 16.04.3 LTS
Platform: x86_64 / amd64
Kernel: 4.10.0-28-generic (also tested with a 4.12.7)
Forked version of the cryptodev-linux kernel module 
(https://github.com/plauth/cryptodev-linux)
Test application: examples/lzo.c

The lzo.c example will induce the following Oops stacktrace:
Aug 14 14:09:13 plauth-ws kernel: [ 2198.234888] BUG: unable to handle kernel 
NULL pointer dereference at 0008
Aug 14 14:09:13 plauth-ws kernel: [ 2198.234918] IP: 
scatterwalk_copychunks+0x137/0x1e0
Aug 14 14:09:13 plauth-ws kernel: [ 2198.234930] PGD 1b4559067 
Aug 14 14:09:13 plauth-ws kernel: [ 2198.234931] PUD 223766067 
Aug 14 14:09:13 plauth-ws kernel: [ 2198.234938] PMD 0 
Aug 14 14:09:13 plauth-ws kernel: [ 2198.234945] 
Aug 14 14:09:13 plauth-ws kernel: [ 2198.234956] Oops:  [#1] SMP
Aug 14 14:09:13 plauth-ws kernel: [ 2198.234965] Modules linked in: 
cryptodev(OE) input_leds dcdbas intel_rapl x86_pkg_temp_thermal 
intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul 
ghash_clmulni_intel pcbc aesni_intel aes_x86_64 crypto_simd glue_helper cryptd 
intel_cstate intel_rapl_perf snd_hda_codec_realtek snd_hda_codec_generic 
lpc_ich snd_hda_codec_hdmi snd_hda_intel snd_hda_codec snd_hda_core mei_me 
snd_hwdep mei snd_pcm snd_seq_midi snd_seq_midi_event snd_rawmidi snd_seq 
snd_seq_device snd_timer snd mac_hid soundcore shpchp parport_pc ppdev lp 
parport autofs4 hid_generic usbhid hid amdgpu amdkfd amd_iommu_v2 radeon i915 
ttm i2c_algo_bit drm_kms_helper syscopyarea ahci sysfillrect sysimgblt e1000e 
libahci fb_sys_fops drm ptp pps_core fjes video
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235117] CPU: 3 PID: 2860 Comm: lzo 
Tainted: G   OE   4.10.0-28-generic #32~16.04.2-Ubuntu
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235118] Hardware name: Dell Inc. 
OptiPlex 9020/06X1TJ, BIOS A09 11/20/2014
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235118] task: 9ceae48e2d00 
task.stack: b25888f28000
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235120] RIP: 
0010:scatterwalk_copychunks+0x137/0x1e0
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235120] RSP: 0018:b25888f2bad0 
EFLAGS: 00010246
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235121] RAX:  RBX: 
0004 RCX: 0ccb
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235121] RDX: 9ceae48e2d00 RSI: 
0ccb RDI: 9ceae2617020
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235121] RBP: b25888f2bb10 R08: 
002b R09: 002b
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235122] R10:  R11: 
b2588111 R12: b25888f2bb28
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235122] R13: b2588119402b R14: 
9ceae48e2d00 R15: 0001
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235123] FS:  7f1317d13700() 
GS:9ceaeeb8() knlGS:
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235123] CS:  0010 DS:  ES:  
CR0: 80050033
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235124] CR2: 0008 CR3: 
00021c985000 CR4: 001406e0
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235124] Call Trace:
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235127]  
scatterwalk_map_and_copy+0x6c/0x80
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235128]  ? lzo_scompress+0x3b/0x70
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235130]  
scomp_acomp_comp_decomp+0xe2/0x230
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235132]  scomp_acomp_compress+0x13/0x20
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235135]  
cryptodev_compr_compress+0x36/0xb0 [cryptodev]
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235136]  
hash_n_crypt.isra.2+0xbe/0x1a0 [cryptodev]
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235137]  crypto_run+0x26c/0x640 
[cryptodev]
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235138]  cryptodev_ioctl+0x290/0x620 
[cryptodev]
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235141]  ? 
tty_insert_flip_string_fixed_flag+0x83/0xe0
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235142]  ? 
tty_flip_buffer_push+0x2b/0x30
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235144]  ? remove_wait_queue+0x4d/0x60
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235145]  ? __wake_up+0x44/0x50
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235146]  ? tty_ldisc_deref+0x16/0x20
Aug 14 14:09:13 plauth-ws kernel: [ 2198.235149]  do_vfs_ioctl+0xa1/0x5f0
Aug 14 

[PATCH] crypto: cavium - add release_firmware to all return case

2017-08-14 Thread Corentin Labbe
Two return cases fail to call release_firmware() and so leak some
memory.

This patch creates a fw_release label (and so a common error path)
and uses it for all return cases.

Detected by CoverityScan, CID#1416422 ("Resource Leak")

Signed-off-by: Corentin Labbe 
---
 drivers/crypto/cavium/cpt/cptpf_main.c | 13 +
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/cavium/cpt/cptpf_main.c b/drivers/crypto/cavium/cpt/cptpf_main.c
index 4119c40e7c4b..34a6d8bf229e 100644
--- a/drivers/crypto/cavium/cpt/cptpf_main.c
+++ b/drivers/crypto/cavium/cpt/cptpf_main.c
@@ -268,8 +268,10 @@ static int cpt_ucode_load_fw(struct cpt_device *cpt, const u8 *fw, bool is_ae)
	mcode = &cpt->mcode[cpt->next_mc_idx];
memcpy(mcode->version, (u8 *)fw_entry->data, CPT_UCODE_VERSION_SZ);
mcode->code_size = ntohl(ucode->code_length) * 2;
-   if (!mcode->code_size)
-   return -EINVAL;
+   if (!mcode->code_size) {
+   ret = -EINVAL;
+   goto fw_release;
+   }
 
mcode->is_ae = is_ae;
mcode->core_mask = 0ULL;
@@ -280,7 +282,8 @@ static int cpt_ucode_load_fw(struct cpt_device *cpt, const u8 *fw, bool is_ae)
				  &mcode->phys_base, GFP_KERNEL);
if (!mcode->code) {
dev_err(dev, "Unable to allocate space for microcode");
-   return -ENOMEM;
+   ret = -ENOMEM;
+   goto fw_release;
}
 
memcpy((void *)mcode->code, (void *)(fw_entry->data + sizeof(*ucode)),
@@ -302,12 +305,14 @@ static int cpt_ucode_load_fw(struct cpt_device *cpt, const u8 *fw, bool is_ae)
ret = do_cpt_init(cpt, mcode);
if (ret) {
dev_err(dev, "do_cpt_init failed with ret: %d\n", ret);
-   return ret;
+   goto fw_release;
}
 
dev_info(dev, "Microcode Loaded %s\n", mcode->version);
mcode->is_mc_valid = 1;
cpt->next_mc_idx++;
+
+fw_release:
release_firmware(fw_entry);
 
return ret;
-- 
2.13.0



Crypto Fixes for 4.13

2017-08-14 Thread Herbert Xu
Hi Linus: 

This push fixes an error path bug in ixp4xx as well as a read
overrun in sha1-avx2.


Please pull from

git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6.git linus


Herbert Xu (1):
  crypto: ixp4xx - Fix error handling path in 'aead_perform()'

megha@linux.intel.com (1):
  crypto: x86/sha1 - Fix reads beyond the number of blocks passed

 arch/x86/crypto/sha1_avx2_x86_64_asm.S |   67 +---
 arch/x86/crypto/sha1_ssse3_glue.c  |2 +-
 drivers/crypto/ixp4xx_crypto.c |6 +--
 3 files changed, 40 insertions(+), 35 deletions(-)

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v8 0/4] crypto: add algif_akcipher user space API

2017-08-14 Thread Marcel Holtmann
Hi Stephan,

>>> The first part is clearly where AF_ALG fits and keyctl does not. This is
>>> provided with the current patch set. As the keyctl API only handles, well,
>>> keys, access to the raw ciphers may not be possible through this API. And
>>> let us face it, a lot of user space code shall support many different
>>> OSes. Thus, if you have a crypto lib in user space who has its own key
>>> management (which is a core element of such libraries and thus cannot be
>>> put into an architecture-dependent code part), having only the keyctl API
>>> on Linux for accelerated asym support may not be helpful.
>> 
>> That argument is just non-sense.
> 
> How interesting. For example, what about NSS with its own key database?

a lot of applications create their own key or certificate database. It also 
means they need to load and reload them over and over again for each process. 
A lot of things are possible, but why keep doing things more complicated than 
they need to be. As I said before, if you only have a hammer ..

Regards

Marcel



Re: [Freedombox-discuss] Hardware Crypto

2017-08-14 Thread Gilad Ben-Yossef
Hi,

On Sun, Aug 13, 2017 at 8:21 PM, Sandy Harris  wrote:
> Showing only the key parts of the message:
>
>> From: John Gilmore 
>
> An exceedingly knowledgeable guy, one we should probably take seriously.
> https://en.wikipedia.org/wiki/John_Gilmore_(activist)
>
>> Most hardware crypto accelerators are useless, ...
>> ... you might as well have
>> just computed the answer in userspace using ordinary instructions.
>
> A strong claim, but one I'm inclined to believe. In the cases where it
> applies, it may be a problem for much of the Linux crypto work.

The claim is mostly true if the data to be processed is in user space,
simply because context switches are so expensive.

However, in the case of in-kernel producers of data, such as dm-crypt,
fscrypt and the recent TLS socket framework, this is not the case.

In that scenario, the processing context and the data are already in kernel
space, and then it becomes a question of what is more efficient - DMA to/from
a crypto HW or doing it on the core - and the result is often measured not
in pure latency of operation but in performance per watt, since in some cases
you might be better off letting CPU cores idle and doing the computation on
HW that is more power conserving.

So it really depends on the specific hardware, work load and operating
conditions.

>
> Some CPUs have special instructions to speed up some crypto
> operations, and not just AES. For example, Intel has them for several
> hashes and for elliptic curve calculations:
> https://software.intel.com/en-us/articles/intel-sha-extensions
> https://en.wikipedia.org/wiki/CLMUL_instruction_set
> https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/polynomial-multiplication-instructions-paper.pdf
>
> These move the goalposts; if doing it "using ordinary instructions" is
> sometimes faster than hardware, then doing it with
> application-specific instructions is even more likely to be faster.
>
>> Even if a /dev/crypto interface existed and was faster for some kinds
>> of operations than just doing the crypto manually, the standard crypto
>> libraries would have to be portably tuned to detect when to use
>> hardware and when to use software.  The libraries generally use
>> hardware if it's available, since they were written with the
>> assumption that nobody would bother with hardware crypto if it was
>> slower than software.
>>
>> "Just make it fast for all cases" is hard when the hardware is poorly
>> designed.  When the hardware is well designed, it *is* faster for all
>> cases.  But that's uncommon.
>>
>> Making this determination in realtime would be a substantial
>> enhancement to each crypto library.  Since it'd have to be written
>> portably (or the maintainers of the portable crypto libraries won't
>> take it back), it couldn't assume any particular timings of any
>> particular driver, either in hardware or software.  So it would have
>> to run some fraction of the calls (perhaps 1%) in more than one
>> driver, and time each one, and then make decisions on which driver to
>> use by default for the other 99% of the calls.  The resulting times
>> differ dramatically, based on many factors, ...
>>
>> One advantage of running some of the calls using both hardware and
>> software is that the library can check that the results match exactly,
>> and abort with a clear message.  That would likely have caught some bugs
>> that snuck through in earlier crypto libraries.
>
> I'm not at all sure I'd want run-time testing of this since, at least
> as a general rule, introducing complications to crypto code is rarely
> a good idea. Such tests at development time seem like a fine idea,
> though; do we have those already?
>
> What about testing when it is time to decide on kernel configuration;
> include a particular module or not? Another issue is whether the
> module choice is all-or-nothing; if there is a hardware RNG can one
> use that without loading the rest of the code for the crypto
> accelerator?

The choice is modular.

Gilad



-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru


random.c: LFSR polynomials are not irreducible/primitive

2017-08-14 Thread Stephan Mueller
Hi Ted,

drivers/char/random.c contains the following comment:

"""
 * Our mixing functions were analyzed by Lacharme, Roeck, Strubel, and
 * Videau in their paper, "The Linux Pseudorandom Number Generator
 * Revisited" (see: http://eprint.iacr.org/2012/251.pdf).  In their
 * paper, they point out that we are not using a true Twisted GFSR,
 * since Matsumoto & Kurita used a trinomial feedback polynomial (that
 * is, with only three taps, instead of the six that we are using).
 * As a result, the resulting polynomial is neither primitive nor
 * irreducible, and hence does not have a maximal period over
 * GF(2**32).  They suggest a slight change to the generator
 * polynomial which improves the resulting TGFSR polynomial to be
 * irreducible, which we have made here.
"""

This comment leads me to believe that the current polynomial is primitive (and 
irreducible).

Strangely, this is not the case as seen with the following code that can be 
used with the mathematical tool called magma. There is a free online version 
of magma available to recheck it: http://magma.maths.usyd.edu.au/calc/

Note, the polynomials used up till 3.12 were primitive and irreducible.

Could you please help me understand why the current polynomials are better 
than the old ones?

Thanks a lot.

F:=GF(2);
F;

P<x>:=PolynomialRing(F);
P;

print "Old polynomials:";

P:=x^128 + x^103 + x^76 + x^51 +x^25 + x + 1;
P;
print "is irreducible: "; IsIrreducible(P);
print "is primitive: "; IsPrimitive(P);

P:=x^32 + x^26 + x^20 + x^14 + x^7 + x + 1;
P;
print "is irreducible: "; IsIrreducible(P);
print "is primitive: "; IsPrimitive(P);

print "New polynomials:";

P:=x^128 + x^104 + x^76 + x^51 +x^25 + x + 1;
P;
print "is irreducible: "; IsIrreducible(P);
print "is primitive: "; IsPrimitive(P);

P:=x^32 + x^26 + x^19 + x^14 + x^7 + x + 1;
P;
print "is irreducible: "; IsIrreducible(P);
print "is primitive: "; IsPrimitive(P);

And obtained:

Finite field of size 2
Univariate Polynomial Ring in x over GF(2)
Old polynomials:
x^128 + x^103 + x^76 + x^51 + x^25 + x + 1
is irreducible:
true
is primitive:
true
x^32 + x^26 + x^20 + x^14 + x^7 + x + 1
is irreducible:
true
is primitive:
true
New polynomials:
x^128 + x^104 + x^76 + x^51 + x^25 + x + 1
is irreducible:
false
is primitive:
false
x^32 + x^26 + x^19 + x^14 + x^7 + x + 1
is irreducible:
false
is primitive:
false


Ciao
Stephan


Re: [PATCH] crypto: caam - properly set IV after {en,de}crypt

2017-08-14 Thread Gilad Ben-Yossef
Hi,

On Thu, Jun 29, 2017 at 1:19 PM, Horia Geantă  wrote:
> On 6/28/2017 4:42 PM, Horia Geantă wrote:
>> On 6/28/2017 4:27 PM, David Gstir wrote:
>>> Certain cipher modes like CTS expect the IV (req->info) of
>>> ablkcipher_request (or equivalently req->iv of skcipher_request) to
>>> contain the last ciphertext block when the {en,de}crypt operation is done.
>>> This is currently not the case for the CAAM driver which in turn breaks
>>> e.g. cts(cbc(aes)) when the CAAM driver is enabled.
>>>
>>> This patch fixes the CAAM driver to properly set the IV after the
>>> {en,de}crypt operation of ablkcipher finishes.
>>>
>>> This issue was revealed by the changes in the SW CTS mode in commit
>>> 0605c41cc53ca ("crypto: cts - Convert to skcipher")
>>>
>>> Cc:  # 4.8+
>>> Signed-off-by: David Gstir 
>> Reviewed-by: Horia Geantă 
>>
> Btw, instead of updating the IV in SW, CAAM engine could be programmed
> to do it - by saving the Context Register of the AES accelerator.
>
> Unfortunately this would require changes in quite a few places: shared
> descriptor, HW S/G generation logic, IV dma (un)mapping and maybe others.
>
> So it's better to have this fix now (which, considering size, is
> appropriate for -stable) and later, if needed, offload IV updating in HW.
>

My apologies for reviving this thread from the dead, but doesn't the patch fail
for in-place decryption since we are copying from req->dst after
the operation is done, and therefore it no longer contains the ciphertext?

I'm asking since I ran into a similar issue in the ccree driver and thought
to deploy a similar fix but could not convince myself why this works.
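
For reference, the pattern I would expect for the decrypt direction is to
snapshot the last ciphertext block from the source before the operation runs
(a hypothetical sketch, not taken from the caam driver; save_next_iv() is an
invented name):

#include <crypto/scatterwalk.h>

/* For decryption the chained IV is the last ciphertext block, so for
 * in-place requests it must be copied from req->src *before* the
 * engine overwrites the buffer with plaintext. */
static void save_next_iv(struct ablkcipher_request *req, u8 *iv,
                         unsigned int ivsize)
{
    scatterwalk_map_and_copy(iv, req->src,
                             req->nbytes - ivsize, ivsize, 0);
}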

Thanks!
Gilad

-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru


Re: [PATCH v8 0/4] crypto: add algif_akcipher user space API

2017-08-14 Thread Stephan Mueller
On Monday, 14 August 2017 at 08:26:22 CEST, Marcel Holtmann wrote:

Hi Marcel,

> > The first part is clearly where AF_ALG fits and keyctl does not. This is
> > provided with the current patch set. As the keyctl API only handles, well,
> > keys, access to the raw ciphers may not be possible through this API. And
> > let us face it, a lot of user space code shall support many different
> > OSes. Thus, if you have a crypto lib in user space who has its own key
> > management (which is a core element of such libraries and thus cannot be
> > put into an architecture-dependent code part), having only the keyctl API
> > on Linux for accelerated asym support may not be helpful.
> 
> That argument is just non-sense.

How interesting. For example, what about NSS with its own key database?

Ciao
Stephan


Re: [PATCH v8 0/4] crypto: add algif_akcipher user space API

2017-08-14 Thread Marcel Holtmann
Hi Stephan,

>> I also would like to have more of those algorithms exposed to userspace,
>> and I'd like to make sure the API is a good fit. There was extensive
>> discussion of AF_ALG akcipher last year. v8 of the patch set updates the
>> implementation but doesn't address the API concerns that kept the previous
>> versions from being merged, so we seem to be at just as much of an impasse
>> as before.
> 
> During last year's discussion, I think we have concluded (and please correct 
> me if I miss something), that the export of the asymmetric HW implementations 
> to user space should
> 
> - allow a streaming mode where the user space uses the kernel as an 
> accelerator (maybe user space has another way of storing keys)

explain to me what a streaming mode with an asymmetric cipher is? They are no 
good for these kinds of operations. If you want streaming mode, then you want a 
symmetric cipher.

The keyring subsystem is actually a good storage for keys. If the keyring 
subsystem can use hardware storage for the keys, even better. However right now 
all the certificates and their associated keys are stored perfectly fine in a 
keyring.

> - allow the private key to be retained in kernel space (or even in hardware 
> only) -- i.e. user space only has a handle to the key for a full asym 
> operation

And that is exactly my point. Even if userspace has the key, let it load into 
the kernel as a key inside a keyring. We do not need two ways for loading 
asymmetric keys. They are by nature more complex than any symmetric key.

> The first part is clearly where AF_ALG fits and keyctl does not. This is 
> provided with the current patch set. As the keyctl API only handles, well, 
> keys, access to the raw ciphers may not be possible through this API. And let 
> us face it, a lot of user space code shall support many different OSes. Thus, 
> if you have a crypto lib in user space who has its own key management (which 
> is a core element of such libraries and thus cannot be put into an 
> architecture-dependent code part), having only the keyctl API on Linux for 
> accelerated asym support may not be helpful.

That argument is just non-sense. The crypto libraries that run on multiple OSes 
already have enough abstraction layers to deal with this. And frankly, it would 
be preferable if hardware-backed keys were handled properly, since that is 
what is really needed.

Also I have to repeat my comment from above. The asymmetric ciphers are really 
bad for any kind of streaming operation. Using them in that way is almost 
always going to end badly.

David Howells has patches for sign/verify/encrypt/decrypt operation. Keep in 
mind that all the parameters of the asymmetric keys are really bound to its 
cipher. So it is pretty obvious that the key itself defines what cipher is 
used. I do not get the wish for raw access to RSA or ECDH for example. You need 
the right set of keys and parameters first. Otherwise it is all garbage and not 
even guaranteed to be cryptographically secure.

> The second part is a keyctl domain. I see in-kernel support for this 
> scenario, 
> but have not yet seen the kernel/user interface nor the user space support.

Actually, have you looked at Mat’s kernel tree and the support for it in ELL?

> Both options are orthogonal, IMHO.

I don’t agree with that. As explained above, asymmetric ciphers are bound to 
their keys. For me this all feels like a hammer approach. You want to treat 
things as nails since that is the only thing you have. However that is not the 
case. We have the keyring subsystem.

> Tadeusz Struck provided a patch to link the kernel crypto API / 
> algif_akcipher 
> with the keys subsystem to allow the second requirement to be implemented in 
> algif_akcipher. That patch is on my desk and I plan to integrate it and even 
> make it generic to allow its use for all different cipher types. I have not 
> yet integrated it to allow a small patch set to be reviewed. If it is the 
> will 
> of the council, I can surely add that code to the initial patch set and 
> resubmit.

For symmetric ciphers this is awesome. For asymmetric ciphers I have no idea on 
how this would work. Once you provided the key handle, then the crypto 
subsystem would have to select the cipher. However that is not how it is 
designed. You are binding a cipher early on. In addition, depending on the 
key, you would need to choose the right hardware to execute on (in case of 
hardware bound keys). It is also not designed for this either.

So I have no idea how you want to overcome the design limitations / choices of 
AF_ALG when it comes to supporting asymmetric ciphers correctly.

Regards

Marcel



Re: [PATCH v8 0/4] crypto: add algif_akcipher user space API

2017-08-14 Thread Stephan Mueller
On Friday, 11 August 2017 at 18:02:55 CEST, Marcel Holtmann wrote:

Hi Marcel,

> > Thanks for the reminder. I have looked at that but I am unsure about
> > whether this one covers asym crypto appropriately, too.
> > 
> > The issue is that some hardware may only offer accelerators without a full
> > blown RSA siggen/ver logic (that pulls in PKCS or OAEP or others). How do
> > you propose to cover raw primitives with keyctl?
> 
> where is such a hardware?

Strangely, I see such support all the time in embedded devices where 
asymmetric acceleration is really necessary.

> And what is the usage of it? Look at what we are
> using asymmetric crypto for at the moment. It is either for sign and verify
> with secure boot etc. Or it is for key exchange purposes.

I understand that this is the core purpose of asymmetric ciphers. Yet, 
asymmetric ciphers are complex and we as kernel developers do not know (or 
shall not mandate?) where the complexity shall be implemented. By forcing 
everything into keyctl, we would insist that the entire complexity of the 
full-blown asym cipher resides in the kernel, without an option for user space 
to implement it.

What we are currently seemingly discuss is the choice of

- keyctl: having all complexity of the entire asym logic including its key 
handling in the kernel with the benefit of the kernel protection of the 
private key

- algif_akcipher (maybe with a link to keyctl): only exporting the cipher 
support and allow user space to decide where the complexity lies

Just to give you an example: A full blown RSA operation (excluding the hashing 
part for signatures) consists of padding and the asymmetric operation. For the 
asymmetric operation, we have sign/verify and encrypt/decrypt (keywrap). There 
are a gazillion padding types out there:

- PKCS1

- OAEP

- SP800-56B: RSAEP, RSADP, RSASVE, RSA-OAEP, RSA-KEM-KWS

And there may be others.

When we talk about encryption/decryption, we have to consider the KDFs 
(SP800-108, RFC 5869, SP800-56A).

When we consider the KDFs, we have to think of the KDF data styles (ASN.1, 
concatenation)

With keyctl, it seems to me that we would need to integrate all that logic into 
the kernel. As all of that is just processing logic, securing it in the kernel 
may not be the right way, as this code does not need the kernel's elevated 
privileges.

Yet, some hardware may provide some/all of this logic. And we want to make 
that available to user space.
> 
> The asymmetric crypto is a means to an end. If it is not for certification
> verification, then it for is creating a symmetric key to be used with a
> symmetric cipher. We have the the asymmetric_keys subsystem for
> representing the nature of this crypto. Also the list of asymmetric ciphers
> is a lot smaller than the symmetric ones.
> 
> What raw primitives? When we are using for example ECDH for Bluetooth where
> you need to create a pairwise symmetric key, then what you really want from
> the cryptographic primitives is this:
> 
> 1) Create public/private key pair

See above, it is my opinion that with asym ciphers, there is a lot of 
complexity and lots of options. I do not think that the kernel API should be a 
limiting factor here, because the kernel simply does not implement a specific 
cipher type.

> 2) Give public key to applications and store the private key safely
> 3) Retrieve public key from remote side and challenge

This assumes that the Linux kernel is always the manager of keys (or the 
gatekeeper to the key store). Are we sure that this is always the case?

Ciao
Stephan


Re: [PATCH v8 0/4] crypto: add algif_akcipher user space API

2017-08-14 Thread Stephan Mueller
On Friday, 11 August 2017 at 21:43:15 CEST, Mat Martineau wrote:

Hi Mat,

> 
> I also would like to have more of those algorithms exposed to userspace,
> and I'd like to make sure the API is a good fit. There was extensive
> discussion of AF_ALG akcipher last year. v8 of the patch set updates the
> implementation but doesn't address the API concerns that kept the previous
> versions from being merged, so we seem to be at just as much of an impasse
> as before.

During last year's discussion, I think we have concluded (and please correct 
me if I miss something), that the export of the asymmetric HW implementations 
to user space should

- allow a streaming mode where the user space uses the kernel as an 
accelerator (maybe user space has another way of storing keys)

- allow the private key to be retained in kernel space (or even in hardware 
only) -- i.e. user space only has a handle to the key for a full asym 
operation

The first part is clearly where AF_ALG fits and keyctl does not. This is 
provided with the current patch set. As the keyctl API only handles, well, 
keys, access to the raw ciphers may not be possible through this API. And let 
us face it, a lot of user space code shall support many different OSes. Thus, 
if you have a crypto lib in user space who has its own key management (which 
is a core element of such libraries and thus cannot be put into an 
architecture-dependent code part), having only the keyctl API on Linux for 
accelerated asym support may not be helpful.

The second part is a keyctl domain. I see in-kernel support for this scenario, 
but have not yet seen the kernel/user interface nor the user space support.

Both options are orthogonal, IMHO.

Tadeusz Struck provided a patch to link the kernel crypto API / algif_akcipher 
with the keys subsystem to allow the second requirement to be implemented in 
algif_akcipher. That patch is on my desk and I plan to integrate it and even 
make it generic to allow its use for all different cipher types. I have not 
yet integrated it to allow a small patch set to be reviewed. If it is the will 
of the council, I can surely add that code to the initial patch set and 
resubmit.


Ciao
Stephan


Re: [PATCH v8 0/4] crypto: add algif_akcipher user space API

2017-08-14 Thread Stephan Mueller
On Sunday, 13 August 2017 at 10:52:00 CEST, Gilad Ben-Yossef wrote:

Hi Gilad,

> While I don't have anything to contribute to the choice between
> keyctl() vs ALG_IF as interfaces for asymmetric  cryptography, I would
> like to point out that there is both interest and HW support for
> private symmetric key operations as well, for example for storage
> encryption via DM-Crypt and fscrypt, so I do hope (and will work on)
> adding some sort of HW key support the crypto API, community
> acceptance withstanding of course.
> 
> So I hope we won't treat the idea of crypto API lack of support for HW
> keys as a long standing immutable argument.

See the patch set that was offered by Tudor regarding the in-kernel or in-
hardware generation of the ECDH private keys. There is nothing that prevents 
us from having such an API for akcipher. In fact, it would even be more or less a 
copy-n-paste job.

Exporting that logic to user space could be done as follows:

- keyctl API is used to trigger the key generation process and to obtain a 
handle

- AF_ALG to perform the asym operation where the key handle from keyctl is 
handed into the kernel. I am aware that this link between AF_ALG and keyctl is 
still missing. But it is on my desk and I am willing to integrate it. The 
integration should even not be specific to algif_akcipher, but to all cipher 
types.

Ciao
Stephan