Re: [PATCH 7/7] crypto: marvell: Add support for chaining crypto requests in TDMA mode

2016-06-15 Thread Boris Brezillon
On Wed, 15 Jun 2016 21:15:34 +0200
Romain Perier  wrote:

> The Cryptographic Engines and Security Accelerators (CESA) supports the
> Multi-Packet Chain Mode. With this mode enabled, multiple tdma requests
> can be chained and processed by the hardware without software
> interferences.

intervention.

> This mode was already activated, however the crypto
> requests were not chained together. By doing so, we reduce significantly

   significantly reduce

> the number of IRQs. Instead of being interrupted at the end of each
> crypto request, we are interrupted at the end of the last cryptographic
> request processed by the engine.
> 
> This commit refactors the code, changes the code architecture and
> adds the required data structures to chain cryptographic requests
> together before sending them to an engine.

Not necessarily before sending them to the engine, it can be done while
the engine is running.
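
For reference, appending to the engine's chain can be sketched like this
(just a sketch: the next/next_dma/cur_dma/flags descriptor fields and the
CESA_TDMA_BREAK_CHAIN flag are assumptions, not taken from the hunks
quoted below):

/*
 * Append a request's tdma chain to the engine's chain. The caller is
 * assumed to hold engine->lock; if the engine is already running, the
 * hardware simply follows the updated next_dma link.
 */
static void mv_cesa_tdma_chain_sketch(struct mv_cesa_engine *engine,
				      struct mv_cesa_req *dreq)
{
	if (!engine->chain.first && !engine->chain.last) {
		engine->chain.first = dreq->chain.first;
		engine->chain.last = dreq->chain.last;
	} else {
		struct mv_cesa_tdma_desc *last = engine->chain.last;

		last->next = dreq->chain.first;
		engine->chain.last = dreq->chain.last;

		/* Assumed flag: do not link across a chain break. */
		if (!(last->flags & CESA_TDMA_BREAK_CHAIN))
			last->next_dma = dreq->chain.first->cur_dma;
	}
}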

> 
> Signed-off-by: Romain Perier 
> ---
>  drivers/crypto/marvell/cesa.c   | 117 +++-
>  drivers/crypto/marvell/cesa.h   |  38 -
>  drivers/crypto/marvell/cipher.c |   3 +-
>  drivers/crypto/marvell/hash.c   |   9 +++-
>  drivers/crypto/marvell/tdma.c   |  81 
>  5 files changed, 218 insertions(+), 30 deletions(-)
> 
> diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
> index f9e6688..33411f6 100644
> --- a/drivers/crypto/marvell/cesa.c
> +++ b/drivers/crypto/marvell/cesa.c
> @@ -32,7 +32,7 @@
>  #include "cesa.h"
>  
>  /* Limit of the crypto queue before reaching the backlog */
> -#define CESA_CRYPTO_DEFAULT_MAX_QLEN 50
> +#define CESA_CRYPTO_DEFAULT_MAX_QLEN 128
>  
>  static int allhwsupport = !IS_ENABLED(CONFIG_CRYPTO_DEV_MV_CESA);
>  module_param_named(allhwsupport, allhwsupport, int, 0444);
> @@ -40,23 +40,83 @@ MODULE_PARM_DESC(allhwsupport, "Enable support for all hardware (even it if over
>  
>  struct mv_cesa_dev *cesa_dev;
>  
> -static void mv_cesa_dequeue_req_unlocked(struct mv_cesa_engine *engine)
> +struct crypto_async_request *mv_cesa_dequeue_req_locked(
> + struct mv_cesa_engine *engine, struct crypto_async_request **backlog)

Coding style issue:

struct crypto_async_request *
mv_cesa_dequeue_req_locked(struct mv_cesa_engine *engine,
			   struct crypto_async_request **backlog)

> +{
> + struct crypto_async_request *req;
> +
> + *backlog = crypto_get_backlog(&engine->queue);
> + req = crypto_dequeue_request(&engine->queue);
> +
> + if (!req)
> + return NULL;
> +
> + return req;
> +}
> +
> +static void mv_cesa_rearm_engine(struct mv_cesa_engine *engine)
>  {
>   struct crypto_async_request *req, *backlog;
>   struct mv_cesa_ctx *ctx;
>  
> - backlog = crypto_get_backlog(&engine->queue);
> - req = crypto_dequeue_request(&engine->queue);
> - engine->req = req;
>  
> + spin_lock_bh(&engine->lock);
> + if (engine->req)
> + goto out_unlock;
> +
> + req = mv_cesa_dequeue_req_locked(engine, &backlog);
>   if (!req)
> - return;
> + goto out_unlock;
> +
> + engine->req = req;
> + spin_unlock_bh(&engine->lock);

I'm not a big fan of those multiple 'unlock() locations', and since
your code is pretty simple I'd prefer seeing something like:

	spin_lock_bh(&engine->lock);
	if (!engine->req) {
		req = mv_cesa_dequeue_req_locked(engine, &backlog);
		engine->req = req;
	}
	spin_unlock_bh(&engine->lock);

	if (!req)
		return;

With req and backlog initialized to NULL at the beginning of the
function.
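
Putting the two together, the whole helper would look like this (just a
sketch assembled from the hunks above):

static void mv_cesa_rearm_engine(struct mv_cesa_engine *engine)
{
	struct crypto_async_request *req = NULL, *backlog = NULL;
	struct mv_cesa_ctx *ctx;

	spin_lock_bh(&engine->lock);
	if (!engine->req) {
		req = mv_cesa_dequeue_req_locked(engine, &backlog);
		engine->req = req;
	}
	spin_unlock_bh(&engine->lock);

	if (!req)
		return;

	if (backlog)
		backlog->complete(backlog, -EINPROGRESS);

	ctx = crypto_tfm_ctx(req->tfm);
	ctx->ops->step(req);
}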

>  
>   if (backlog)
>   backlog->complete(backlog, -EINPROGRESS);
>  
>   ctx = crypto_tfm_ctx(req->tfm);
>   ctx->ops->step(req);
> + return;

Missing blank line.

> +out_unlock:
> + spin_unlock_bh(&engine->lock);
> +}
> +
> +static int mv_cesa_std_process(struct mv_cesa_engine *engine, u32 status)
> +{
> + struct crypto_async_request *req;
> + struct mv_cesa_ctx *ctx;
> + int res;
> +
> + req = engine->req;
> + ctx = crypto_tfm_ctx(req->tfm);
> + res = ctx->ops->process(req, status);
> +
> + if (res == 0) {
> + ctx->ops->complete(req);
> + mv_cesa_engine_enqueue_complete_request(engine, req);
> + } else if (res == -EINPROGRESS) {
> + ctx->ops->step(req);
> + } else {
> + ctx->ops->complete(req);

Do we really have to call ->complete() in this case?

> + }
> +
> + return res;
> +}
> +
> +static int mv_cesa_int_process(struct mv_cesa_engine *engine, u32 status)
> +{
> + if (engine->chain.first && engine->chain.last)
> + return mv_cesa_tdma_process(engine, status);

Missing blank line.

> + return mv_cesa_std_process(engine, status);
> +}
> +
> +static inline void mv_cesa_complete_req(struct mv_cesa_ctx *ctx,
> +					struct crypto_async_request *req, int res)

Re: [PATCH 6/7] crypto: marvell: Adding load balancing between engines

2016-06-15 Thread Boris Brezillon
On Wed, 15 Jun 2016 21:15:33 +0200
Romain Perier  wrote:

> This commit adds support for fine-grained load balancing on
> multi-engine IPs. The engine is pre-selected based on its current load
> and on the weight of the crypto request that is about to be processed.
> The global crypto queue is also moved to each engine. These changes are

to the mv_cesa_engine object.

> useful for preparing the code to support TDMA chaining between crypto
> requests, because each tdma chain will be handled per engine.

These changes are required to allow chaining crypto requests at the DMA
level.

> By using
> a crypto queue per engine, we make sure that we keep the state of the
> tdma chain synchronized with the crypto queue. We also reduce contention
> on 'cesa_dev->lock' and improve parallelism.
> 
> Signed-off-by: Romain Perier 
> ---
>  drivers/crypto/marvell/cesa.c   | 30 +--
>  drivers/crypto/marvell/cesa.h   | 26 +++--
>  drivers/crypto/marvell/cipher.c | 59 ++---
>  drivers/crypto/marvell/hash.c   | 65 +++--
>  4 files changed, 97 insertions(+), 83 deletions(-)
> 

[...]

> diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c
> index fbaae2f..02aa38f 100644
> --- a/drivers/crypto/marvell/cipher.c
> +++ b/drivers/crypto/marvell/cipher.c
> @@ -89,6 +89,9 @@ static void mv_cesa_ablkcipher_std_step(struct ablkcipher_request *req)
>   size_t  len = min_t(size_t, req->nbytes - sreq->offset,
>   CESA_SA_SRAM_PAYLOAD_SIZE);
>  
> + mv_cesa_adjust_op(engine, &sreq->op);
> + memcpy_toio(engine->sram, &sreq->op, sizeof(sreq->op));
> +
>   len = sg_pcopy_to_buffer(req->src, creq->src_nents,
>engine->sram + CESA_SA_DATA_SRAM_OFFSET,
>len, sreq->offset);
> @@ -167,12 +170,9 @@ mv_cesa_ablkcipher_std_prepare(struct ablkcipher_request *req)
>  {
>   struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(req);
>   struct mv_cesa_ablkcipher_std_req *sreq = &creq->req.std;
> - struct mv_cesa_engine *engine = sreq->base.engine;
>  
>   sreq->size = 0;
>   sreq->offset = 0;
> - mv_cesa_adjust_op(engine, &sreq->op);
> - memcpy_toio(engine->sram, &sreq->op, sizeof(sreq->op));

Are these changes really related to this load balancing support?
AFAICT, it's something that could have been done earlier, and is not
dependent on the changes you're introducing here, but maybe I'm missing
something.

>  }

[...]

>  static int mv_cesa_ecb_aes_encrypt(struct ablkcipher_request *req)
> diff --git a/drivers/crypto/marvell/hash.c b/drivers/crypto/marvell/hash.c
> index f7f84cc..5946a69 100644
> --- a/drivers/crypto/marvell/hash.c
> +++ b/drivers/crypto/marvell/hash.c
> @@ -162,6 +162,15 @@ static void mv_cesa_ahash_std_step(struct ahash_request *req)
>   unsigned int new_cache_ptr = 0;
>   u32 frag_mode;
>   size_t  len;
> + unsigned int digsize;
> + int i;
> +
> + mv_cesa_adjust_op(engine, &creq->op_tmpl);
> + memcpy_toio(engine->sram, &creq->op_tmpl, sizeof(creq->op_tmpl));
> +
> + digsize = crypto_ahash_digestsize(crypto_ahash_reqtfm(req));
> + for (i = 0; i < digsize / 4; i++)
> + writel_relaxed(creq->state[i], engine->regs + CESA_IVDIG(i));
>  
>   if (creq->cache_ptr)
>   memcpy_toio(engine->sram + CESA_SA_DATA_SRAM_OFFSET,
> @@ -265,11 +274,8 @@ static void mv_cesa_ahash_std_prepare(struct ahash_request *req)
>  {
>   struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
>   struct mv_cesa_ahash_std_req *sreq = &creq->req.std;
> - struct mv_cesa_engine *engine = sreq->base.engine;
>  
>   sreq->offset = 0;
> - mv_cesa_adjust_op(engine, &creq->op_tmpl);
> - memcpy_toio(engine->sram, &creq->op_tmpl, sizeof(creq->op_tmpl));

Same as above: it doesn't seem related to the load balancing stuff.

>  }

-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com


Re: [PATCH 5/7] crypto: marvell: Adding a complete operation for async requests

2016-06-15 Thread Boris Brezillon
On Wed, 15 Jun 2016 21:15:32 +0200
Romain Perier  wrote:

> So far, the 'process' operation was used to check if the current request
> was correctly handled by the engine, if it was the case it copied
> information from the SRAM to the main memory. Now, we split this
> operation. We keep the 'process' operation, which still checks if the
> request was correctly handled by the engine or not, then we add a new
> operation for completion. The 'complete' method copies the content of
> the SRAM to memory. This will soon become useful if we want to call
> the process and the complete operations from different locations
> depending on the type of the request (different cleanup logic).
> 
> Signed-off-by: Romain Perier 
> ---
>  drivers/crypto/marvell/cesa.c   |  1 +
>  drivers/crypto/marvell/cesa.h   |  3 +++
>  drivers/crypto/marvell/cipher.c | 47 -
>  drivers/crypto/marvell/hash.c   | 22 ++-
>  4 files changed, 44 insertions(+), 29 deletions(-)
> 
> diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
> index fe04d1b..af96426 100644
> --- a/drivers/crypto/marvell/cesa.c
> +++ b/drivers/crypto/marvell/cesa.c
> @@ -98,6 +98,7 @@ static irqreturn_t mv_cesa_int(int irq, void *priv)
>   engine->req = NULL;
>   mv_cesa_dequeue_req_unlocked(engine);
>   spin_unlock_bh(&engine->lock);
> + ctx->ops->complete(req);
>   ctx->ops->cleanup(req);
>   local_bh_disable();
>   req->complete(req, res);
> diff --git a/drivers/crypto/marvell/cesa.h b/drivers/crypto/marvell/cesa.h
> index 158ff82..32de08b 100644
> --- a/drivers/crypto/marvell/cesa.h
> +++ b/drivers/crypto/marvell/cesa.h
> @@ -456,6 +456,8 @@ struct mv_cesa_engine {
>   *   code)
>   * @step:	launch the crypto operation on the next chunk
>   * @cleanup: cleanup the crypto request (release associated data)
> + * @complete:  complete the request, i.e copy result from sram or contexts
> + *   when it is needed.
>   */
>  struct mv_cesa_req_ops {
>   void (*prepare)(struct crypto_async_request *req,
> @@ -463,6 +465,7 @@ struct mv_cesa_req_ops {
>   int (*process)(struct crypto_async_request *req, u32 status);
>   void (*step)(struct crypto_async_request *req);
>   void (*cleanup)(struct crypto_async_request *req);
> + void (*complete)(struct crypto_async_request *req);
>  };
>  
>  /**
> diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c
> index 15d2c5a..fbaae2f 100644
> --- a/drivers/crypto/marvell/cipher.c
> +++ b/drivers/crypto/marvell/cipher.c
> @@ -118,7 +118,6 @@ static int mv_cesa_ablkcipher_std_process(struct ablkcipher_request *req,
>   struct mv_cesa_ablkcipher_std_req *sreq = &creq->req.std;
>   struct mv_cesa_engine *engine = sreq->base.engine;
>   size_t len;
> - unsigned int ivsize;
>  
>   len = sg_pcopy_from_buffer(req->dst, creq->dst_nents,
>  engine->sram + CESA_SA_DATA_SRAM_OFFSET,
> @@ -128,10 +127,6 @@ static int mv_cesa_ablkcipher_std_process(struct ablkcipher_request *req,
>   if (sreq->offset < req->nbytes)
>   return -EINPROGRESS;
>  
> - ivsize = crypto_ablkcipher_ivsize(crypto_ablkcipher_reqtfm(req));
> - memcpy_fromio(req->info,
> -   engine->sram + CESA_SA_CRYPT_IV_SRAM_OFFSET, ivsize);
> -
>   return 0;
>  }
>  
> @@ -141,21 +136,9 @@ static int mv_cesa_ablkcipher_process(struct crypto_async_request *req,
>   struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req);
>   struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(ablkreq);
>  
> - if (mv_cesa_req_get_type(&creq->req.base) == CESA_DMA_REQ) {
> - int ret;
> - struct mv_cesa_req *basereq;
> - unsigned int ivsize;
> -
> - ret = mv_cesa_dma_process(&creq->req.base, status);
> - if (ret)
> - return ret;
> + if (mv_cesa_req_get_type(&creq->req.base) == CESA_DMA_REQ)
> + return mv_cesa_dma_process(&creq->req.base, status);
>  
> - basereq = &creq->req.base;
> - ivsize = crypto_ablkcipher_ivsize(
> -  crypto_ablkcipher_reqtfm(ablkreq));
> - memcpy_fromio(ablkreq->info, basereq->chain.last->data, ivsize);
> - return ret;
> - }
>   return mv_cesa_ablkcipher_std_process(ablkreq, status);
>  }
>  
> @@ -197,6 +180,7 @@ static inline void mv_cesa_ablkcipher_prepare(struct crypto_async_request *req,
>  {
>   struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req);
>   struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(ablkreq);
> +

Nit: not sure you should mix this cosmetic change with the other
changes.

Re: [PATCH 3/7] crypto: marvell: Copy IV vectors by DMA transfers for acipher requests

2016-06-15 Thread Boris Brezillon
On Wed, 15 Jun 2016 21:15:30 +0200
Romain Perier  wrote:

> @@ -135,23 +140,23 @@ static int mv_cesa_ablkcipher_process(struct crypto_async_request *req,
>  {
>   struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req);
>   struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(ablkreq);
> - struct mv_cesa_ablkcipher_std_req *sreq = &creq->req.std;
> - struct mv_cesa_engine *engine = sreq->base.engine;
> - int ret;
>  
> - if (creq->req.base.type == CESA_DMA_REQ)
> + if (creq->req.base.type == CESA_DMA_REQ) {
> + int ret;
> + struct mv_cesa_tdma_req *dreq;
> + unsigned int ivsize;
> +
>   ret = mv_cesa_dma_process(&creq->req.dma, status);
> - else
> - ret = mv_cesa_ablkcipher_std_process(ablkreq, status);
> + if (ret)
> + return ret;
>  
> - if (ret)
> + dreq = &creq->req.dma;
> + ivsize = crypto_ablkcipher_ivsize(
> +  crypto_ablkcipher_reqtfm(ablkreq));
> + memcpy_fromio(ablkreq->info, dreq->chain.last->data, ivsize);

Just use memcpy() here: you're not copying from an iomem region here.
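
The distinction matters because engine->sram is an ioremap'd region,
while the IV buffer hanging off chain.last->data comes from the
'cesa_iv' dma_pool, i.e. regular kernel memory. A minimal sketch,
assuming chain.last->data does point at the iv_pool allocation:

	/* SRAM: iomem, so the io accessor is mandatory. */
	memcpy_fromio(req->info,
		      engine->sram + CESA_SA_CRYPT_IV_SRAM_OFFSET, ivsize);

	/* IV buffer from dma_pool_alloc(): plain kernel memory. */
	memcpy(ablkreq->info, dreq->chain.last->data, ivsize);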

-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com


Re: [PATCH 4/7] crypto: marvell: Moving the tdma chain out of mv_cesa_tdma_req

2016-06-15 Thread Boris Brezillon
On Wed, 15 Jun 2016 21:15:31 +0200
Romain Perier  wrote:

> Actually the only way to access the tdma chain is to use the 'req' union

Currently, ...

> from a mv_cesa_{ablkcipher,ahash}. This will soon become a problem if we
> want to handle the TDMA chaining vs standard/non-DMA processing in a
> generic way (with generic functions at the cesa.c level detecting
> whether the request should be queued at the DMA level or not). Hence the
> decision to move the chain field a the mv_cesa_req level at the expense

   at

> of adding 2 void * fields to all request contexts (including non-DMA
> ones). To limit the overhead, we get rid of the type field, which can
> now be deduced from the req->chain.first value.
> 
> Signed-off-by: Romain Perier 
> ---
>  drivers/crypto/marvell/cesa.c   |  3 ++-
>  drivers/crypto/marvell/cesa.h   | 31 +--
>  drivers/crypto/marvell/cipher.c | 40 ++--
>  drivers/crypto/marvell/hash.c   | 36 +++-
>  drivers/crypto/marvell/tdma.c   |  8 
>  5 files changed, 56 insertions(+), 62 deletions(-)
> 
> diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
> index 93700cd..fe04d1b 100644
> --- a/drivers/crypto/marvell/cesa.c
> +++ b/drivers/crypto/marvell/cesa.c
> @@ -111,7 +111,8 @@ static irqreturn_t mv_cesa_int(int irq, void *priv)
>   return ret;
>  }
>  
> -int mv_cesa_queue_req(struct crypto_async_request *req)
> +int mv_cesa_queue_req(struct crypto_async_request *req,
> +   struct mv_cesa_req *creq)
>  {
>   int ret;
>   int i;
> diff --git a/drivers/crypto/marvell/cesa.h b/drivers/crypto/marvell/cesa.h
> index 74b84bd..158ff82 100644
> --- a/drivers/crypto/marvell/cesa.h
> +++ b/drivers/crypto/marvell/cesa.h
> @@ -509,21 +509,11 @@ enum mv_cesa_req_type {
>  
>  /**
>   * struct mv_cesa_req - CESA request
> - * @type:	request type
>   * @engine:  engine associated with this request
> + * @chain:   list of tdma descriptors associated  with this request

   ^ extra white space.

>   */
>  struct mv_cesa_req {
> - enum mv_cesa_req_type type;
>   struct mv_cesa_engine *engine;
> -};
> -
> -/**
> - * struct mv_cesa_tdma_req - CESA TDMA request
> - * @base:base information
> - * @chain:   TDMA chain
> - */
> -struct mv_cesa_tdma_req {
> - struct mv_cesa_req base;
>   struct mv_cesa_tdma_chain chain;
>  };
>  
> @@ -562,7 +552,6 @@ struct mv_cesa_ablkcipher_std_req {
>  struct mv_cesa_ablkcipher_req {
>   union {
>   struct mv_cesa_req base;
> - struct mv_cesa_tdma_req dma;
>   struct mv_cesa_ablkcipher_std_req std;

Now that the dma specific fields are part of the base request there's no
reason to keep this union.

You can just put struct mv_cesa_req base; directly under struct
mv_cesa_ablkcipher_req, and move mv_cesa_ablkcipher_std_req fields in
mv_cesa_ablkcipher_req.
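
I.e. something like this (sketch only; the field names come from the
hunks in this series, the exact types are guesses):

struct mv_cesa_ablkcipher_req {
	struct mv_cesa_req base;
	struct mv_cesa_op_ctx op;	/* former std.op */
	unsigned int size;		/* former std.size */
	unsigned int offset;		/* former std.offset */
	int src_nents;
	int dst_nents;
};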

>   } req;
>   int src_nents;
> @@ -587,7 +576,6 @@ struct mv_cesa_ahash_std_req {
>   * @cache_dma:   DMA address of the cache buffer
>   */
>  struct mv_cesa_ahash_dma_req {
> - struct mv_cesa_tdma_req base;
>   u8 *padding;
>   dma_addr_t padding_dma;
>   u8 *cache;
> @@ -625,6 +613,12 @@ struct mv_cesa_ahash_req {
>  
>  extern struct mv_cesa_dev *cesa_dev;
>  
> +static inline enum mv_cesa_req_type
> +mv_cesa_req_get_type(struct mv_cesa_req *req)
> +{
> + return req->chain.first ? CESA_DMA_REQ : CESA_STD_REQ;
> +}
> +
>  static inline void mv_cesa_update_op_cfg(struct mv_cesa_op_ctx *op,
>u32 cfg, u32 mask)
>  {
> @@ -697,7 +691,8 @@ static inline bool mv_cesa_mac_op_is_first_frag(const struct mv_cesa_op_ctx *op)
>   CESA_SA_DESC_CFG_FIRST_FRAG;
>  }
>  
> -int mv_cesa_queue_req(struct crypto_async_request *req);
> +int mv_cesa_queue_req(struct crypto_async_request *req,
> +   struct mv_cesa_req *creq);
>  
>  /*
>   * Helper function that indicates whether a crypto request needs to be
> @@ -767,9 +762,9 @@ static inline bool mv_cesa_req_dma_iter_next_op(struct mv_cesa_dma_iter *iter)
>   return iter->op_len;
>  }
>  
> -void mv_cesa_dma_step(struct mv_cesa_tdma_req *dreq);
> +void mv_cesa_dma_step(struct mv_cesa_req *dreq);
>  
> -static inline int mv_cesa_dma_process(struct mv_cesa_tdma_req *dreq,
> +static inline int mv_cesa_dma_process(struct mv_cesa_req *dreq,
> u32 status)
>  {
>   if (!(status & CESA_SA_INT_ACC0_IDMA_DONE))
> @@ -781,10 +776,10 @@ static inline int mv_cesa_dma_process(struct mv_cesa_tdma_req *dreq,
>   return 0;
>  }
>  
> -void mv_cesa_dma_prepare(struct mv_cesa_tdma_req *dreq,
> +void mv_cesa_dma_prepare(struct mv_cesa_req *dreq,
>struct mv_cesa_engine *engine);
> +void mv_cesa_dma_cleanup(struct mv_cesa_req *dreq);
>

Re: [PATCH 3/7] crypto: marvell: Copy IV vectors by DMA transfers for acipher requests

2016-06-15 Thread Boris Brezillon
On Wed, 15 Jun 2016 21:15:30 +0200
Romain Perier  wrote:

> Adding a TDMA descriptor at the end of the request for copying the
> output IV vector via a DMA transfer. This is required for processing
> cipher requests asynchroniously in chained mode, otherwise the content

  asynchronously

> of the IV vector will be overwriten for each new finished request.

BTW, not sure the term 'asynchronously' is appropriate here. The
standard (AKA non-DMA) processing is also asynchronous. The real reason
here is that you want to chain the requests and offload as much
processing as possible to the DMA and crypto engine. And as you
explained, this is only possible if we retrieve the updated IV using
DMA. 
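
A sketch of what the IV-copy helper then has to do; only
mv_cesa_dma_add_iv_op() and the iv_pool appear in the patch below, so
mv_cesa_dma_add_desc() and the tdma descriptor field names are
assumptions here:

int mv_cesa_dma_add_iv_op(struct mv_cesa_tdma_chain *chain, dma_addr_t src,
			  u32 size, u32 flags, gfp_t gfp_flags)
{
	struct mv_cesa_tdma_desc *tdma;
	dma_addr_t dma_handle;
	u8 *iv;

	/* Append a new descriptor at the end of the chain... */
	tdma = mv_cesa_dma_add_desc(chain, gfp_flags);
	if (IS_ERR(tdma))
		return PTR_ERR(tdma);

	/* ...and a DMA-able buffer the engine can copy the IV into. */
	iv = dma_pool_alloc(cesa_dev->dma->iv_pool, gfp_flags, &dma_handle);
	if (!iv)
		return -ENOMEM;

	tdma->byte_cnt = size;
	tdma->src = src;
	tdma->dst = dma_handle;
	tdma->data = iv;
	tdma->flags = flags | CESA_TDMA_IV;

	return 0;
}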

> 
> Signed-off-by: Romain Perier 
> ---
>  drivers/crypto/marvell/cesa.c   |  4 
>  drivers/crypto/marvell/cesa.h   |  5 +
>  drivers/crypto/marvell/cipher.c | 40 +++-
>  drivers/crypto/marvell/tdma.c   | 29 +
>  4 files changed, 65 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
> index fb403e1..93700cd 100644
> --- a/drivers/crypto/marvell/cesa.c
> +++ b/drivers/crypto/marvell/cesa.c
> @@ -312,6 +312,10 @@ static int mv_cesa_dev_dma_init(struct mv_cesa_dev *cesa)
>   if (!dma->padding_pool)
>   return -ENOMEM;
>  
> + dma->iv_pool = dmam_pool_create("cesa_iv", dev, 16, 1, 0);
> + if (!dma->iv_pool)
> + return -ENOMEM;
> +
>   cesa->dma = dma;
>  
>   return 0;
> diff --git a/drivers/crypto/marvell/cesa.h b/drivers/crypto/marvell/cesa.h
> index 74071e4..74b84bd 100644
> --- a/drivers/crypto/marvell/cesa.h
> +++ b/drivers/crypto/marvell/cesa.h
> @@ -275,6 +275,7 @@ struct mv_cesa_op_ctx {
>  #define CESA_TDMA_DUMMY  0
>  #define CESA_TDMA_DATA   1
>  #define CESA_TDMA_OP 2
> +#define CESA_TDMA_IV 4

Should be 3 and not 4: TDMA_TYPE is an enum, not a bit field.
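I.e. the descriptor type values should simply stay sequential:

#define CESA_TDMA_DUMMY		0
#define CESA_TDMA_DATA		1
#define CESA_TDMA_OP		2
#define CESA_TDMA_IV		3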

>  
>  /**
>   * struct mv_cesa_tdma_desc - TDMA descriptor
> @@ -390,6 +391,7 @@ struct mv_cesa_dev_dma {
>   struct dma_pool *op_pool;
>   struct dma_pool *cache_pool;
>   struct dma_pool *padding_pool;
> + struct dma_pool *iv_pool;
>  };
>  
>  /**
> @@ -790,6 +792,9 @@ mv_cesa_tdma_desc_iter_init(struct mv_cesa_tdma_chain *chain)
>   memset(chain, 0, sizeof(*chain));
>  }
>  
> +int mv_cesa_dma_add_iv_op(struct mv_cesa_tdma_chain *chain, dma_addr_t src,
> +   u32 size, u32 flags, gfp_t gfp_flags);
> +
>  struct mv_cesa_op_ctx *mv_cesa_dma_add_op(struct mv_cesa_tdma_chain *chain,
>   const struct mv_cesa_op_ctx *op_templ,
>   bool skip_ctx,
> diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c
> index 8d0fabb..f42620e 100644
> --- a/drivers/crypto/marvell/cipher.c
> +++ b/drivers/crypto/marvell/cipher.c
> @@ -118,6 +118,7 @@ static int mv_cesa_ablkcipher_std_process(struct ablkcipher_request *req,
>   struct mv_cesa_ablkcipher_std_req *sreq = &creq->req.std;
>   struct mv_cesa_engine *engine = sreq->base.engine;
>   size_t len;
> + unsigned int ivsize;
>  
>   len = sg_pcopy_from_buffer(req->dst, creq->dst_nents,
>  engine->sram + CESA_SA_DATA_SRAM_OFFSET,
> @@ -127,6 +128,10 @@ static int mv_cesa_ablkcipher_std_process(struct ablkcipher_request *req,
>   if (sreq->offset < req->nbytes)
>   return -EINPROGRESS;
>  
> + ivsize = crypto_ablkcipher_ivsize(crypto_ablkcipher_reqtfm(req));
> + memcpy_fromio(req->info,
> +   engine->sram + CESA_SA_CRYPT_IV_SRAM_OFFSET, ivsize);
> +
>   return 0;
>  }
>  
> @@ -135,23 +140,23 @@ static int mv_cesa_ablkcipher_process(struct crypto_async_request *req,
>  {
>   struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req);
>   struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(ablkreq);
> - struct mv_cesa_ablkcipher_std_req *sreq = &creq->req.std;
> - struct mv_cesa_engine *engine = sreq->base.engine;
> - int ret;
>  
> - if (creq->req.base.type == CESA_DMA_REQ)
> + if (creq->req.base.type == CESA_DMA_REQ) {
> + int ret;
> + struct mv_cesa_tdma_req *dreq;
> + unsigned int ivsize;
> +
>   ret = mv_cesa_dma_process(&creq->req.dma, status);
> - else
> - ret = mv_cesa_ablkcipher_std_process(ablkreq, status);
> + if (ret)
> + return ret;
>  
> - if (ret)
> + dreq = &creq->req.dma;
> + ivsize = crypto_ablkcipher_ivsize(
> +  crypto_ablkcipher_reqtfm(ablkreq));

Sometimes it's better to break the 80-character rule than to do funky
stuff ;).

> + memcpy_fromio(ablkreq->info, dreq->chain.last->data, ivsize);

Re: [PATCH 2/7] crypto: marvell: Check engine is not already running when enabling a req

2016-06-15 Thread Boris Brezillon
On Wed, 15 Jun 2016 21:15:29 +0200
Romain Perier  wrote:

> Adding BUG_ON() macro to be sure that the step operation is not about
> to activate a request on the engine if the corresponding engine is
> already processing a crypto request. This is helpful when the support
> for chaining crypto requests will be added. Instead of hanging the
> system when the engine is in an incoherent state, we add this macro

You don't add the macro, you use it.

> which throws an understandable error.

How about rewording the commit message this way:

"
Add a BUG_ON() call when the driver tries to launch a crypto request
while the engine is still processing the previous one. This replaces
a silent system hang by a verbose kernel panic with the associated
backtrace to let the user know that something went wrong in the CESA
driver.
"

> 
> Signed-off-by: Romain Perier 

Apart from the coding style issue mentioned below,

Acked-by: Boris Brezillon 

> ---
>  drivers/crypto/marvell/cipher.c | 2 ++
>  drivers/crypto/marvell/hash.c   | 2 ++
>  drivers/crypto/marvell/tdma.c   | 2 ++
>  3 files changed, 6 insertions(+)
> 
> diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c
> index dcf1fce..8d0fabb 100644
> --- a/drivers/crypto/marvell/cipher.c
> +++ b/drivers/crypto/marvell/cipher.c
> @@ -106,6 +106,8 @@ static void mv_cesa_ablkcipher_std_step(struct ablkcipher_request *req)
>  
>   mv_cesa_set_int_mask(engine, CESA_SA_INT_ACCEL0_DONE);
>   writel_relaxed(CESA_SA_CFG_PARA_DIS, engine->regs + CESA_SA_CFG);
> + BUG_ON(readl(engine->regs + CESA_SA_CMD)
> +   & CESA_SA_CMD_EN_CESA_SA_ACCL0);

Nit: please put the '&' operator at the end of the first line and
align CESA_SA_CMD_EN_CESA_SA_ACCL0 on the open parenthesis.

	BUG_ON(readl(engine->regs + CESA_SA_CMD) &
	       CESA_SA_CMD_EN_CESA_SA_ACCL0);

>   writel(CESA_SA_CMD_EN_CESA_SA_ACCL0, engine->regs + CESA_SA_CMD);
>  }
>  
> diff --git a/drivers/crypto/marvell/hash.c b/drivers/crypto/marvell/hash.c
> index 7ca2e0f..0fae351 100644
> --- a/drivers/crypto/marvell/hash.c
> +++ b/drivers/crypto/marvell/hash.c
> @@ -237,6 +237,8 @@ static void mv_cesa_ahash_std_step(struct ahash_request *req)
>  
>   mv_cesa_set_int_mask(engine, CESA_SA_INT_ACCEL0_DONE);
>   writel_relaxed(CESA_SA_CFG_PARA_DIS, engine->regs + CESA_SA_CFG);
> + BUG_ON(readl(engine->regs + CESA_SA_CMD)
> +   & CESA_SA_CMD_EN_CESA_SA_ACCL0);

Ditto.

>   writel(CESA_SA_CMD_EN_CESA_SA_ACCL0, engine->regs + CESA_SA_CMD);
>  }
>  
> diff --git a/drivers/crypto/marvell/tdma.c b/drivers/crypto/marvell/tdma.c
> index 7642798..d493714 100644
> --- a/drivers/crypto/marvell/tdma.c
> +++ b/drivers/crypto/marvell/tdma.c
> @@ -53,6 +53,8 @@ void mv_cesa_dma_step(struct mv_cesa_tdma_req *dreq)
>  engine->regs + CESA_SA_CFG);
>   writel_relaxed(dreq->chain.first->cur_dma,
>  engine->regs + CESA_TDMA_NEXT_ADDR);
> + BUG_ON(readl(engine->regs + CESA_SA_CMD)
> +   & CESA_SA_CMD_EN_CESA_SA_ACCL0);

Ditto.

>   writel(CESA_SA_CMD_EN_CESA_SA_ACCL0, engine->regs + CESA_SA_CMD);
>  }
>  



-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com


Re: [PATCH 1/7] crypto: marvell: Add a macro constant for the size of the crypto queue

2016-06-15 Thread Boris Brezillon
On Wed, 15 Jun 2016 21:15:28 +0200
Romain Perier  wrote:

> Adding a macro constant to be used for the size of the crypto queue,
> instead of using a numeric value directly. It will be easier to
> maintain in case we add more than one crypto queue of the same size.
> 
> Signed-off-by: Romain Perier 

Acked-by: Boris Brezillon 

> ---
>  drivers/crypto/marvell/cesa.c | 5 -
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
> index 056a754..fb403e1 100644
> --- a/drivers/crypto/marvell/cesa.c
> +++ b/drivers/crypto/marvell/cesa.c
> @@ -31,6 +31,9 @@
>  
>  #include "cesa.h"
>  
> +/* Limit of the crypto queue before reaching the backlog */
> +#define CESA_CRYPTO_DEFAULT_MAX_QLEN 50
> +
>  static int allhwsupport = !IS_ENABLED(CONFIG_CRYPTO_DEV_MV_CESA);
>  module_param_named(allhwsupport, allhwsupport, int, 0444);
>  MODULE_PARM_DESC(allhwsupport, "Enable support for all hardware (even it if overlaps with the mv_cesa driver)");
> @@ -416,7 +419,7 @@ static int mv_cesa_probe(struct platform_device *pdev)
>   return -ENOMEM;
>  
>   spin_lock_init(&cesa->lock);
> - crypto_init_queue(&cesa->queue, 50);
> + crypto_init_queue(&cesa->queue, CESA_CRYPTO_DEFAULT_MAX_QLEN);
>   res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
>   cesa->regs = devm_ioremap_resource(dev, res);
>   if (IS_ERR(cesa->regs))



-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com


[PATCH 7/7] crypto: marvell: Add support for chaining crypto requests in TDMA mode

2016-06-15 Thread Romain Perier
The Cryptographic Engines and Security Accelerators (CESA) supports the
Multi-Packet Chain Mode. With this mode enabled, multiple tdma requests
can be chained and processed by the hardware without software
interferences. This mode was already activated, however the crypto
requests were not chained together. By doing so, we reduce significantly
the number of IRQs. Instead of being interrupted at the end of each
crypto request, we are interrupted at the end of the last cryptographic
request processed by the engine.

This commit refactors the code, changes the code architecture and
adds the required data structures to chain cryptographic requests
together before sending them to an engine.

Signed-off-by: Romain Perier 
---
 drivers/crypto/marvell/cesa.c   | 117 +++-
 drivers/crypto/marvell/cesa.h   |  38 -
 drivers/crypto/marvell/cipher.c |   3 +-
 drivers/crypto/marvell/hash.c   |   9 +++-
 drivers/crypto/marvell/tdma.c   |  81 
 5 files changed, 218 insertions(+), 30 deletions(-)

diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
index f9e6688..33411f6 100644
--- a/drivers/crypto/marvell/cesa.c
+++ b/drivers/crypto/marvell/cesa.c
@@ -32,7 +32,7 @@
 #include "cesa.h"
 
 /* Limit of the crypto queue before reaching the backlog */
-#define CESA_CRYPTO_DEFAULT_MAX_QLEN 50
+#define CESA_CRYPTO_DEFAULT_MAX_QLEN 128
 
 static int allhwsupport = !IS_ENABLED(CONFIG_CRYPTO_DEV_MV_CESA);
 module_param_named(allhwsupport, allhwsupport, int, 0444);
@@ -40,23 +40,83 @@ MODULE_PARM_DESC(allhwsupport, "Enable support for all hardware (even it if over
 
 struct mv_cesa_dev *cesa_dev;
 
-static void mv_cesa_dequeue_req_unlocked(struct mv_cesa_engine *engine)
+struct crypto_async_request *mv_cesa_dequeue_req_locked(
+   struct mv_cesa_engine *engine, struct crypto_async_request **backlog)
+{
+   struct crypto_async_request *req;
+
+   *backlog = crypto_get_backlog(&engine->queue);
+   req = crypto_dequeue_request(&engine->queue);
+
+   if (!req)
+   return NULL;
+
+   return req;
+}
+
+static void mv_cesa_rearm_engine(struct mv_cesa_engine *engine)
 {
struct crypto_async_request *req, *backlog;
struct mv_cesa_ctx *ctx;
 
-   backlog = crypto_get_backlog(&engine->queue);
-   req = crypto_dequeue_request(&engine->queue);
-   engine->req = req;
 
+   spin_lock_bh(&engine->lock);
+   if (engine->req)
+   goto out_unlock;
+
+   req = mv_cesa_dequeue_req_locked(engine, &backlog);
if (!req)
-   return;
+   goto out_unlock;
+
+   engine->req = req;
+   spin_unlock_bh(&engine->lock);
 
if (backlog)
backlog->complete(backlog, -EINPROGRESS);
 
ctx = crypto_tfm_ctx(req->tfm);
ctx->ops->step(req);
+   return;
+out_unlock:
+   spin_unlock_bh(&engine->lock);
+}
+
+static int mv_cesa_std_process(struct mv_cesa_engine *engine, u32 status)
+{
+   struct crypto_async_request *req;
+   struct mv_cesa_ctx *ctx;
+   int res;
+
+   req = engine->req;
+   ctx = crypto_tfm_ctx(req->tfm);
+   res = ctx->ops->process(req, status);
+
+   if (res == 0) {
+   ctx->ops->complete(req);
+   mv_cesa_engine_enqueue_complete_request(engine, req);
+   } else if (res == -EINPROGRESS) {
+   ctx->ops->step(req);
+   } else {
+   ctx->ops->complete(req);
+   }
+
+   return res;
+}
+
+static int mv_cesa_int_process(struct mv_cesa_engine *engine, u32 status)
+{
+   if (engine->chain.first && engine->chain.last)
+   return mv_cesa_tdma_process(engine, status);
+   return mv_cesa_std_process(engine, status);
+}
+
+static inline void mv_cesa_complete_req(struct mv_cesa_ctx *ctx,
+   struct crypto_async_request *req, int res)
+{
+   ctx->ops->cleanup(req);
+   local_bh_disable();
+   req->complete(req, res);
+   local_bh_enable();
 }
 
 static irqreturn_t mv_cesa_int(int irq, void *priv)
@@ -83,26 +143,31 @@ static irqreturn_t mv_cesa_int(int irq, void *priv)
writel(~status, engine->regs + CESA_SA_FPGA_INT_STATUS);
writel(~status, engine->regs + CESA_SA_INT_STATUS);
 
+   /* Process fetched requests */
+   res = mv_cesa_int_process(engine, status & mask);
ret = IRQ_HANDLED;
+
spin_lock_bh(&engine->lock);
req = engine->req;
+   if (res != -EINPROGRESS)
+   engine->req = NULL;
spin_unlock_bh(&engine->lock);
-   if (req) {
-   ctx = crypto_tfm_ctx(req->tfm);
-   res = ctx->ops->process(req, status & mask);
-   if (res != -EINPROGRESS) {
-   spin_lock_bh(&engine->lock);
-   engine->req = NULL;

[PATCH 0/7] Chain crypto requests together at the DMA level

2016-06-15 Thread Romain Perier
The Cryptographic Engines and Security Accelerators (CESA) supports
the TDMA chained mode support. When this mode is enabled and crypto
requests are chained at the DMA level, multiple crypto requests can be
handled by the hardware engine without requiring any software
intervention. This approach limits the number of interrupts generated
by the engines thus improving its throughput and making the whole system
behave nicely under heavy crypto load.

Benchmarking results with dmcrypt
=================================

            I/O read    I/O write
Before      81.7 MB/s   31.7 MB/s
After       129  MB/s   39.8 MB/s

Improvement +57.8 %     +25.5 %



Romain Perier (7):
  crypto: marvell: Add a macro constant for the size of the crypto queue
  crypto: marvell: Check engine is not already running when enabling a
req
  crypto: marvell: Copy IV vectors by DMA transfers for acipher requests
  crypto: marvell: Moving the tdma chain out of mv_cesa_tdma_req
  crypto: marvell: Adding a complete operation for async requests
  crypto: marvell: Adding load balancing between engines
  crypto: marvell: Add support for chaining crypto requests in TDMA mode

 drivers/crypto/marvell/cesa.c   | 142 ++--
 drivers/crypto/marvell/cesa.h   | 103 +++--
 drivers/crypto/marvell/cipher.c | 141 +++
 drivers/crypto/marvell/hash.c   | 126 +--
 drivers/crypto/marvell/tdma.c   | 120 +++--
 5 files changed, 452 insertions(+), 180 deletions(-)

-- 
2.7.4



[PATCH 6/7] crypto: marvell: Adding load balancing between engines

2016-06-15 Thread Romain Perier
This commit adds support for fine-grained load balancing on
multi-engine IPs. The engine is pre-selected based on its current load
and on the weight of the crypto request that is about to be processed.
The global crypto queue is also moved to each engine. These changes are
useful for preparing the code to support TDMA chaining between crypto
requests, because each tdma chain will be handled per engine. By using
a crypto queue per engine, we make sure that we keep the state of the
tdma chain synchronized with the crypto queue. We also reduce contention
on 'cesa_dev->lock' and improve parallelism.

Signed-off-by: Romain Perier 
---
 drivers/crypto/marvell/cesa.c   | 30 +--
 drivers/crypto/marvell/cesa.h   | 26 +++--
 drivers/crypto/marvell/cipher.c | 59 ++---
 drivers/crypto/marvell/hash.c   | 65 +++--
 4 files changed, 97 insertions(+), 83 deletions(-)

diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
index af96426..f9e6688 100644
--- a/drivers/crypto/marvell/cesa.c
+++ b/drivers/crypto/marvell/cesa.c
@@ -45,11 +45,9 @@ static void mv_cesa_dequeue_req_unlocked(struct mv_cesa_engine *engine)
struct crypto_async_request *req, *backlog;
struct mv_cesa_ctx *ctx;
 
-   spin_lock_bh(&cesa_dev->lock);
-   backlog = crypto_get_backlog(&cesa_dev->queue);
-   req = crypto_dequeue_request(&cesa_dev->queue);
+   backlog = crypto_get_backlog(&engine->queue);
+   req = crypto_dequeue_request(&engine->queue);
engine->req = req;
-   spin_unlock_bh(&cesa_dev->lock);
 
if (!req)
return;
@@ -58,7 +56,6 @@ static void mv_cesa_dequeue_req_unlocked(struct mv_cesa_engine *engine)
backlog->complete(backlog, -EINPROGRESS);
 
ctx = crypto_tfm_ctx(req->tfm);
-   ctx->ops->prepare(req, engine);
ctx->ops->step(req);
 }
 
@@ -116,21 +113,19 @@ int mv_cesa_queue_req(struct crypto_async_request *req,
  struct mv_cesa_req *creq)
 {
int ret;
-   int i;
+   struct mv_cesa_engine *engine = creq->engine;
 
-   spin_lock_bh(&cesa_dev->lock);
-   ret = crypto_enqueue_request(&cesa_dev->queue, req);
-   spin_unlock_bh(&cesa_dev->lock);
+   spin_lock_bh(&engine->lock);
+   ret = crypto_enqueue_request(&engine->queue, req);
+   spin_unlock_bh(&engine->lock);
 
if (ret != -EINPROGRESS)
return ret;
 
-   for (i = 0; i < cesa_dev->caps->nengines; i++) {
-   spin_lock_bh(&cesa_dev->engines[i].lock);
-   if (!cesa_dev->engines[i].req)
-   mv_cesa_dequeue_req_unlocked(&cesa_dev->engines[i]);
-   spin_unlock_bh(&cesa_dev->engines[i].lock);
-   }
+   spin_lock_bh(&engine->lock);
+   if (!engine->req)
+   mv_cesa_dequeue_req_unlocked(engine);
+   spin_unlock_bh(&engine->lock);
 
return -EINPROGRESS;
 }
@@ -425,7 +420,7 @@ static int mv_cesa_probe(struct platform_device *pdev)
return -ENOMEM;
 
spin_lock_init(&cesa->lock);
-   crypto_init_queue(&cesa->queue, CESA_CRYPTO_DEFAULT_MAX_QLEN);
+
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
cesa->regs = devm_ioremap_resource(dev, res);
if (IS_ERR(cesa->regs))
@@ -498,6 +493,9 @@ static int mv_cesa_probe(struct platform_device *pdev)
engine);
if (ret)
goto err_cleanup;
+
+   crypto_init_queue(&engine->queue, CESA_CRYPTO_DEFAULT_MAX_QLEN);
+   atomic_set(&engine->load, 0);
}
 
cesa_dev = cesa;
diff --git a/drivers/crypto/marvell/cesa.h b/drivers/crypto/marvell/cesa.h
index 32de08b..5626aa7 100644
--- a/drivers/crypto/marvell/cesa.h
+++ b/drivers/crypto/marvell/cesa.h
@@ -400,7 +400,6 @@ struct mv_cesa_dev_dma {
  * @regs:  device registers
  * @sram_size: usable SRAM size
  * @lock:  device lock
- * @queue: crypto request queue
  * @engines:   array of engines
  * @dma:   dma pools
  *
@@ -412,7 +411,6 @@ struct mv_cesa_dev {
struct device *dev;
unsigned int sram_size;
spinlock_t lock;
-   struct crypto_queue queue;
struct mv_cesa_engine *engines;
struct mv_cesa_dev_dma *dma;
 };
@@ -431,6 +429,8 @@ struct mv_cesa_dev {
  * @int_mask:  interrupt mask cache
  * @pool:  memory pool pointing to the memory region reserved in
  * SRAM
+ * @queue: fifo of the pending crypto requests
+ * @load:  engine load counter, useful for load balancing
  *
  * Structure storing CESA engine information.
  */
@@ -446,6 +446,8 @@ struct mv_cesa_engine {
size_t max_req_len;
u32 int_mask;
struct gen_pool *pool;
+   struct crypto_queue queue;
+   atomic_t load;
 };
 
 /**
@@ -697,6 +6
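
For the "engine is pre-selected based on its current load" part of the
commit message above, the selection can be sketched as follows (the
function name and the exact weighting are illustrative, not quoted from
the patch; only engine->load and caps->nengines appear in the hunks):

static struct mv_cesa_engine *mv_cesa_select_engine(int weight)
{
	struct mv_cesa_engine *selected = NULL;
	int min_load = INT_MAX;
	int i;

	for (i = 0; i < cesa_dev->caps->nengines; i++) {
		struct mv_cesa_engine *engine = &cesa_dev->engines[i];
		int load = atomic_read(&engine->load);

		if (load < min_load) {
			min_load = load;
			selected = engine;
		}
	}

	/*
	 * Account for the request about to be queued; the counter is
	 * decremented again when the request completes.
	 */
	atomic_add(weight, &selected->load);

	return selected;
}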

[PATCH 4/7] crypto: marvell: Moving the tdma chain out of mv_cesa_tdma_req

2016-06-15 Thread Romain Perier
Actually the only way to access the tdma chain is to use the 'req' union
from a mv_cesa_{ablkcipher,ahash}. This will soon become a problem if we
want to handle the TDMA chaining vs standard/non-DMA processing in a
generic way (with generic functions at the cesa.c level detecting
whether the request should be queued at the DMA level or not). Hence the
decision to move the chain field a the mv_cesa_req level at the expense
of adding 2 void * fields to all request contexts (including non-DMA
ones). To limit the overhead, we get rid of the type field, which can
now be deduced from the req->chain.first value.

Signed-off-by: Romain Perier 
---
 drivers/crypto/marvell/cesa.c   |  3 ++-
 drivers/crypto/marvell/cesa.h   | 31 +--
 drivers/crypto/marvell/cipher.c | 40 ++--
 drivers/crypto/marvell/hash.c   | 36 +++-
 drivers/crypto/marvell/tdma.c   |  8 
 5 files changed, 56 insertions(+), 62 deletions(-)

diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
index 93700cd..fe04d1b 100644
--- a/drivers/crypto/marvell/cesa.c
+++ b/drivers/crypto/marvell/cesa.c
@@ -111,7 +111,8 @@ static irqreturn_t mv_cesa_int(int irq, void *priv)
return ret;
 }
 
-int mv_cesa_queue_req(struct crypto_async_request *req)
+int mv_cesa_queue_req(struct crypto_async_request *req,
+ struct mv_cesa_req *creq)
 {
int ret;
int i;
diff --git a/drivers/crypto/marvell/cesa.h b/drivers/crypto/marvell/cesa.h
index 74b84bd..158ff82 100644
--- a/drivers/crypto/marvell/cesa.h
+++ b/drivers/crypto/marvell/cesa.h
@@ -509,21 +509,11 @@ enum mv_cesa_req_type {
 
 /**
  * struct mv_cesa_req - CESA request
- * @type:  request type
  * @engine:engine associated with this request
+ * @chain: list of tdma descriptors associated  with this request
  */
 struct mv_cesa_req {
-   enum mv_cesa_req_type type;
struct mv_cesa_engine *engine;
-};
-
-/**
- * struct mv_cesa_tdma_req - CESA TDMA request
- * @base:  base information
- * @chain: TDMA chain
- */
-struct mv_cesa_tdma_req {
-   struct mv_cesa_req base;
struct mv_cesa_tdma_chain chain;
 };
 
@@ -562,7 +552,6 @@ struct mv_cesa_ablkcipher_std_req {
 struct mv_cesa_ablkcipher_req {
union {
struct mv_cesa_req base;
-   struct mv_cesa_tdma_req dma;
struct mv_cesa_ablkcipher_std_req std;
} req;
int src_nents;
@@ -587,7 +576,6 @@ struct mv_cesa_ahash_std_req {
  * @cache_dma: DMA address of the cache buffer
  */
 struct mv_cesa_ahash_dma_req {
-   struct mv_cesa_tdma_req base;
u8 *padding;
dma_addr_t padding_dma;
u8 *cache;
@@ -625,6 +613,12 @@ struct mv_cesa_ahash_req {
 
 extern struct mv_cesa_dev *cesa_dev;
 
+static inline enum mv_cesa_req_type
+mv_cesa_req_get_type(struct mv_cesa_req *req)
+{
+   return req->chain.first ? CESA_DMA_REQ : CESA_STD_REQ;
+}
+
 static inline void mv_cesa_update_op_cfg(struct mv_cesa_op_ctx *op,
 u32 cfg, u32 mask)
 {
@@ -697,7 +691,8 @@ static inline bool mv_cesa_mac_op_is_first_frag(const struct mv_cesa_op_ctx *op)
CESA_SA_DESC_CFG_FIRST_FRAG;
 }
 
-int mv_cesa_queue_req(struct crypto_async_request *req);
+int mv_cesa_queue_req(struct crypto_async_request *req,
+ struct mv_cesa_req *creq);
 
 /*
  * Helper function that indicates whether a crypto request needs to be
@@ -767,9 +762,9 @@ static inline bool mv_cesa_req_dma_iter_next_op(struct mv_cesa_dma_iter *iter)
return iter->op_len;
 }
 
-void mv_cesa_dma_step(struct mv_cesa_tdma_req *dreq);
+void mv_cesa_dma_step(struct mv_cesa_req *dreq);
 
-static inline int mv_cesa_dma_process(struct mv_cesa_tdma_req *dreq,
+static inline int mv_cesa_dma_process(struct mv_cesa_req *dreq,
  u32 status)
 {
if (!(status & CESA_SA_INT_ACC0_IDMA_DONE))
@@ -781,10 +776,10 @@ static inline int mv_cesa_dma_process(struct mv_cesa_tdma_req *dreq,
return 0;
 }
 
-void mv_cesa_dma_prepare(struct mv_cesa_tdma_req *dreq,
+void mv_cesa_dma_prepare(struct mv_cesa_req *dreq,
 struct mv_cesa_engine *engine);
+void mv_cesa_dma_cleanup(struct mv_cesa_req *dreq);
 
-void mv_cesa_dma_cleanup(struct mv_cesa_tdma_req *dreq);
 
 static inline void
 mv_cesa_tdma_desc_iter_init(struct mv_cesa_tdma_chain *chain)
diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c
index f42620e..15d2c5a 100644
--- a/drivers/crypto/marvell/cipher.c
+++ b/drivers/crypto/marvell/cipher.c
@@ -70,14 +70,14 @@ mv_cesa_ablkcipher_dma_cleanup(struct ablkcipher_request *req)
dma_unmap_sg(cesa_dev->dev, req->src, creq->src_nents,
 DMA_BIDIRECTIONAL);
}
-   mv_cesa_dma_cleanup(&creq->req.dma);
+   mv_cesa_dma_cleanup(&creq->req.base);
 }

[PATCH 5/7] crypto: marvell: Adding a complete operation for async requests

2016-06-15 Thread Romain Perier
So far, the 'process' operation was used to check if the current request
was correctly handled by the engine, if it was the case it copied
information from the SRAM to the main memory. Now, we split this
operation. We keep the 'process' operation, which still checks if the
request was correctly handled by the engine or not, then we add a new
operation for completion. The 'complete' method copies the content of
the SRAM to memory. This will soon become useful if we want to call
the process and the complete operations from different locations
depending on the type of the request (different cleanup logic).

Signed-off-by: Romain Perier 
---
 drivers/crypto/marvell/cesa.c   |  1 +
 drivers/crypto/marvell/cesa.h   |  3 +++
 drivers/crypto/marvell/cipher.c | 47 -
 drivers/crypto/marvell/hash.c   | 22 ++-
 4 files changed, 44 insertions(+), 29 deletions(-)

diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
index fe04d1b..af96426 100644
--- a/drivers/crypto/marvell/cesa.c
+++ b/drivers/crypto/marvell/cesa.c
@@ -98,6 +98,7 @@ static irqreturn_t mv_cesa_int(int irq, void *priv)
engine->req = NULL;
mv_cesa_dequeue_req_unlocked(engine);
spin_unlock_bh(&engine->lock);
+   ctx->ops->complete(req);
ctx->ops->cleanup(req);
local_bh_disable();
req->complete(req, res);
diff --git a/drivers/crypto/marvell/cesa.h b/drivers/crypto/marvell/cesa.h
index 158ff82..32de08b 100644
--- a/drivers/crypto/marvell/cesa.h
+++ b/drivers/crypto/marvell/cesa.h
@@ -456,6 +456,8 @@ struct mv_cesa_engine {
  * code)
  * @step:  launch the crypto operation on the next chunk
  * @cleanup:   cleanup the crypto request (release associated data)
+ * @complete:  complete the request, i.e copy result from sram or contexts
+ * when it is needed.
  */
 struct mv_cesa_req_ops {
void (*prepare)(struct crypto_async_request *req,
@@ -463,6 +465,7 @@ struct mv_cesa_req_ops {
int (*process)(struct crypto_async_request *req, u32 status);
void (*step)(struct crypto_async_request *req);
void (*cleanup)(struct crypto_async_request *req);
+   void (*complete)(struct crypto_async_request *req);
 };
 
 /**
diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c
index 15d2c5a..fbaae2f 100644
--- a/drivers/crypto/marvell/cipher.c
+++ b/drivers/crypto/marvell/cipher.c
@@ -118,7 +118,6 @@ static int mv_cesa_ablkcipher_std_process(struct ablkcipher_request *req,
struct mv_cesa_ablkcipher_std_req *sreq = &creq->req.std;
struct mv_cesa_engine *engine = sreq->base.engine;
size_t len;
-   unsigned int ivsize;
 
len = sg_pcopy_from_buffer(req->dst, creq->dst_nents,
   engine->sram + CESA_SA_DATA_SRAM_OFFSET,
@@ -128,10 +127,6 @@ static int mv_cesa_ablkcipher_std_process(struct ablkcipher_request *req,
if (sreq->offset < req->nbytes)
return -EINPROGRESS;
 
-   ivsize = crypto_ablkcipher_ivsize(crypto_ablkcipher_reqtfm(req));
-   memcpy_fromio(req->info,
- engine->sram + CESA_SA_CRYPT_IV_SRAM_OFFSET, ivsize);
-
return 0;
 }
 
@@ -141,21 +136,9 @@ static int mv_cesa_ablkcipher_process(struct crypto_async_request *req,
struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req);
struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(ablkreq);
 
-   if (mv_cesa_req_get_type(&creq->req.base) == CESA_DMA_REQ) {
-   int ret;
-   struct mv_cesa_req *basereq;
-   unsigned int ivsize;
-
-   ret = mv_cesa_dma_process(&creq->req.base, status);
-   if (ret)
-   return ret;
+   if (mv_cesa_req_get_type(&creq->req.base) == CESA_DMA_REQ)
+   return mv_cesa_dma_process(&creq->req.base, status);
 
-   basereq = &creq->req.base;
-   ivsize = crypto_ablkcipher_ivsize(
-crypto_ablkcipher_reqtfm(ablkreq));
-   memcpy_fromio(ablkreq->info, basereq->chain.last->data, ivsize);
-   return ret;
-   }
return mv_cesa_ablkcipher_std_process(ablkreq, status);
 }
 
@@ -197,6 +180,7 @@ static inline void mv_cesa_ablkcipher_prepare(struct crypto_async_request *req,
 {
struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req);
struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(ablkreq);
+
creq->req.base.engine = engine;
 
if (mv_cesa_req_get_type(&creq->req.base) == CESA_DMA_REQ)
@@ -213,11 +197,36 @@ mv_cesa_ablkcipher_req_cleanup(struct crypto_async_request *req)
mv_cesa_ablkcipher_cleanup(ablkreq);
 }
 
+static v

[PATCH 1/7] crypto: marvell: Add a macro constant for the size of the crypto queue

2016-06-15 Thread Romain Perier
Adding a macro constant to be used for the size of the crypto queue,
instead of using a numeric value directly. It will be easier to
maintain in case we add more than one crypto queue of the same size.

Signed-off-by: Romain Perier 
---
 drivers/crypto/marvell/cesa.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
index 056a754..fb403e1 100644
--- a/drivers/crypto/marvell/cesa.c
+++ b/drivers/crypto/marvell/cesa.c
@@ -31,6 +31,9 @@
 
 #include "cesa.h"
 
+/* Limit of the crypto queue before reaching the backlog */
+#define CESA_CRYPTO_DEFAULT_MAX_QLEN 50
+
 static int allhwsupport = !IS_ENABLED(CONFIG_CRYPTO_DEV_MV_CESA);
 module_param_named(allhwsupport, allhwsupport, int, 0444);
MODULE_PARM_DESC(allhwsupport, "Enable support for all hardware (even it if overlaps with the mv_cesa driver)");
@@ -416,7 +419,7 @@ static int mv_cesa_probe(struct platform_device *pdev)
return -ENOMEM;
 
spin_lock_init(&cesa->lock);
-   crypto_init_queue(&cesa->queue, 50);
+   crypto_init_queue(&cesa->queue, CESA_CRYPTO_DEFAULT_MAX_QLEN);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
cesa->regs = devm_ioremap_resource(dev, res);
if (IS_ERR(cesa->regs))
-- 
2.7.4



[PATCH 2/7] crypto: marvell: Check engine is not already running when enabling a req

2016-06-15 Thread Romain Perier
Adding BUG_ON() macro to be sure that the step operation is not about
to activate a request on the engine if the corresponding engine is
already processing a crypto request. This is helpful when the support
for chaining crypto requests will be added. Instead of hanging the
system when the engine is in an incoherent state, we add this macro
which throws an understandable error.

Signed-off-by: Romain Perier 
---
 drivers/crypto/marvell/cipher.c | 2 ++
 drivers/crypto/marvell/hash.c   | 2 ++
 drivers/crypto/marvell/tdma.c   | 2 ++
 3 files changed, 6 insertions(+)

diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c
index dcf1fce..8d0fabb 100644
--- a/drivers/crypto/marvell/cipher.c
+++ b/drivers/crypto/marvell/cipher.c
@@ -106,6 +106,8 @@ static void mv_cesa_ablkcipher_std_step(struct ablkcipher_request *req)
 
mv_cesa_set_int_mask(engine, CESA_SA_INT_ACCEL0_DONE);
writel_relaxed(CESA_SA_CFG_PARA_DIS, engine->regs + CESA_SA_CFG);
+   BUG_ON(readl(engine->regs + CESA_SA_CMD)
+ & CESA_SA_CMD_EN_CESA_SA_ACCL0);
writel(CESA_SA_CMD_EN_CESA_SA_ACCL0, engine->regs + CESA_SA_CMD);
 }
 
diff --git a/drivers/crypto/marvell/hash.c b/drivers/crypto/marvell/hash.c
index 7ca2e0f..0fae351 100644
--- a/drivers/crypto/marvell/hash.c
+++ b/drivers/crypto/marvell/hash.c
@@ -237,6 +237,8 @@ static void mv_cesa_ahash_std_step(struct ahash_request *req)
 
mv_cesa_set_int_mask(engine, CESA_SA_INT_ACCEL0_DONE);
writel_relaxed(CESA_SA_CFG_PARA_DIS, engine->regs + CESA_SA_CFG);
+   BUG_ON(readl(engine->regs + CESA_SA_CMD)
+ & CESA_SA_CMD_EN_CESA_SA_ACCL0);
writel(CESA_SA_CMD_EN_CESA_SA_ACCL0, engine->regs + CESA_SA_CMD);
 }
 
diff --git a/drivers/crypto/marvell/tdma.c b/drivers/crypto/marvell/tdma.c
index 7642798..d493714 100644
--- a/drivers/crypto/marvell/tdma.c
+++ b/drivers/crypto/marvell/tdma.c
@@ -53,6 +53,8 @@ void mv_cesa_dma_step(struct mv_cesa_tdma_req *dreq)
   engine->regs + CESA_SA_CFG);
writel_relaxed(dreq->chain.first->cur_dma,
   engine->regs + CESA_TDMA_NEXT_ADDR);
+   BUG_ON(readl(engine->regs + CESA_SA_CMD)
+ & CESA_SA_CMD_EN_CESA_SA_ACCL0);
writel(CESA_SA_CMD_EN_CESA_SA_ACCL0, engine->regs + CESA_SA_CMD);
 }
 
-- 
2.7.4



[PATCH 3/7] crypto: marvell: Copy IV vectors by DMA transfers for acipher requests

2016-06-15 Thread Romain Perier
Adding a TDMA descriptor at the end of the request for copying the
output IV vector via a DMA transfer. This is required for processing
cipher requests asynchroniously in chained mode, otherwise the content
of the IV vector will be overwriten for each new finished request.

Signed-off-by: Romain Perier 
---
 drivers/crypto/marvell/cesa.c   |  4 
 drivers/crypto/marvell/cesa.h   |  5 +
 drivers/crypto/marvell/cipher.c | 40 +++-
 drivers/crypto/marvell/tdma.c   | 29 +
 4 files changed, 65 insertions(+), 13 deletions(-)

diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
index fb403e1..93700cd 100644
--- a/drivers/crypto/marvell/cesa.c
+++ b/drivers/crypto/marvell/cesa.c
@@ -312,6 +312,10 @@ static int mv_cesa_dev_dma_init(struct mv_cesa_dev *cesa)
if (!dma->padding_pool)
return -ENOMEM;
 
+   dma->iv_pool = dmam_pool_create("cesa_iv", dev, 16, 1, 0);
+   if (!dma->iv_pool)
+   return -ENOMEM;
+
cesa->dma = dma;
 
return 0;
diff --git a/drivers/crypto/marvell/cesa.h b/drivers/crypto/marvell/cesa.h
index 74071e4..74b84bd 100644
--- a/drivers/crypto/marvell/cesa.h
+++ b/drivers/crypto/marvell/cesa.h
@@ -275,6 +275,7 @@ struct mv_cesa_op_ctx {
 #define CESA_TDMA_DUMMY	0
 #define CESA_TDMA_DATA 1
 #define CESA_TDMA_OP   2
+#define CESA_TDMA_IV   4
 
 /**
  * struct mv_cesa_tdma_desc - TDMA descriptor
@@ -390,6 +391,7 @@ struct mv_cesa_dev_dma {
struct dma_pool *op_pool;
struct dma_pool *cache_pool;
struct dma_pool *padding_pool;
+   struct dma_pool *iv_pool;
 };
 
 /**
@@ -790,6 +792,9 @@ mv_cesa_tdma_desc_iter_init(struct mv_cesa_tdma_chain *chain)
memset(chain, 0, sizeof(*chain));
 }
 
+int mv_cesa_dma_add_iv_op(struct mv_cesa_tdma_chain *chain, dma_addr_t src,
+ u32 size, u32 flags, gfp_t gfp_flags);
+
 struct mv_cesa_op_ctx *mv_cesa_dma_add_op(struct mv_cesa_tdma_chain *chain,
const struct mv_cesa_op_ctx *op_templ,
bool skip_ctx,
diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c
index 8d0fabb..f42620e 100644
--- a/drivers/crypto/marvell/cipher.c
+++ b/drivers/crypto/marvell/cipher.c
@@ -118,6 +118,7 @@ static int mv_cesa_ablkcipher_std_process(struct ablkcipher_request *req,
struct mv_cesa_ablkcipher_std_req *sreq = &creq->req.std;
struct mv_cesa_engine *engine = sreq->base.engine;
size_t len;
+   unsigned int ivsize;
 
len = sg_pcopy_from_buffer(req->dst, creq->dst_nents,
   engine->sram + CESA_SA_DATA_SRAM_OFFSET,
@@ -127,6 +128,10 @@ static int mv_cesa_ablkcipher_std_process(struct ablkcipher_request *req,
if (sreq->offset < req->nbytes)
return -EINPROGRESS;
 
+   ivsize = crypto_ablkcipher_ivsize(crypto_ablkcipher_reqtfm(req));
+   memcpy_fromio(req->info,
+ engine->sram + CESA_SA_CRYPT_IV_SRAM_OFFSET, ivsize);
+
return 0;
 }
 
@@ -135,23 +140,23 @@ static int mv_cesa_ablkcipher_process(struct crypto_async_request *req,
 {
struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req);
struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(ablkreq);
-   struct mv_cesa_ablkcipher_std_req *sreq = &creq->req.std;
-   struct mv_cesa_engine *engine = sreq->base.engine;
-   int ret;
 
-   if (creq->req.base.type == CESA_DMA_REQ)
+   if (creq->req.base.type == CESA_DMA_REQ) {
+   int ret;
+   struct mv_cesa_tdma_req *dreq;
+   unsigned int ivsize;
+
ret = mv_cesa_dma_process(&creq->req.dma, status);
-   else
-   ret = mv_cesa_ablkcipher_std_process(ablkreq, status);
+   if (ret)
+   return ret;
 
-   if (ret)
+   dreq = &creq->req.dma;
+   ivsize = crypto_ablkcipher_ivsize(
+crypto_ablkcipher_reqtfm(ablkreq));
+   memcpy_fromio(ablkreq->info, dreq->chain.last->data, ivsize);
return ret;
-
-   memcpy_fromio(ablkreq->info,
- engine->sram + CESA_SA_CRYPT_IV_SRAM_OFFSET,
-		      crypto_ablkcipher_ivsize(crypto_ablkcipher_reqtfm(ablkreq)));
-
-   return 0;
+   }
+   return mv_cesa_ablkcipher_std_process(ablkreq, status);
 }
 
 static void mv_cesa_ablkcipher_step(struct crypto_async_request *req)
@@ -302,6 +307,7 @@ static int mv_cesa_ablkcipher_dma_req_init(struct ablkcipher_request *req,
struct mv_cesa_tdma_chain chain;
bool skip_ctx = false;
int ret;
+   unsigned int ivsize;
 
dreq->base.type = CESA_DMA_REQ;
dreq->chain.first = NULL;

Re: [PATCH v4 0/5] /dev/random - a new approach

2016-06-15 Thread Stephan Mueller
Am Mittwoch, 15. Juni 2016, 18:17:43 schrieb David Jaša:

Hi David,

> Hello Stephan,
> 
> Did you consider blocking urandom output or returning error until
> initialized? Given the speed of initialization you report, it shouldn't
> break any userspace apps while making sure that nobody uses predictable
> pseudorandom numbers.

My LRNG will definitely be exercised at the beginning of the initramfs boot,
before it is fully seeded. As these days the initramfs is driven by systemd,
which always pulls from /dev/urandom, we cannot block, as that would block
systemd. In Ted's last patch, he mentioned that he had tried to make
/dev/urandom block, which caused user-space pain.

But if you use the getrandom system call, it works like /dev/urandom but 
blocks until the DRBG behind /dev/urandom is fully initialized.
> 
> I was considering asking for patch (or even trying to write it myself)
> to make current urandom block/fail when not initialized but that would
> surely have to be off by default over "never break userspace" rule (even
> if it means way too easy security problem with both random and urandom).
> The properties of your urandom implementation make this point moot and
> could bring the random/urandom wars to an end.

That patch unfortunately will not work. But if you are interested in that 
blocking /dev/urandom behavior for your application, use getrandom.
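
For reference, a minimal userspace sketch of that pattern, calling the
syscall directly (error handling elided; at the time glibc had no
getrandom() wrapper, so the syscall(2) form is the portable one):

#include <sys/syscall.h>
#include <unistd.h>

/* flags == 0: urandom-like output, but the call blocks until the
 * DRBG behind /dev/urandom is fully initialized. */
static ssize_t get_seed(void *buf, size_t len)
{
	return syscall(SYS_getrandom, buf, len, 0);
}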

> 
> Best Regards,
> 
> David Jaša


Ciao
Stephan


Re: [PATCH v4 0/5] /dev/random - a new approach

2016-06-15 Thread David Jaša
Hello Stephan,

Did you consider blocking urandom output or returning error until
initialized? Given the speed of initialization you report, it shouldn't
break any userspace apps while making sure that nobody uses predictable
pseudorandom numbers.

I was considering asking for patch (or even trying to write it myself)
to make current urandom block/fail when not initialized but that would
surely have to be off by default over "never break userspace" rule (even
if it means way too easy security problem with both random and urandom).
The properties of your urandom implementation make this point moot and
could bring the random/urandom wars to an end.

Best Regards,

David Jaša



Re: [RESEND PATCH 0/2] hw_random: Add Amlogic Meson SoCs Random Generator driver

2016-06-15 Thread Kevin Hilman
Neil Armstrong  writes:

> NOTE: This is a resend of the DT Bindings and DTSI patches based on the 
> Amlogic DT 64bit
> GIT pull request from Kevin Hilman at [1].
>
> Changes since v2 at 
> http://lkml.kernel.org/r/1465546915-24229-1-git-send-email-narmstr...@baylibre.com
>  :
> - Move rng peripheral node into periphs simple-bus node

Thanks for the update.

Applied to the dt64 branch of the amlogic tree.

Kevin


[PATCH v8 0/3] crypto: caam - add support for RSA algorithm

2016-06-15 Thread Tudor Ambarus
Changes in v8:
- store raw keys on stack
- use d_sz instead of n_sz for RSA private exponent
- add caam_read_raw_data function for reading RSA modulus raw byte stream
  as a positive integer. The function updates the n_sz byte length too.
  Needed because the decryption descriptor uses the RSA modulus length as
  decryption output length. The accelerator will try to write n_sz bytes
  to the output SGT, resulting in an SGT overflow if the RSA modulus contains
  leading zeros.
- add caam_rsa_check_key_length function. Maximum supported modulus size is
  4096 bits. Fallback mechanism to be added after removing
  the (same) key length constraint from software implementation.

Changes in v7:
- sync with ASN.1 parser

Changes in v6:
- write descriptor PDB fields with inline append
- move Protocol Data Block (pdb) structures to pdb.h
- move setting of PDB fields in new functions
- unmap sec4_sg_dma on done callback
- remove redundant clean code on error path
- fix doc typos

Changes in v5:
- sync with ASN.1 parser

Changes in v4:
- sync with ASN.1 parser

Changes in v3:
- sync with ASN.1 parser

Changes in v2:
- fix memory leaks on error path
- rename struct akcipher_alg rsa to caam_rsa

Tudor Ambarus (3):
  crypto: scatterwak - Add scatterwalk_sg_copychunks
  crypto: scatterwalk - export scatterwalk_pagedone
  crypto: caam - add support for RSA algorithm

 crypto/scatterwalk.c  |  31 +-
 drivers/crypto/caam/Kconfig   |  12 +
 drivers/crypto/caam/Makefile  |   4 +
 drivers/crypto/caam/caampkc.c | 693 ++
 drivers/crypto/caam/caampkc.h |  70 
 drivers/crypto/caam/compat.h  |   3 +
 drivers/crypto/caam/desc.h|   2 +
 drivers/crypto/caam/desc_constr.h |   7 +
 drivers/crypto/caam/pdb.h |  51 ++-
 drivers/crypto/caam/pkc_desc.c|  36 ++
 include/crypto/scatterwalk.h  |   4 +
 11 files changed, 910 insertions(+), 3 deletions(-)
 create mode 100644 drivers/crypto/caam/caampkc.c
 create mode 100644 drivers/crypto/caam/caampkc.h
 create mode 100644 drivers/crypto/caam/pkc_desc.c

-- 
1.8.3.1



Re: [PATCH 5/7] random: replace non-blocking pool with a Chacha20-based CRNG

2016-06-15 Thread Herbert Xu
On Mon, Jun 13, 2016 at 11:48:37AM -0400, Theodore Ts'o wrote:
> The CRNG is faster, and we don't pretend to track entropy usage in the
> CRNG any more.
> 
> Signed-off-by: Theodore Ts'o 
> ---
>  crypto/chacha20_generic.c |  61 
>  drivers/char/random.c | 374 +-
>  include/crypto/chacha20.h |   1 +
>  lib/Makefile  |   2 +-
>  lib/chacha20.c|  79 ++
>  5 files changed, 353 insertions(+), 164 deletions(-)
>  create mode 100644 lib/chacha20.c
> 
> diff --git a/crypto/chacha20_generic.c b/crypto/chacha20_generic.c
> index da9c899..1cab831 100644
> --- a/crypto/chacha20_generic.c
> +++ b/crypto/chacha20_generic.c

I think you should be accessing this through the crypto API rather
than going direct.  We already have at least one accelerated
implementation of chacha20 and there may well be more of them
in future.  Going through the crypto API means that you will
automatically pick up the best implementation for the platform.
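
For illustration, going through the API could look roughly like this,
a hedged sketch using the skcipher interface (the helper name and the
caller-provided key/iv/scatterlists are assumptions, not actual code
from this patch set):

#include <crypto/skcipher.h>

static int chacha20_via_api(struct scatterlist *src, struct scatterlist *dst,
			    unsigned int nbytes, const u8 *key, u8 *iv)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	int err;

	/* resolves to the best chacha20 registered for this platform */
	tfm = crypto_alloc_skcipher("chacha20", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_skcipher_setkey(tfm, key, 32);
	if (!err) {
		req = skcipher_request_alloc(tfm, GFP_KERNEL);
		if (!req) {
			err = -ENOMEM;
		} else {
			skcipher_request_set_crypt(req, src, dst, nbytes, iv);
			err = crypto_skcipher_encrypt(req);
			skcipher_request_free(req);
		}
	}

	crypto_free_skcipher(tfm);
	return err;
}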

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[PATCH v8 2/3] crypto: scatterwalk - export scatterwalk_pagedone

2016-06-15 Thread Tudor Ambarus
Used in caam driver. Export the symbol since the caam driver
can be built as a module.

Signed-off-by: Tudor Ambarus 
---
 crypto/scatterwalk.c | 5 +++--
 include/crypto/scatterwalk.h | 2 ++
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
index bc3222d..03d34f9 100644
--- a/crypto/scatterwalk.c
+++ b/crypto/scatterwalk.c
@@ -47,8 +47,8 @@ void *scatterwalk_map(struct scatter_walk *walk)
 }
 EXPORT_SYMBOL_GPL(scatterwalk_map);
 
-static void scatterwalk_pagedone(struct scatter_walk *walk, int out,
-unsigned int more)
+void scatterwalk_pagedone(struct scatter_walk *walk, int out,
+ unsigned int more)
 {
if (out) {
struct page *page;
@@ -69,6 +69,7 @@ static void scatterwalk_pagedone(struct scatter_walk *walk, int out,
scatterwalk_start(walk, sg_next(walk->sg));
}
 }
+EXPORT_SYMBOL_GPL(scatterwalk_pagedone);
 
 void scatterwalk_done(struct scatter_walk *walk, int out, int more)
 {
diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index 8b799c5..6535a20 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -89,6 +89,8 @@ void scatterwalk_copychunks(void *buf, struct scatter_walk *walk,
 void scatterwalk_sg_copychunks(struct scatter_walk *dest,
   struct scatter_walk *src, size_t nbytes);
 void *scatterwalk_map(struct scatter_walk *walk);
+void scatterwalk_pagedone(struct scatter_walk *walk, int out,
+ unsigned int more);
 void scatterwalk_done(struct scatter_walk *walk, int out, int more);
 
 void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
-- 
1.8.3.1



[PATCH v8 3/3] crypto: caam - add support for RSA algorithm

2016-06-15 Thread Tudor Ambarus
Add RSA support to caam driver.

Initial author is Yashpal Dutta .

Signed-off-by: Tudor Ambarus 
---
 drivers/crypto/caam/Kconfig   |  12 +
 drivers/crypto/caam/Makefile  |   4 +
 drivers/crypto/caam/caampkc.c | 693 ++
 drivers/crypto/caam/caampkc.h |  70 
 drivers/crypto/caam/compat.h  |   3 +
 drivers/crypto/caam/desc.h|   2 +
 drivers/crypto/caam/desc_constr.h |   7 +
 drivers/crypto/caam/pdb.h |  51 ++-
 drivers/crypto/caam/pkc_desc.c|  36 ++
 9 files changed, 877 insertions(+), 1 deletion(-)
 create mode 100644 drivers/crypto/caam/caampkc.c
 create mode 100644 drivers/crypto/caam/caampkc.h
 create mode 100644 drivers/crypto/caam/pkc_desc.c

diff --git a/drivers/crypto/caam/Kconfig b/drivers/crypto/caam/Kconfig
index ff54c42..64bf302 100644
--- a/drivers/crypto/caam/Kconfig
+++ b/drivers/crypto/caam/Kconfig
@@ -99,6 +99,18 @@ config CRYPTO_DEV_FSL_CAAM_AHASH_API
  To compile this as a module, choose M here: the module
  will be called caamhash.
 
+config CRYPTO_DEV_FSL_CAAM_PKC_API
+tristate "Register public key cryptography implementations with Crypto API"
+depends on CRYPTO_DEV_FSL_CAAM && CRYPTO_DEV_FSL_CAAM_JR
+default y
+select CRYPTO_RSA
+help
+  Selecting this will allow SEC Public key support for RSA.
+  Supported cryptographic primitives: encryption, decryption,
+  signature and verification.
+  To compile this as a module, choose M here: the module
+  will be called caam_pkc.
+
 config CRYPTO_DEV_FSL_CAAM_RNG_API
tristate "Register caam device for hwrng API"
depends on CRYPTO_DEV_FSL_CAAM && CRYPTO_DEV_FSL_CAAM_JR
diff --git a/drivers/crypto/caam/Makefile b/drivers/crypto/caam/Makefile
index 550758a..399ad55 100644
--- a/drivers/crypto/caam/Makefile
+++ b/drivers/crypto/caam/Makefile
@@ -5,11 +5,15 @@ ifeq ($(CONFIG_CRYPTO_DEV_FSL_CAAM_DEBUG), y)
EXTRA_CFLAGS := -DDEBUG
 endif
 
+ccflags-y += -I$(srctree)/crypto
+
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM) += caam.o
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_JR) += caam_jr.o
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API) += caamalg.o
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_AHASH_API) += caamhash.o
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_RNG_API) += caamrng.o
+obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_PKC_API) += caam_pkc.o
 
 caam-objs := ctrl.o
 caam_jr-objs := jr.o key_gen.o error.o
+caam_pkc-y := caampkc.o pkc_desc.o
diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
new file mode 100644
index 000..36d3419
--- /dev/null
+++ b/drivers/crypto/caam/caampkc.c
@@ -0,0 +1,693 @@
+/*
+ * caam - Freescale FSL CAAM support for Public Key Cryptography
+ *
+ * Copyright 2016 Freescale Semiconductor, Inc.
+ *
+ * There is no Shared Descriptor for PKC, so the Job Descriptor must carry
+ * all the desired key parameters, input and output pointers.
+ */
+#include "compat.h"
+#include "regs.h"
+#include "intern.h"
+#include "jr.h"
+#include "error.h"
+#include "desc_constr.h"
+#include "sg_sw_sec4.h"
+#include "caampkc.h"
+#include "rsapubkey-asn1.h"
+#include "rsaprivkey-asn1.h"
+
+#define DESC_RSA_PUB_LEN   (2 * CAAM_CMD_SZ + sizeof(struct rsa_pub_pdb))
+#define DESC_RSA_PRIV_F1_LEN   (2 * CAAM_CMD_SZ + \
+sizeof(struct rsa_priv_f1_pdb))
+
+static void rsa_io_unmap(struct device *dev, struct rsa_edesc *edesc,
+struct akcipher_request *req)
+{
+   dma_unmap_sg(dev, req->dst, edesc->dst_nents, DMA_FROM_DEVICE);
+   dma_unmap_sg(dev, req->src, edesc->src_nents, DMA_TO_DEVICE);
+
+   if (edesc->sec4_sg_bytes)
+   dma_unmap_single(dev, edesc->sec4_sg_dma, edesc->sec4_sg_bytes,
+DMA_TO_DEVICE);
+}
+
+static void rsa_pub_unmap(struct device *dev, struct rsa_edesc *edesc,
+ struct akcipher_request *req)
+{
+   struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+   struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
+   struct caam_rsa_key *key = &ctx->key;
+   struct rsa_pub_pdb *pdb = &edesc->pdb.pub;
+
+   dma_unmap_single(dev, pdb->n_dma, key->n_sz, DMA_TO_DEVICE);
+   dma_unmap_single(dev, pdb->e_dma, key->e_sz, DMA_TO_DEVICE);
+}
+
+static void rsa_priv_f1_unmap(struct device *dev, struct rsa_edesc *edesc,
+ struct akcipher_request *req)
+{
+   struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+   struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
+   struct caam_rsa_key *key = &ctx->key;
+   struct rsa_priv_f1_pdb *pdb = &edesc->pdb.priv_f1;
+
+   dma_unmap_single(dev, pdb->n_dma, key->n_sz, DMA_TO_DEVICE);
+   dma_unmap_single(dev, pdb->d_dma, key->d_sz, DMA_TO_DEVICE);
+}
+
+static size_t caam_count_leading_zeros(u8 *ptr, size_t nbytes)
+{
+   size_t nr_zeros = 0;
+
+   while (!(*ptr) && nbytes) {
+   nbytes--;
+

[PATCH v8 1/3] crypto: scatterwak - Add scatterwalk_sg_copychunks

2016-06-15 Thread Tudor Ambarus
This patch adds the function scatterwalk_sg_copychunks which writes
a chunk of data from a scatterwalk to another scatterwalk.
It will be used by caam driver to remove the leading zeros
for the output data of the RSA algorithm, after the computation completes.

Signed-off-by: Tudor Ambarus 
---
 crypto/scatterwalk.c | 26 ++
 include/crypto/scatterwalk.h |  2 ++
 2 files changed, 28 insertions(+)

diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
index ea5815c..bc3222d 100644
--- a/crypto/scatterwalk.c
+++ b/crypto/scatterwalk.c
@@ -125,6 +125,32 @@ void scatterwalk_map_and_copy(void *buf, struct 
scatterlist *sg,
 }
 EXPORT_SYMBOL_GPL(scatterwalk_map_and_copy);
 
+void scatterwalk_sg_copychunks(struct scatter_walk *dest,
+  struct scatter_walk *src, size_t nbytes)
+{
+   for (;;) {
+   unsigned int len_this_page = scatterwalk_pagelen(dest);
+   u8 *vaddr;
+
+   if (len_this_page > nbytes)
+   len_this_page = nbytes;
+
+   vaddr = scatterwalk_map(dest);
+   scatterwalk_copychunks(vaddr, src, len_this_page, 0);
+   scatterwalk_unmap(vaddr);
+
+   scatterwalk_advance(dest, len_this_page);
+
+   if (nbytes == len_this_page)
+   break;
+
+   nbytes -= len_this_page;
+
+   scatterwalk_pagedone(dest, 0, 1);
+   }
+}
+EXPORT_SYMBOL_GPL(scatterwalk_sg_copychunks);
+
 int scatterwalk_bytes_sglen(struct scatterlist *sg, int num_bytes)
 {
int offset = 0, n = 0;
diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index 35f99b6..8b799c5 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -86,6 +86,8 @@ static inline void scatterwalk_unmap(void *vaddr)
 void scatterwalk_start(struct scatter_walk *walk, struct scatterlist *sg);
 void scatterwalk_copychunks(void *buf, struct scatter_walk *walk,
size_t nbytes, int out);
+void scatterwalk_sg_copychunks(struct scatter_walk *dest,
+  struct scatter_walk *src, size_t nbytes);
 void *scatterwalk_map(struct scatter_walk *walk);
 void scatterwalk_done(struct scatter_walk *walk, int out, int more);
 
-- 
1.8.3.1

--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: select on non-existing Kconfig option CRC32C

2016-06-15 Thread Hendrik Brueckner
Hi Andreas,

On Wed, Jun 15, 2016 at 12:00:59PM +0200, Andreas Ziegler wrote:
> 
> your patch "s390/crc32-vx: add crypto API module for optimized CRC-32
> algorithms" showed up in linux-next today (next-20160615) as commit 364148e0b195.
> 
> The patch defines the Kconfig option CRYPTO_CRC32_S390 which 'select's CRC32C.
> However, this should probably have been CRYPTO_CRC32C, as CRC32C does not 
> exist.

Thanks for informing me.  Actually, the crc32-vx driver requires the
__crc32c_le() function, which is made available by selecting CONFIG_CRC32. There
is no need for CRYPTO_CRC32C, so the select can safely be removed.

> Should I prepare a trivial patch to fix this up or would you like to do that 
> on
> your side?

Martin has already corrected the patch.

Thanks and kind regards,
  Hendrik



Re: select on non-existing Kconfig option CRC32C

2016-06-15 Thread Randy Dunlap
On 06/15/16 03:00, Andreas Ziegler wrote:
> Hi Hendrik,
> 
> your patch "s390/crc32-vx: add crypto API module for optimized CRC-32
> algorithms" showed up in linux-next today (next-20160615) as commit 364148e0b195.
> 
> The patch defines the Kconfig option CRYPTO_CRC32_S390 which 'select's CRC32C.
> However, this should probably have been CRYPTO_CRC32C, as CRC32C does not 
> exist.
> Should I prepare a trivial patch to fix this up or would you like to do that 
> on
> your side?
> 
> I found this issue by comparing yesterday's tree and today's tree using
> 'scripts/checkkconfigsymbols -f -d next-20160614..next-20160615'.

or should it select CRC32 or LIBCRC32C?  (probably not the LIB... one)

-- 
~Randy


crypto: gcm - Filter out async ghash if necessary

2016-06-15 Thread Herbert Xu
As it is, if you ask for a sync gcm you may actually end up with
an async one, because the gcm template does not filter out async
implementations of ghash.

This patch fixes this by adding the necessary filter when looking
for ghash.
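
For context, the breakage shows up for anyone allocating a sync gcm
along these lines (an illustrative sketch, not part of the patch):

	/* type = 0, mask = CRYPTO_ALG_ASYNC: the caller asks for a
	 * synchronous gcm(aes). Without the ghash filter added below,
	 * the instantiated gcm could still wrap an async ghash. */
	struct crypto_aead *tfm = crypto_alloc_aead("gcm(aes)", 0,
						    CRYPTO_ALG_ASYNC);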

Cc: sta...@vger.kernel.org
Signed-off-by: Herbert Xu 

diff --git a/crypto/gcm.c b/crypto/gcm.c
index bec329b..917b8fb 100644
--- a/crypto/gcm.c
+++ b/crypto/gcm.c
@@ -639,7 +639,9 @@ static int crypto_gcm_create_common(struct crypto_template *tmpl,
 
ghash_alg = crypto_find_alg(ghash_name, &crypto_ahash_type,
CRYPTO_ALG_TYPE_HASH,
-   CRYPTO_ALG_TYPE_AHASH_MASK);
+   CRYPTO_ALG_TYPE_AHASH_MASK |
+   crypto_requires_sync(algt->type,
+algt->mask));
if (IS_ERR(ghash_alg))
return PTR_ERR(ghash_alg);
 
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: padata - is serial actually serial?

2016-06-15 Thread Gary R Hook

On 06/15/2016 06:52 AM, Steffen Klassert wrote:

Hi Jason.

On Tue, Jun 14, 2016 at 11:00:54PM +0200, Jason A. Donenfeld wrote:

Hi Steffen & Folks,

I submit a job to padata_do_parallel(). When the parallel() function
triggers, I do some things, and then call padata_do_serial(). Finally
the serial() function triggers, where I complete the job (check a
nonce, etc).

The padata API is very appealing because not only does it allow for
parallel computation, but it claims that the serial() functions will
execute in the order that jobs were originally submitted to
padata_do_parallel().

Unfortunately, in practice, I'm pretty sure I'm seeing deviations from
this. When I submit tons and tons of tasks at rapid speed to
padata_do_parallel(), it seems like the serial() function isn't being
called in the exactly the same order that tasks were submitted to
padata_do_parallel().

Is this known (expected) behavior? Or have I stumbled upon a potential
bug that's worthwhile for me to investigate more?


It should return in the same order as the jobs were submitted,
given that the submitting CPU and the callback CPU are fixed
for all the jobs whose order you want to preserve.  If you submit
jobs from more than one CPU, we cannot know in which order
they are enqueued. The CPU that takes the lock first
has its job in front.


Isn't there an element of indeterminacy at the application thread level
(i.e. user space) too? We don't know how the jobs are being submitted, but
unless that is being handled by a single thread in a single process, I
think all bets are off with respect to ordering.

Then again, perhaps I'm not grokking the details here.


The same holds if you use more than one callback CPU:
we can't know in which order the jobs are dequeued, because the
serial workers are scheduled independently on each CPU.



Re: padata - is serial actually serial?

2016-06-15 Thread Steffen Klassert
Hi Jason.

On Tue, Jun 14, 2016 at 11:00:54PM +0200, Jason A. Donenfeld wrote:
> Hi Steffen & Folks,
> 
> I submit a job to padata_do_parallel(). When the parallel() function
> triggers, I do some things, and then call padata_do_serial(). Finally
> the serial() function triggers, where I complete the job (check a
> nonce, etc).
> 
> The padata API is very appealing because not only does it allow for
> parallel computation, but it claims that the serial() functions will
> execute in the order that jobs were originally submitted to
> padata_do_parallel().
> 
> Unfortunately, in practice, I'm pretty sure I'm seeing deviations from
> this. When I submit tons and tons of tasks at rapid speed to
> padata_do_parallel(), it seems like the serial() function isn't being
> called in the exactly the same order that tasks were submitted to
> padata_do_parallel().
> 
> Is this known (expected) behavior? Or have I stumbled upon a potential
> bug that's worthwhile for me to investigate more?

It should return in the same order as the jobs were submitted,
given that the submitting CPU and the callback CPU are fixed
for all the jobs whose order you want to preserve.  If you submit
jobs from more than one CPU, we cannot know in which order
they are enqueued. The CPU that takes the lock first
has its job in front. The same holds if you use more than one callback CPU:
we can't know in which order the jobs are dequeued, because the
serial workers are scheduled independently on each CPU.

I use it in crypto/pcrypt.c and I never had problems.
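
In other words, the ordering guarantee holds under a pattern like this,
a rough sketch (instance setup and the struct padata_priv embedding are
assumed to exist elsewhere):

#include <linux/padata.h>

/* Submit every job from the same CPU and complete all of them on one
 * fixed callback CPU; serial() then runs in submission order. */
static int submit_ordered(struct padata_instance *pinst,
			  struct padata_priv *padata)
{
	const int cb_cpu = 0;	/* fixed for all ordered jobs */

	return padata_do_parallel(pinst, padata, cb_cpu);
}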


Re: [PATCH 1/2] Crypto: Add SHA-3 hash algorithm

2016-06-15 Thread Stephan Mueller
Am Mittwoch, 15. Juni 2016, 15:11:58 schrieb Raveendra Padasalagi:

Hi Raveendra,

> From: Jeff Garzik 
> 
> This patch adds the implementation of SHA3 algorithm
> in software and it's based on original implementation
> pushed in patch https://lwn.net/Articles/518415/ with
> additional changes to match the padding rules specified
> in SHA-3 specification.
> 
> Signed-off-by: Jeff Garzik 
> Signed-off-by: Raveendra Padasalagi 
> ---
>  crypto/Kconfig|  10 ++
>  crypto/Makefile   |   1 +
>  crypto/sha3_generic.c | 296 ++
>  include/crypto/sha3.h |  29 +
>  4 files changed, 336 insertions(+)
>  create mode 100644 crypto/sha3_generic.c
>  create mode 100644 include/crypto/sha3.h
> 
> diff --git a/crypto/Kconfig b/crypto/Kconfig
> index 1d33beb..83ee8cb 100644
> --- a/crypto/Kconfig
> +++ b/crypto/Kconfig
> @@ -750,6 +750,16 @@ config CRYPTO_SHA512_SPARC64
> SHA-512 secure hash standard (DFIPS 180-2) implemented
> using sparc64 crypto instructions, when available.
> 
> +config CRYPTO_SHA3
> + tristate "SHA3 digest algorithm"
> + select CRYPTO_HASH
> + help
> +   SHA-3 secure hash standard (DFIPS 202). It's based on

Typo DFIPS?

> +   cryptographic sponge function family called Keccak.
> +
> +   References:
> +   http://keccak.noekeon.org/
> +
>  config CRYPTO_TGR192
>   tristate "Tiger digest algorithms"
>   select CRYPTO_HASH
> diff --git a/crypto/Makefile b/crypto/Makefile
> index 4f4ef7e..0b82c47 100644
> --- a/crypto/Makefile
> +++ b/crypto/Makefile
> @@ -61,6 +61,7 @@ obj-$(CONFIG_CRYPTO_RMD320) += rmd320.o
>  obj-$(CONFIG_CRYPTO_SHA1) += sha1_generic.o
>  obj-$(CONFIG_CRYPTO_SHA256) += sha256_generic.o
>  obj-$(CONFIG_CRYPTO_SHA512) += sha512_generic.o
> +obj-$(CONFIG_CRYPTO_SHA3) += sha3_generic.o
>  obj-$(CONFIG_CRYPTO_WP512) += wp512.o
>  obj-$(CONFIG_CRYPTO_TGR192) += tgr192.o
>  obj-$(CONFIG_CRYPTO_GF128MUL) += gf128mul.o
> diff --git a/crypto/sha3_generic.c b/crypto/sha3_generic.c
> new file mode 100644
> index 000..162dfc3
> --- /dev/null
> +++ b/crypto/sha3_generic.c
> @@ -0,0 +1,296 @@
> +/*
> + * Cryptographic API.
> + *
> + * SHA-3, as specified in
> + * http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf
> + *
> + * SHA-3 code by Jeff Garzik 
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License as published by the Free
> + * Software Foundation; either version 2 of the License, or (at your option)
> + * any later version.
> + *
> + */
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#define KECCAK_ROUNDS 24
> +
> +#define ROTL64(x, y) (((x) << (y)) | ((x) >> (64 - (y))))
> +
> +static const u64 keccakf_rndc[24] = {
> + 0x0000000000000001, 0x0000000000008082, 0x800000000000808a,
> + 0x8000000080008000, 0x000000000000808b, 0x0000000080000001,
> + 0x8000000080008081, 0x8000000000008009, 0x000000000000008a,
> + 0x0000000000000088, 0x0000000080008009, 0x000000008000000a,
> + 0x000000008000808b, 0x800000000000008b, 0x8000000000008089,
> + 0x8000000000008003, 0x8000000000008002, 0x8000000000000080,
> + 0x000000000000800a, 0x800000008000000a, 0x8000000080008081,
> + 0x8000000000008080, 0x0000000080000001, 0x8000000080008008
> +};
> +
> +static const int keccakf_rotc[24] = {
> + 1,  3,  6,  10, 15, 21, 28, 36, 45, 55, 2,  14,
> + 27, 41, 56, 8,  25, 43, 62, 18, 39, 61, 20, 44
> +};
> +
> +static const int keccakf_piln[24] = {
> + 10, 7,  11, 17, 18, 3, 5,  16, 8,  21, 24, 4,
> + 15, 23, 19, 13, 12, 2, 20, 14, 22, 9,  6,  1
> +};
> +
> +/* update the state with given number of rounds */
> +
> +static void keccakf(u64 st[25])
> +{
> + int i, j, round;
> + u64 t, bc[5];
> +
> + for (round = 0; round < KECCAK_ROUNDS; round++) {
> +
> + /* Theta */
> + for (i = 0; i < 5; i++)
> + bc[i] = st[i] ^ st[i + 5] ^ st[i + 10] ^ st[i + 15]
> + ^ st[i + 20];
> +
> + for (i = 0; i < 5; i++) {
> + t = bc[(i + 4) % 5] ^ ROTL64(bc[(i + 1) % 5], 1);
> + for (j = 0; j < 25; j += 5)
> + st[j + i] ^= t;
> + }
> +
> + /* Rho Pi */
> + t = st[1];
> + for (i = 0; i < 24; i++) {
> + j = keccakf_piln[i];
> + bc[0] = st[j];
> + st[j] = ROTL64(t, keccakf_rotc[i]);
> + t = bc[0];
> + }
> +
> + /* Chi */
> + for (j = 0; j < 25; j += 5) {
> + for (i = 0; i < 5; i++)
> + bc[i] = st[j + i];
> + for (i = 0; i < 5; i++)
> + st[j + i] ^= (~bc[(i + 1) % 5]) &
> +  bc[(i + 2) % 5];
> + 

Re: [PATCH] crypto: fix semicolon.cocci warnings

2016-06-15 Thread Stephan Mueller
Am Mittwoch, 15. Juni 2016, 19:13:25 schrieb kbuild test robot:

Hi Fengguang,

> crypto/drbg.c:1637:39-40: Unneeded semicolon
> 
> 
>  Remove unneeded semicolon.

Thank you!

> 
> Generated by: scripts/coccinelle/misc/semicolon.cocci
> 
> CC: Stephan Mueller 
> Signed-off-by: Fengguang Wu 

Acked-by: Stephan Mueller 
> ---
> 
>  drbg.c |2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> --- a/crypto/drbg.c
> +++ b/crypto/drbg.c
> @@ -1634,7 +1634,7 @@ static int drbg_fini_sym_kernel(struct d
>   drbg->ctr_handle = NULL;
> 
>   if (drbg->ctr_req)
> - skcipher_request_free(drbg->ctr_req);;
> + skcipher_request_free(drbg->ctr_req);
>   drbg->ctr_req = NULL;
> 
>   kfree(drbg->ctr_null_value_buf);


Ciao
Stephan


[PATCH] crypto: fix semicolon.cocci warnings

2016-06-15 Thread kbuild test robot
crypto/drbg.c:1637:39-40: Unneeded semicolon


 Remove unneeded semicolon.

Generated by: scripts/coccinelle/misc/semicolon.cocci

CC: Stephan Mueller 
Signed-off-by: Fengguang Wu 
---

 drbg.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/crypto/drbg.c
+++ b/crypto/drbg.c
@@ -1634,7 +1634,7 @@ static int drbg_fini_sym_kernel(struct d
drbg->ctr_handle = NULL;
 
if (drbg->ctr_req)
-   skcipher_request_free(drbg->ctr_req);;
+   skcipher_request_free(drbg->ctr_req);
drbg->ctr_req = NULL;
 
kfree(drbg->ctr_null_value_buf);


[cryptodev:master 47/51] crypto/drbg.c:1637:39-40: Unneeded semicolon

2016-06-15 Thread kbuild test robot
tree:   
https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
head:   5a7de97309f5af4458b1a25a2a529a1a893c5269
commit: 355912852115cd8aa4ad02c25182ae615ce925fb [47/51] crypto: drbg - use CTR 
AES instead of ECB AES


coccinelle warnings: (new ones prefixed by >>)

>> crypto/drbg.c:1637:39-40: Unneeded semicolon

Please review and possibly fold the followup patch.

---
0-DAY kernel test infrastructureOpen Source Technology Center
https://lists.01.org/pipermail/kbuild-all   Intel Corporation


RE: [PATCH 0/2] Add SHA-3 algorithm and test vectors.

2016-06-15 Thread Raveendra Padasalagi
The patch set can be fetched from the iproc-sha3-v1 branch of
https://github.com/Broadcom/arm64-linux.git


Regards,
Raveendra
> -Original Message-
> From: Raveendra Padasalagi [mailto:raveendra.padasal...@broadcom.com]
> Sent: 15 June 2016 15:20
> To: 'Herbert Xu'; 'David S. Miller'; 'linux-crypto@vger.kernel.org';
'linux-
> ker...@vger.kernel.org'; 'Jeff Garzik'; 'Jeff Garzik'
> Cc: 'Jon Mason'; 'Florian Fainelli'; Anup Patel; 'Ray Jui'; 'Scott
Branden'; Pramod
> Kumar; 'bcm-kernel-feedback-l...@broadcom.com'
> Subject: RE: [PATCH 0/2] Add SHA-3 algorithm and test vectors.
>
> Forgot to add Jeff Garzik in the email list.
>
> ++  Jeff Garzik.
>
>
> Regards,
> Raveendra
>
> > -Original Message-
> > From: Raveendra Padasalagi [mailto:raveendra.padasal...@broadcom.com]
> > Sent: 15 June 2016 15:12
> > To: Herbert Xu; David S. Miller; linux-crypto@vger.kernel.org; linux-
> > ker...@vger.kernel.org
> > Cc: Jon Mason; Florian Fainelli; Anup Patel; Ray Jui; Scott Branden;
> > Pramod Kumar; bcm-kernel-feedback-l...@broadcom.com; Raveendra
> > Padasalagi
> > Subject: [PATCH 0/2] Add SHA-3 algorithm and test vectors.
> >
> > This patchset adds the implementation of SHA-3 algorithm in software
> > and it's based on original implementation pushed in patch
> > https://lwn.net/Articles/518415/ with additional changes to match the
> > padding rules specified in SHA-3 specification.
> >
> > This patchset also includes changes in tcrypt module to add support
> > for SHA-3 algorithms test and related test vectors for basic testing.
> >
> > Broadcom Secure Processing Unit-2(SPU-2) engine supports offloading of
> > SHA-3 operations in hardware, in order to add SHA-3 support in SPU-2
> > driver we needed to have the software implementation and test
framework in
> place.
> >
> > The patchset is based on the v4.7-rc1 tag and has been tested on the
> > Broadcom NorthStar2 SoC.
> > Jeff Garzik (1):
> >   Crypto: Add SHA-3 hash algorithm
> >
> > Raveendra Padasalagi (1):
> >   Crypto: Add SHA-3 Test's in tcrypt
> >
> >  crypto/Kconfig|  10 ++
> >  crypto/Makefile   |   1 +
> >  crypto/sha3_generic.c | 296
> > ++
> >  crypto/tcrypt.c   |  53 -
> >  crypto/testmgr.c  |  40 +++
> >  crypto/testmgr.h  | 125 +
> >  include/crypto/sha3.h |  29 +
> >  7 files changed, 553 insertions(+), 1 deletion(-)  create mode 100644
> > crypto/sha3_generic.c  create mode 100644 include/crypto/sha3.h
> >
> > --
> > 1.9.1


select on non-existing Kconfig option CRC32C

2016-06-15 Thread Andreas Ziegler
Hi Hendrik,

your patch "s390/crc32-vx: add crypto API module for optimized CRC-32
algorithms" showed up in linux-next today (next-20160615) as commit 364148e0b195.

The patch defines the Kconfig option CRYPTO_CRC32_S390 which 'select's CRC32C.
However, this should probably have been CRYPTO_CRC32C, as CRC32C does not exist.
Should I prepare a trivial patch to fix this up or would you like to do that on
your side?

I found this issue by comparing yesterday's tree and today's tree using
'scripts/checkkconfigsymbols -f -d next-20160614..next-20160615'.

Best regards,

Andreas


[RESEND PATCH 0/2] hw_random: Add Amlogic Meson SoCs Random Generator driver

2016-06-15 Thread Neil Armstrong
NOTE: This is a resend of the DT Bindings and DTSI patches based on the Amlogic 
DT 64bit
GIT pull request from Kevin Hilman at [1].

Changes since v2 at 
http://lkml.kernel.org/r/1465546915-24229-1-git-send-email-narmstr...@baylibre.com
 :
- Move rng peripheral node into periphs simple-bus node

Add support for the Amlogic Meson SoCs Hardware Random generator as a hw_random 
char driver.
The generator is a single 32bit wide register.
Also adds the Meson GXBB SoC DTSI node and corresponding DT bindings.
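
The read side of such a driver is correspondingly small; roughly (a
hedged sketch only, with the context struct made up here for
illustration, not the actual driver code):

#include <linux/hw_random.h>
#include <linux/io.h>

struct meson_rng_data {
	void __iomem *base;	/* the single 32-bit RNG register */
	struct hwrng rng;
};

static int meson_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
{
	struct meson_rng_data *data =
		container_of(rng, struct meson_rng_data, rng);

	/* one 32-bit sample per register read; honor the caller's
	 * buffer size, as noted in the v2 change below */
	if (max < sizeof(u32))
		return 0;

	*(u32 *)buf = readl_relaxed(data->base);
	return sizeof(u32);
}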

Changes since v1 at 
http://lkml.kernel.org/r/1464943621-18278-1-git-send-email-narmstr...@baylibre.com
 :
- change to depend on ARCH_MESON || COMPILE_TEST
- check buffer max size in read

[1] [GIT PULL] Amlogic DT 64-bit changes for v4.8 : 
http://lkml.kernel.org/r/7hshwfbiit@baylibre.com

Neil Armstrong (2):
  dt-bindings: hwrng: Add Amlogic Meson Hardware Random Generator
bindings
  ARM64: dts: meson-gxbb: Add Hardware Random Generator node

 .../devicetree/bindings/rng/amlogic,meson-rng.txt  | 14 ++
 arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi|  5 +
 2 files changed, 19 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/rng/amlogic,meson-rng.txt

-- 
2.7.0



[RESEND PATCH 1/2] dt-bindings: hwrng: Add Amlogic Meson Hardware Random Generator bindings

2016-06-15 Thread Neil Armstrong
Acked-by: Rob Herring 
Signed-off-by: Neil Armstrong 
---
 .../devicetree/bindings/rng/amlogic,meson-rng.txt  | 14 ++
 1 file changed, 14 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/rng/amlogic,meson-rng.txt

diff --git a/Documentation/devicetree/bindings/rng/amlogic,meson-rng.txt b/Documentation/devicetree/bindings/rng/amlogic,meson-rng.txt
new file mode 100644
index 000..202f2d0
--- /dev/null
+++ b/Documentation/devicetree/bindings/rng/amlogic,meson-rng.txt
@@ -0,0 +1,14 @@
+Amlogic Meson Random number generator
+=
+
+Required properties:
+
+- compatible : should be "amlogic,meson-rng"
+- reg : Specifies base physical address and size of the registers.
+
+Example:
+
+rng {
+compatible = "amlogic,meson-rng";
+reg = <0x0 0xc8834000 0x0 0x4>;
+};
-- 
2.7.0



[RESEND PATCH 2/2] ARM64: dts: meson-gxbb: Add Hardware Random Generator node

2016-06-15 Thread Neil Armstrong
Signed-off-by: Neil Armstrong 
---
 arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi | 5 +
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
index 063e3b6..806b903 100644
--- a/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
@@ -221,6 +221,11 @@
#size-cells = <2>;
ranges = <0x0 0x0 0x0 0xc8834000 0x0 0x2000>;
 
+   rng {
+   compatible = "amlogic,meson-rng";
+   reg = <0x0 0x0 0x0 0x4>;
+   };
+
pinctrl_periphs: pinctrl@4b0 {
compatible = "amlogic,meson-gxbb-periphs-pinctrl";
#address-cells = <2>;
-- 
2.7.0



RE: [PATCH 2/2] Crypto: Add SHA-3 Test's in tcrypt

2016-06-15 Thread Raveendra Padasalagi
Forgot to add Jeff Garzik in the email list.

++  Jeff Garzik.

Regards,
Raveendra
> -Original Message-
> From: Raveendra Padasalagi [mailto:raveendra.padasal...@broadcom.com]
> Sent: 15 June 2016 15:12
> To: Herbert Xu; David S. Miller; linux-crypto@vger.kernel.org; linux-
> ker...@vger.kernel.org
> Cc: Jon Mason; Florian Fainelli; Anup Patel; Ray Jui; Scott Branden;
Pramod
> Kumar; bcm-kernel-feedback-l...@broadcom.com; Raveendra Padasalagi
> Subject: [PATCH 2/2] Crypto: Add SHA-3 Test's in tcrypt
>
> Added support for SHA-3 algorithm test's in tcrypt module and related
test
> vectors.
>
> Signed-off-by: Raveendra Padasalagi 
> ---
>  crypto/tcrypt.c  |  53 ++-  crypto/testmgr.c |  40
> ++  crypto/testmgr.h | 125
> +++
>  3 files changed, 217 insertions(+), 1 deletion(-)
>
> diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
> index 579dce0..4675459 100644
> --- a/crypto/tcrypt.c
> +++ b/crypto/tcrypt.c
> @@ -72,7 +72,8 @@ static char *check[] = {
>   "cast6", "arc4", "michael_mic", "deflate", "crc32c", "tea",
"xtea",
>   "khazad", "wp512", "wp384", "wp256", "tnepres", "xeta",  "fcrypt",
>   "camellia", "seed", "salsa20", "rmd128", "rmd160", "rmd256",
> "rmd320",
> - "lzo", "cts", "zlib", NULL
> + "lzo", "cts", "zlib", "sha3-224", "sha3-256", "sha3-384",
"sha3-512",
> + NULL
>  };
>
>  struct tcrypt_result {
> @@ -1284,6 +1285,22 @@ static int do_test(const char *alg, u32 type, u32
> mask, int m)
>   ret += tcrypt_test("crct10dif");
>   break;
>
> + case 48:
> + ret += tcrypt_test("sha3-224");
> + break;
> +
> + case 49:
> + ret += tcrypt_test("sha3-256");
> + break;
> +
> + case 50:
> + ret += tcrypt_test("sha3-384");
> + break;
> +
> + case 51:
> + ret += tcrypt_test("sha3-512");
> + break;
> +
>   case 100:
>   ret += tcrypt_test("hmac(md5)");
>   break;
> @@ -1691,6 +1708,22 @@ static int do_test(const char *alg, u32 type, u32
> mask, int m)
>   test_hash_speed("poly1305", sec, poly1305_speed_template);
>   if (mode > 300 && mode < 400) break;
>
> + case 322:
> + test_hash_speed("sha3-224", sec,
> generic_hash_speed_template);
> + if (mode > 300 && mode < 400) break;
> +
> + case 323:
> + test_hash_speed("sha3-256", sec,
> generic_hash_speed_template);
> + if (mode > 300 && mode < 400) break;
> +
> + case 324:
> + test_hash_speed("sha3-384", sec,
> generic_hash_speed_template);
> + if (mode > 300 && mode < 400) break;
> +
> + case 325:
> + test_hash_speed("sha3-512", sec,
> generic_hash_speed_template);
> + if (mode > 300 && mode < 400) break;
> +
>   case 399:
>   break;
>
> @@ -1770,6 +1803,24 @@ static int do_test(const char *alg, u32 type, u32
> mask, int m)
>   test_ahash_speed("rmd320", sec,
> generic_hash_speed_template);
>   if (mode > 400 && mode < 500) break;
>
> + case 418:
> + test_ahash_speed("sha3-224", sec,
> generic_hash_speed_template);
> + if (mode > 400 && mode < 500) break;
> +
> + case 419:
> + test_ahash_speed("sha3-256", sec,
> generic_hash_speed_template);
> + if (mode > 400 && mode < 500) break;
> +
> + case 420:
> + test_ahash_speed("sha3-384", sec,
> generic_hash_speed_template);
> + if (mode > 400 && mode < 500) break;
> +
> +
> + case 421:
> + test_ahash_speed("sha3-512", sec,
> generic_hash_speed_template);
> + if (mode > 400 && mode < 500) break;
> +
> +
>   case 499:
>   break;
>
> diff --git a/crypto/testmgr.c b/crypto/testmgr.c
> index c727fb0..b773a56 100644
> --- a/crypto/testmgr.c
> +++ b/crypto/testmgr.c
> @@ -3659,6 +3659,46 @@ static const struct alg_test_desc alg_test_descs[] = {
>   }
>   }
>   }, {
> + .alg = "sha3-224",
> + .test = alg_test_hash,
> + .fips_allowed = 1,
> + .suite = {
> + .hash = {
> + .vecs = sha3_224_tv_template,
> + .count = SHA3_224_TEST_VECTORS
> + }
> + }
> + }, {
> + .alg = "sha3-256",
> + .test = alg_test_hash,
> + .fips_allowed = 1,
> + .suite = {
> + .hash = {
> + .vecs = sha3_256_tv_template,
> + .count = SHA3_256_TEST_VECTORS
> + }
> + }
> + }, {
> + .alg = "sha3-384",
> + .test = alg_test_hash,
> + .fips_allowed = 1,
> + .suite = {
> + .hash = 

RE: [PATCH 0/2] Add SHA-3 algorithm and test vectors.

2016-06-15 Thread Raveendra Padasalagi
Forgot to add Jeff Garzik in the email list.

++  Jeff Garzik.


Regards,
Raveendra

> -Original Message-
> From: Raveendra Padasalagi [mailto:raveendra.padasal...@broadcom.com]
> Sent: 15 June 2016 15:12
> To: Herbert Xu; David S. Miller; linux-crypto@vger.kernel.org; linux-
> ker...@vger.kernel.org
> Cc: Jon Mason; Florian Fainelli; Anup Patel; Ray Jui; Scott Branden;
Pramod
> Kumar; bcm-kernel-feedback-l...@broadcom.com; Raveendra Padasalagi
> Subject: [PATCH 0/2] Add SHA-3 algorithm and test vectors.
>
> This patchset adds the implementation of SHA-3 algorithm in software and
it's
> based on original implementation pushed in patch
> https://lwn.net/Articles/518415/ with additional changes to match the
padding
> rules specified in SHA-3 specification.
>
> This patchset also includes changes in tcrypt module to add support for
SHA-3
> algorithms test and related test vectors for basic testing.
>
> Broadcom Secure Processing Unit-2(SPU-2) engine supports offloading of
SHA-3
> operations in hardware, in order to add SHA-3 support in SPU-2 driver we
> needed to have the software implementation and test framework in place.
>
> The patchset is based on the v4.7-rc1 tag and has been tested on the
> Broadcom NorthStar2 SoC.
>
> Jeff Garzik (1):
>   Crypto: Add SHA-3 hash algorithm
>
> Raveendra Padasalagi (1):
>   Crypto: Add SHA-3 Test's in tcrypt
>
>  crypto/Kconfig|  10 ++
>  crypto/Makefile   |   1 +
>  crypto/sha3_generic.c | 296
> ++
>  crypto/tcrypt.c   |  53 -
>  crypto/testmgr.c  |  40 +++
>  crypto/testmgr.h  | 125 +
>  include/crypto/sha3.h |  29 +
>  7 files changed, 553 insertions(+), 1 deletion(-)  create mode 100644
> crypto/sha3_generic.c  create mode 100644 include/crypto/sha3.h
>
> --
> 1.9.1


[PATCH 2/2] Crypto: Add SHA-3 Test's in tcrypt

2016-06-15 Thread Raveendra Padasalagi
Added support for SHA-3 algorithm tests in the
tcrypt module, along with the related test vectors.

Signed-off-by: Raveendra Padasalagi 
---
 crypto/tcrypt.c  |  53 ++-
 crypto/testmgr.c |  40 ++
 crypto/testmgr.h | 125 +++
 3 files changed, 217 insertions(+), 1 deletion(-)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 579dce0..4675459 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -72,7 +72,8 @@ static char *check[] = {
"cast6", "arc4", "michael_mic", "deflate", "crc32c", "tea", "xtea",
"khazad", "wp512", "wp384", "wp256", "tnepres", "xeta",  "fcrypt",
"camellia", "seed", "salsa20", "rmd128", "rmd160", "rmd256", "rmd320",
-   "lzo", "cts", "zlib", NULL
+   "lzo", "cts", "zlib", "sha3-224", "sha3-256", "sha3-384", "sha3-512",
+   NULL
 };
 
 struct tcrypt_result {
@@ -1284,6 +1285,22 @@ static int do_test(const char *alg, u32 type, u32 mask, int m)
ret += tcrypt_test("crct10dif");
break;
 
+   case 48:
+   ret += tcrypt_test("sha3-224");
+   break;
+
+   case 49:
+   ret += tcrypt_test("sha3-256");
+   break;
+
+   case 50:
+   ret += tcrypt_test("sha3-384");
+   break;
+
+   case 51:
+   ret += tcrypt_test("sha3-512");
+   break;
+
case 100:
ret += tcrypt_test("hmac(md5)");
break;
@@ -1691,6 +1708,22 @@ static int do_test(const char *alg, u32 type, u32 mask, int m)
test_hash_speed("poly1305", sec, poly1305_speed_template);
if (mode > 300 && mode < 400) break;
 
+   case 322:
+   test_hash_speed("sha3-224", sec, generic_hash_speed_template);
+   if (mode > 300 && mode < 400) break;
+
+   case 323:
+   test_hash_speed("sha3-256", sec, generic_hash_speed_template);
+   if (mode > 300 && mode < 400) break;
+
+   case 324:
+   test_hash_speed("sha3-384", sec, generic_hash_speed_template);
+   if (mode > 300 && mode < 400) break;
+
+   case 325:
+   test_hash_speed("sha3-512", sec, generic_hash_speed_template);
+   if (mode > 300 && mode < 400) break;
+
case 399:
break;
 
@@ -1770,6 +1803,24 @@ static int do_test(const char *alg, u32 type, u32 mask, int m)
test_ahash_speed("rmd320", sec, generic_hash_speed_template);
if (mode > 400 && mode < 500) break;
 
+   case 418:
+   test_ahash_speed("sha3-224", sec, generic_hash_speed_template);
+   if (mode > 400 && mode < 500) break;
+
+   case 419:
+   test_ahash_speed("sha3-256", sec, generic_hash_speed_template);
+   if (mode > 400 && mode < 500) break;
+
+   case 420:
+   test_ahash_speed("sha3-384", sec, generic_hash_speed_template);
+   if (mode > 400 && mode < 500) break;
+
+
+   case 421:
+   test_ahash_speed("sha3-512", sec, generic_hash_speed_template);
+   if (mode > 400 && mode < 500) break;
+
+
case 499:
break;
 
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index c727fb0..b773a56 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -3659,6 +3659,46 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}, {
+   .alg = "sha3-224",
+   .test = alg_test_hash,
+   .fips_allowed = 1,
+   .suite = {
+   .hash = {
+   .vecs = sha3_224_tv_template,
+   .count = SHA3_224_TEST_VECTORS
+   }
+   }
+   }, {
+   .alg = "sha3-256",
+   .test = alg_test_hash,
+   .fips_allowed = 1,
+   .suite = {
+   .hash = {
+   .vecs = sha3_256_tv_template,
+   .count = SHA3_256_TEST_VECTORS
+   }
+   }
+   }, {
+   .alg = "sha3-384",
+   .test = alg_test_hash,
+   .fips_allowed = 1,
+   .suite = {
+   .hash = {
+   .vecs = sha3_384_tv_template,
+   .count = SHA3_384_TEST_VECTORS
+   }
+   }
+   }, {
+   .alg = "sha3-512",
+   .test = alg_test_hash,
+   .fips_allowed = 1,
+   .suite = {
+   .hash = {
+   .vecs = sha3_512_tv_template,
+   .count = SHA3_512_TEST_VECTORS
+   }
+   }
+   }, {
.alg = "sha384",
.test = alg_test_hash,
  

[PATCH 0/2] Add SHA-3 algorithm and test vectors.

2016-06-15 Thread Raveendra Padasalagi
This patchset adds a software implementation of the SHA-3
algorithm. It is based on the original implementation posted
in https://lwn.net/Articles/518415/, with additional changes
to match the padding rules specified in the SHA-3 specification.

This patchset also includes changes to the tcrypt module to
add test modes for the SHA-3 algorithms (modes 48-51 map to
sha3-224/256/384/512, so e.g. 'modprobe tcrypt mode=48' runs
the sha3-224 test) and the related test vectors for basic testing.

The Broadcom Secure Processing Unit-2 (SPU-2) engine supports
offloading SHA-3 operations to hardware; in order to add SHA-3
support to the SPU-2 driver, we first needed the software
implementation and test framework in place.

The patchset is based on the v4.7-rc1 tag and has been tested on the
Broadcom NorthStar2 SoC.

Jeff Garzik (1):
  Crypto: Add SHA-3 hash algorithm

Raveendra Padasalagi (1):
  Crypto: Add SHA-3 Test's in tcrypt

 crypto/Kconfig|  10 ++
 crypto/Makefile   |   1 +
 crypto/sha3_generic.c | 296 ++
 crypto/tcrypt.c   |  53 -
 crypto/testmgr.c  |  40 +++
 crypto/testmgr.h  | 125 +
 include/crypto/sha3.h |  29 +
 7 files changed, 553 insertions(+), 1 deletion(-)
 create mode 100644 crypto/sha3_generic.c
 create mode 100644 include/crypto/sha3.h

-- 
1.9.1



[PATCH 1/2] Crypto: Add SHA-3 hash algorithm

2016-06-15 Thread Raveendra Padasalagi
From: Jeff Garzik 

This patch adds a software implementation of the SHA3
algorithm. It is based on the original implementation posted
in https://lwn.net/Articles/518415/, with additional changes
to match the padding rules specified in the SHA-3 specification.

Signed-off-by: Jeff Garzik 
Signed-off-by: Raveendra Padasalagi 
---
 crypto/Kconfig|  10 ++
 crypto/Makefile   |   1 +
 crypto/sha3_generic.c | 296 ++
 include/crypto/sha3.h |  29 +
 4 files changed, 336 insertions(+)
 create mode 100644 crypto/sha3_generic.c
 create mode 100644 include/crypto/sha3.h

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 1d33beb..83ee8cb 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -750,6 +750,16 @@ config CRYPTO_SHA512_SPARC64
  SHA-512 secure hash standard (DFIPS 180-2) implemented
  using sparc64 crypto instructions, when available.
 
+config CRYPTO_SHA3
+   tristate "SHA3 digest algorithm"
+   select CRYPTO_HASH
+   help
+ SHA-3 secure hash standard (DFIPS 202). It's based on
+ cryptographic sponge function family called Keccak.
+
+ References:
+ http://keccak.noekeon.org/
+
 config CRYPTO_TGR192
tristate "Tiger digest algorithms"
select CRYPTO_HASH
diff --git a/crypto/Makefile b/crypto/Makefile
index 4f4ef7e..0b82c47 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -61,6 +61,7 @@ obj-$(CONFIG_CRYPTO_RMD320) += rmd320.o
 obj-$(CONFIG_CRYPTO_SHA1) += sha1_generic.o
 obj-$(CONFIG_CRYPTO_SHA256) += sha256_generic.o
 obj-$(CONFIG_CRYPTO_SHA512) += sha512_generic.o
+obj-$(CONFIG_CRYPTO_SHA3) += sha3_generic.o
 obj-$(CONFIG_CRYPTO_WP512) += wp512.o
 obj-$(CONFIG_CRYPTO_TGR192) += tgr192.o
 obj-$(CONFIG_CRYPTO_GF128MUL) += gf128mul.o
diff --git a/crypto/sha3_generic.c b/crypto/sha3_generic.c
new file mode 100644
index 000..162dfc3
--- /dev/null
+++ b/crypto/sha3_generic.c
@@ -0,0 +1,296 @@
+/*
+ * Cryptographic API.
+ *
+ * SHA-3, as specified in
+ * http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf
+ *
+ * SHA-3 code by Jeff Garzik 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#define KECCAK_ROUNDS 24
+
+#define ROTL64(x, y) (((x) << (y)) | ((x) >> (64 - (y))))
+
+static const u64 keccakf_rndc[24] = {
+   0x0000000000000001, 0x0000000000008082, 0x800000000000808a,
+   0x8000000080008000, 0x000000000000808b, 0x0000000080000001,
+   0x8000000080008081, 0x8000000000008009, 0x000000000000008a,
+   0x0000000000000088, 0x0000000080008009, 0x000000008000000a,
+   0x000000008000808b, 0x800000000000008b, 0x8000000000008089,
+   0x8000000000008003, 0x8000000000008002, 0x8000000000000080,
+   0x000000000000800a, 0x800000008000000a, 0x8000000080008081,
+   0x8000000000008080, 0x0000000080000001, 0x8000000080008008
+};
+
+static const int keccakf_rotc[24] = {
+   1,  3,  6,  10, 15, 21, 28, 36, 45, 55, 2,  14,
+   27, 41, 56, 8,  25, 43, 62, 18, 39, 61, 20, 44
+};
+
+static const int keccakf_piln[24] = {
+   10, 7,  11, 17, 18, 3, 5,  16, 8,  21, 24, 4,
+   15, 23, 19, 13, 12, 2, 20, 14, 22, 9,  6,  1
+};
+
+/* update the state with given number of rounds */
+
+static void keccakf(u64 st[25])
+{
+   int i, j, round;
+   u64 t, bc[5];
+
+   for (round = 0; round < KECCAK_ROUNDS; round++) {
+
+   /* Theta */
+   for (i = 0; i < 5; i++)
+   bc[i] = st[i] ^ st[i + 5] ^ st[i + 10] ^ st[i + 15]
+   ^ st[i + 20];
+
+   for (i = 0; i < 5; i++) {
+   t = bc[(i + 4) % 5] ^ ROTL64(bc[(i + 1) % 5], 1);
+   for (j = 0; j < 25; j += 5)
+   st[j + i] ^= t;
+   }
+
+   /* Rho Pi */
+   t = st[1];
+   for (i = 0; i < 24; i++) {
+   j = keccakf_piln[i];
+   bc[0] = st[j];
+   st[j] = ROTL64(t, keccakf_rotc[i]);
+   t = bc[0];
+   }
+
+   /* Chi */
+   for (j = 0; j < 25; j += 5) {
+   for (i = 0; i < 5; i++)
+   bc[i] = st[j + i];
+   for (i = 0; i < 5; i++)
+   st[j + i] ^= (~bc[(i + 1) % 5]) &
+bc[(i + 2) % 5];
+   }
+
+   /* Iota */
+   st[0] ^= keccakf_rndc[round];
+   }
+}
+
+static void sha3_init(struct sha3_state *sctx, unsigned int digest_sz)
+{
+   memset(sctx, 0, sizeof(*sctx));
+   sctx->md_len = digest_sz;
+   sctx->rsiz = 200 - 2 * digest_sz;

Re: [PATCH v8 1/3] crypto: Key-agreement Protocol Primitives API (KPP)

2016-06-15 Thread Herbert Xu
On Tue, Jun 14, 2016 at 02:36:54PM +0000, Benedetto, Salvatore wrote:
>
> My very first patch used PKCS3 and there were some objections to that.
> https://patchwork.kernel.org/patch/8311881/
>  
> Both Bluetooth and keyctl KEYCTL_DH_COMPUTE would have to first pack the
> key into whatever format we choose, and I don't see that as very convenient.
> We only want to provide the acceleration here, without binding the user to a
> certain key format.

Have you looked at the rtnetlink encoding used by authenc.c? It is
much more light-weight.  There is no need to depend on ASN.1 parsers
at all.

To make it even easier for users such as bluetooth, you can create
helpers that convert struct dh to a byte stream, and vice versa.
For example, the interface could look like this:

struct dh key;
char *buf;
unsigned int len;

/* init key */
key = ...;

len = crypto_dh_key_len(&key);

buf = kmalloc(len, GFP_KERNEL);
if (!buf)
...;

crypto_dh_encode_key(buf, len, &key);
crypto_kpp_set_secret(tfm, buf, len);

The driver would do:

set_secret(char *buf, unsigned int len)
{
struct dh key;
int err;

err = crypto_dh_decode_key(buf, len, &key);
...

>  akcipher is different as PKCS1 is a recognized standard for RSA keys.
>  
>  Please don't get me wrong, it's not much of an issue for me to respin the
>  patchset and change that to PKCS3 for example, but I see no harm in leaving
>  it as it is and moving the key check format to whatever upper layer is using 
> us
>  (like BT and keyctl). Just more work for who is using the API.
>  
>  Could you reconsider that?

I'm sorry but using a void * for this is not acceptable.  We're
talking about a data structure that comes from arbitrary users
and then has to be decoded by random drivers.  It's something
totally different compared to a limited environment where the
same author is writing the code that creates and consumes the
pointer.

Throwing random void pointers at drivers is not a good idea.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v5] crypto: rsa - return raw integers for the ASN.1 parser

2016-06-15 Thread Herbert Xu
On Tue, Jun 14, 2016 at 04:14:58PM +0300, Tudor Ambarus wrote:
> Return the raw key with no other processing so that the caller
> can copy it or MPI parse it, etc.
> 
> The goal is to have only one ASN.1 parser for all RSA
> implementations.
> 
> Update the RSA software implementation so that it does
> the MPI conversion on top.
> 
> Signed-off-by: Tudor Ambarus 

Patch applied.  Thanks for persisting with this.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v2 0/4] crypto: CTR DRBG - performance improvements

2016-06-15 Thread Herbert Xu
On Tue, Jun 14, 2016 at 07:33:48AM +0200, Stephan Mueller wrote:
> Hi,
> 
> The following patch set is aimed to increase the performance of the CTR
> DRBG, especially when assembler implementations of the CTR AES mode are
> available.
> 
> The patch set increases the performance by 10% for random numbers of 16 bytes
> and reaches 450% for random numbers of 4096 bytes (larger random
> numbers will see even greater gains). The performance gains were
> measured when using ctr-aes-aesni.
> 
> Note, when using the C implementation of the CTR mode (cipher/ctr.c), the
> performance of the CTR DRBG is slightly worse than it is now, but still it
> is much faster than the Hash or HMAC DRBGs.
> 
> The patch set is CAVS tested.
> 
> Changes v2:
> * the alignment patch is updated to use the alignment of the underlying TFM
> 
> Stephan Mueller (4):
>   crypto: CTR DRBG - use CTR AES instead of ECB AES
>   crypto: DRBG - use aligned buffers
>   crypto: CTR DRBG - use full CTR AES for update
>   crypto: CTR DRBG - avoid duplicate maintenance of key

All applied.  Thanks!
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
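
For readers who want the idea of the first patch in code: a rough sketch
of producing DRBG output through the ctr(aes) skcipher, so assembler CTR
implementations get picked up automatically. This is a simplified,
synchronous-only illustration, not the drbg.c code as merged.

#include <crypto/skcipher.h>
#include <linux/scatterlist.h>

/*
 * CTR-encrypt an all-zero buffer: the resulting "ciphertext" is the raw
 * AES-CTR keystream, which the DRBG hands out as its random output.
 * V is the 16-byte counter block and is advanced by the cipher itself.
 */
static int drbg_ctr_generate(struct crypto_skcipher *tfm, u8 *V,
                             u8 *out, unsigned int outlen)
{
        SKCIPHER_REQUEST_ON_STACK(req, tfm);
        struct scatterlist sg;
        int ret;

        memset(out, 0, outlen);
        sg_init_one(&sg, out, outlen);

        skcipher_request_set_tfm(req, tfm);
        skcipher_request_set_callback(req, 0, NULL, NULL);
        skcipher_request_set_crypt(req, &sg, &sg, outlen, V);

        ret = crypto_skcipher_encrypt(req);
        skcipher_request_zero(req);
        return ret;
}

The tfm would come from crypto_alloc_skcipher("ctr(aes)", 0,
CRYPTO_ALG_ASYNC), so that only synchronous implementations are selected.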


Re: [RFC v4 2/4] crypto: Introduce CRYPTO_ALG_BULK flag

2016-06-15 Thread Baolin Wang
On 15 June 2016 at 15:39, Herbert Xu  wrote:
> On Wed, Jun 15, 2016 at 03:38:02PM +0800, Baolin Wang wrote:
>>
>> But that means we should divide the bulk request into 512-byte
>> requests and break up the mapped sg table for each request. On the
>> other hand, we would have to allocate memory for each request in the
>> crypto layer, something dm-crypt already does efficiently. I think
>> these are really top-level questions of how to use the crypto APIs;
>> do they need to move into the crypto layer? Thanks.
>
> I have already explained to you how you can piggy-back off dm-crypt's
> allocation, so what's the problem?

Because the requests created in dm-crypt are closely tied to dm-crypt
itself, I am worried that moving these top-level things into the crypto
layer may not work, or may introduce other issues. Anyway, I will try
to do that. Thanks.

> --
> Email: Herbert Xu 
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt



-- 
Baolin.wang
Best Regards


Re: [RFC v4 2/4] crypto: Introduce CRYPTO_ALG_BULK flag

2016-06-15 Thread Herbert Xu
On Wed, Jun 15, 2016 at 03:38:02PM +0800, Baolin Wang wrote:
>
> But that means we should divide the bulk request into 512-byte
> requests and break up the mapped sg table for each request. On the
> other hand, we would have to allocate memory for each request in the
> crypto layer, something dm-crypt already does efficiently. I think
> these are really top-level questions of how to use the crypto APIs;
> do they need to move into the crypto layer? Thanks.

I have already explained to you how you can piggy-back off dm-crypt's
allocation, so what's the problem?
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [RFC v4 2/4] crypto: Introduce CRYPTO_ALG_BULK flag

2016-06-15 Thread Baolin Wang
On 15 June 2016 at 14:49, Herbert Xu  wrote:
> On Wed, Jun 15, 2016 at 02:27:04PM +0800, Baolin Wang wrote:
>>
>> After some investigation, I still think we should divide the bulk
>> request from dm-crypt into small requests (each one 512 bytes) if
>> the algorithm does not support bulk mode (like CBC). We have
>> discussed with the dm-crypt maintainers why dm-crypt always uses
>> 512 bytes as the request size in the thread below; could you
>> please check it?
>> http://www.kernelhub.org/?p=2&msg=907022
>
> That link only points to an email about an oops.

Ah, sorry. Would you check this thread?
http://lkml.iu.edu/hypermail/linux/kernel/1601.1/03829.html

>
> Digging through that thread, the only objection I have seen is about
> the fact that you have to generate a fresh IV for each sector, which
> is precisely what I'm suggesting that you do.
>
> IOW, implement the IV generators in the crypto API, and then you can
> easily generate a new IV (if necessary) for each sector.
>
>> That means if we move the IV handling into the crypto API, we still
>> cannot use the bulk interface for every algorithm. For example, we
>> still need to read/write in 512-byte units for CBC; you can't use 4k
>> or larger blocks with CBC (or most other encryption modes), because
>> if only part of a 4k block is written (and then a system crash
>> happens), CBC would corrupt the block completely. So if we map one
>> whole bio with the bulk interface in dm-crypt, we need to split it
>> into 512-byte requests in the crypto layer. I therefore don't think
>> we can handle every algorithm with the bulk interface just by moving
>> the IV handling into the crypto API. Thanks.
>
> Of course you would do CBC in 512-byte blocks, but my point is that
> you should do this in a crypto API algorithm, rather than in dm-crypt
> as we do now.  Once you implement that, dm-crypt can treat
> every algorithm as if it supported bulk processing.

But that means we should divide the bulk request into 512-byte
requests and break up the mapped sg table for each request. On the
other hand, we would have to allocate memory for each request in the
crypto layer, something dm-crypt already does efficiently. I think
these are really top-level questions of how to use the crypto APIs;
do they need to move into the crypto layer? Thanks.

>
> Cheers,
> --
> Email: Herbert Xu 
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt



-- 
Baolin.wang
Best Regards
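
To make the suggestion above concrete: a hedged sketch of the per-sector
loop an IV generator living in the crypto API could run, letting dm-crypt
submit one bulk request while CBC still sees 512-byte units. Both
encrypt_one_sector() and the plain64-style IV derivation are placeholders
for illustration, not real API.

#include <asm/unaligned.h>

#define SECTOR_SIZE 512

static int encrypt_bulk(u8 *data, unsigned int len, u64 first_sector)
{
        u64 sector = first_sector;
        unsigned int off;
        u8 iv[16];
        int err;

        for (off = 0; off + SECTOR_SIZE <= len; off += SECTOR_SIZE) {
                /* fresh IV per sector: little-endian sector number */
                memset(iv, 0, sizeof(iv));
                put_unaligned_le64(sector++, iv);

                /* placeholder: one CBC pass over a single 512-byte sector */
                err = encrypt_one_sector(data + off, SECTOR_SIZE, iv);
                if (err)
                        return err;
        }

        return 0;
}

With such a wrapper below the skcipher interface, a caller like dm-crypt
could map a whole bio into one request and leave the splitting to the
crypto layer.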


Re: [PATCH v6 3/6] crypto: AF_ALG -- add asymmetric cipher interface

2016-06-15 Thread Stephan Mueller
Am Dienstag, 14. Juni 2016, 10:22:15 schrieb Mat Martineau:

Hi Mat,

> Stephan,
> 
> On Sat, 14 May 2016, Tadeusz Struk wrote:
> > From: Stephan Mueller 
> > 
> > This patch adds the user space interface for asymmetric ciphers. The
> > interface allows the use of sendmsg as well as vmsplice to provide data.
> > 
> > This version has been rebased on top of 4.6 and a few checkpatch issues
> > have been fixed.
> > 
> > Signed-off-by: Stephan Mueller 
> > Signed-off-by: Tadeusz Struk 
> > ---
> > crypto/algif_akcipher.c |  542
> > +++ 1 file changed, 542
> > insertions(+)
> > create mode 100644 crypto/algif_akcipher.c
> > 
> > diff --git a/crypto/algif_akcipher.c b/crypto/algif_akcipher.c
> > new file mode 100644
> > index 000..6342b6e
> > --- /dev/null
> > +++ b/crypto/algif_akcipher.c
> > +
> > +static int akcipher_sendmsg(struct socket *sock, struct msghdr *msg,
> > +   size_t size)
> > +{
> > +   struct sock *sk = sock->sk;
> > +   struct alg_sock *ask = alg_sk(sk);
> > +   struct akcipher_ctx *ctx = ask->private;
> > +   struct akcipher_sg_list *sgl = &ctx->tsgl;
> > +   struct af_alg_control con = {};
> > +   long copied = 0;
> > +   int op = 0;
> > +   bool init = 0;
> > +   int err;
> > +
> > +   if (msg->msg_controllen) {
> > +   err = af_alg_cmsg_send(msg, &con);
> > +   if (err)
> > +   return err;
> > +
> > +   init = 1;
> > +   switch (con.op) {
> > +   case ALG_OP_VERIFY:
> > +   case ALG_OP_SIGN:
> > +   case ALG_OP_ENCRYPT:
> > +   case ALG_OP_DECRYPT:
> > +   op = con.op;
> > +   break;
> > +   default:
> > +   return -EINVAL;
> > +   }
> > +   }
> > +
> > +   lock_sock(sk);
> > +   if (!ctx->more && ctx->used)
> > +   goto unlock;
> 
> err might be uninitialised at this goto. Should it be set to something
> like -EALREADY to indicate that data is already queued for a different
> crypto op?

Thanks for the hint. Tadeusz, I will provide you with an updated 
algif_akcipher.c for your patchset.

I will also have a look at the comment from Andrew.

> 
> 
> 
> > +unlock:
> > +   akcipher_data_wakeup(sk);
> > +   release_sock(sk);
> > +
> > +   return err ?: copied;
> > +}
> 
> Regards,
> 
> --
> Mat Martineau
> Intel OTC


Ciao
Stephan
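
A minimal sketch of the fix Mat is pointing at, assuming -EALREADY is the
error code settled on (the actual choice is the patch author's call):

        lock_sock(sk);
        if (!ctx->more && ctx->used) {
                err = -EALREADY;        /* data already queued for another op */
                goto unlock;
        }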