Re: [PATCH] crypto: squash lines for simple wrapper functions

2016-09-13 Thread Joe Perches
On Wed, 2016-09-14 at 11:10 +0900, Masahiro Yamada wrote:
> 2016-09-13 4:44 GMT+09:00 Joe Perches :
> > On Tue, 2016-09-13 at 04:27 +0900, Masahiro Yamada wrote:
> > > Remove unneeded variables and assignments.
> > Was this found by visual inspection or some tool?
> > If it's via a tool, it's good to mention that in the changelog.
> I used Coccinelle, but I did not mention it
> in case somebody might say "then, please provide your semantic patch".
> As a Coccinelle beginner, I do not want to expose my stupid semantic patch.

If you "exposed" it, you'd likely learn something from others,
who would offer a few suggestions/tips on how to improve it.


--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] crypto: squash lines for simple wrapper functions

2016-09-13 Thread Masahiro Yamada
Hi Joe,

2016-09-13 4:44 GMT+09:00 Joe Perches :
> On Tue, 2016-09-13 at 04:27 +0900, Masahiro Yamada wrote:
>> Remove unneeded variables and assignments.
>
> Was this found by visual inspection or some tool?
>
> If it's via a tool, it's good to mention that in the changelog.


I used Coccinelle, but I did not mention it
in case somebody might say "then, please provide your semantic patch".

As a Coccinelle beginner, I do not want to expose my stupid semantic patch.



-- 
Best Regards
Masahiro Yamada


Re: [PATCH] hwrng: pasemi-rng - Use linux/io.h instead of asm/io.h

2016-09-13 Thread Michael Ellerman
Herbert Xu  writes:

> On Tue, Sep 06, 2016 at 01:58:39PM +0530, PrasannaKumar Muralidharan wrote:
>> Checkpatch.pl warns about usage of asm/io.h. Use linux/io.h instead.
>> 
>> Signed-off-by: PrasannaKumar Muralidharan 
>
> Patch applied.  Thanks.

Oops I merged it too, my bad.

Hopefully git will work out the resolution.

cheers


[BUG] crypto: atmel-aes - error when compiling with VERBOSE_DEBUG enabled

2016-09-13 Thread levent demir
Hello,

If you enable VERBOSE_DEBUG and compile, you will get the following
error:

drivers/crypto/atmel-aes.c:323:5: error: too few arguments to function
'atmel_aes_reg_name'
 atmel_aes_reg_name(offset, tmp));
 ^
include/linux/device.h:1306:41: note: in definition of macro 'dev_vdbg'
   dev_printk(KERN_DEBUG, dev, format, ##arg); \
 ^
drivers/crypto/atmel-aes.c:205:20: note: declared here
 static const char *atmel_aes_reg_name(u32 offset, char *tmp, size_t sz)

Indeed, in the atmel_aes_write function the call to atmel_aes_reg_name
passes only two arguments instead of three:

atmel_aes_reg_name(offset, tmp));

To fix it, one only has to pass the size of tmp as the third argument:

atmel_aes_reg_name(offset, tmp, sizeof(tmp)));



--- atmel-aes.c 2016-09-13 17:01:11.199014981 +0200
+++ atmel-aes-fixed.c   2016-09-13 17:01:54.056389455 +0200
@@ -317,7 +317,7 @@
char tmp[16];
 
dev_vdbg(dd->dev, "write 0x%08x into %s\n", value,
-atmel_aes_reg_name(offset, tmp));
+   atmel_aes_reg_name(offset, tmp, sizeof(tmp)));
}
 #endif /* VERBOSE_DEBUG */






Re: [PATCH v2 2/2] crypto: qat - fix resource release omissions

2016-09-13 Thread Quentin Lambert



On 13/09/2016 14:40, Herbert Xu wrote:

On Tue, Sep 06, 2016 at 11:18:51AM +0100, Giovanni Cabiddu wrote:

---8<---
Subject: [PATCH] crypto: qat - fix leak on error path

Fix a memory leak in an error path in uc loader.

Signed-off-by: Giovanni Cabiddu 

Patch applied.  Thanks.

Sorry, I completely missed Giovanni's message. Good work!

Quentin


[PATCH v7 4/9] crypto: acomp - add support for lzo via scomp

2016-09-13 Thread Giovanni Cabiddu
Add scomp backend for lzo compression algorithm

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Kconfig |1 +
 crypto/lzo.c   |  146 +++-
 2 files changed, 134 insertions(+), 13 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index f553f66..d275591 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1589,6 +1589,7 @@ config CRYPTO_DEFLATE
 config CRYPTO_LZO
tristate "LZO compression algorithm"
select CRYPTO_ALGAPI
+   select CRYPTO_ACOMP2
select LZO_COMPRESS
select LZO_DECOMPRESS
help
diff --git a/crypto/lzo.c b/crypto/lzo.c
index c3f3dd9..6faed95 100644
--- a/crypto/lzo.c
+++ b/crypto/lzo.c
@@ -22,40 +22,61 @@
 #include 
 #include 
 #include 
+#include 
+#include 
+
+#define LZO_SCRATCH_SIZE   131072
 
 struct lzo_ctx {
void *lzo_comp_mem;
 };
 
+static void * __percpu *lzo_src_scratches;
+static void * __percpu *lzo_dst_scratches;
+
+static void *lzo_alloc_ctx(struct crypto_scomp *tfm)
+{
+   void *ctx;
+
+   ctx = kmalloc(LZO1X_MEM_COMPRESS, GFP_KERNEL | __GFP_NOWARN);
+   if (!ctx)
+   ctx = vmalloc(LZO1X_MEM_COMPRESS);
+   if (!ctx)
+   return ERR_PTR(-ENOMEM);
+
+   return ctx;
+}
+
 static int lzo_init(struct crypto_tfm *tfm)
 {
struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
 
-   ctx->lzo_comp_mem = kmalloc(LZO1X_MEM_COMPRESS,
-   GFP_KERNEL | __GFP_NOWARN);
-   if (!ctx->lzo_comp_mem)
-   ctx->lzo_comp_mem = vmalloc(LZO1X_MEM_COMPRESS);
-   if (!ctx->lzo_comp_mem)
+   ctx->lzo_comp_mem = lzo_alloc_ctx(NULL);
+   if (IS_ERR(ctx->lzo_comp_mem))
return -ENOMEM;
 
return 0;
 }
 
+static void lzo_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+   kvfree(ctx);
+}
+
 static void lzo_exit(struct crypto_tfm *tfm)
 {
struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
 
-   kvfree(ctx->lzo_comp_mem);
+   lzo_free_ctx(NULL, ctx->lzo_comp_mem);
 }
 
-static int lzo_compress(struct crypto_tfm *tfm, const u8 *src,
-   unsigned int slen, u8 *dst, unsigned int *dlen)
+static int __lzo_compress(const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen, void *ctx)
 {
-   struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */
int err;
 
-   err = lzo1x_1_compress(src, slen, dst, &tmp_len, ctx->lzo_comp_mem);
+   err = lzo1x_1_compress(src, slen, dst, &tmp_len, ctx);
 
if (err != LZO_E_OK)
return -EINVAL;
@@ -64,8 +85,16 @@ static int lzo_compress(struct crypto_tfm *tfm, const u8 *src,
return 0;
 }
 
-static int lzo_decompress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int lzo_compress(struct crypto_tfm *tfm, const u8 *src,
+   unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+   struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   return __lzo_compress(src, slen, dst, dlen, ctx->lzo_comp_mem);
+}
+
+static int __lzo_decompress(const u8 *src, unsigned int slen,
+   u8 *dst, unsigned int *dlen)
 {
int err;
size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */
@@ -77,7 +106,56 @@ static int lzo_decompress(struct crypto_tfm *tfm, const u8 *src,
 
*dlen = tmp_len;
return 0;
+}
 
+static int lzo_decompress(struct crypto_tfm *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+   return __lzo_decompress(src, slen, dst, dlen);
+}
+
+static int lzo_scomp_comp_decomp(struct crypto_scomp *tfm,
+struct scatterlist *src, unsigned int slen,
+struct scatterlist *dst, unsigned int *dlen,
+void *ctx, int dir)
+{
+   const int cpu = get_cpu();
+   u8 *scratch_src = *per_cpu_ptr(lzo_src_scratches, cpu);
+   u8 *scratch_dst = *per_cpu_ptr(lzo_dst_scratches, cpu);
+   int ret;
+
+   if (slen > LZO_SCRATCH_SIZE || *dlen > LZO_SCRATCH_SIZE) {
+   ret = -EINVAL;
+   goto out;
+   }
+
+   scatterwalk_map_and_copy(scratch_src, src, 0, slen, 0);
+   if (dir)
+   ret = __lzo_compress(scratch_src, slen, scratch_dst, dlen, ctx);
+   else
+   ret = __lzo_decompress(scratch_src, slen, scratch_dst, dlen);
+   if (!ret)
+   scatterwalk_map_and_copy(scratch_dst, dst, 0, *dlen, 1);
+
+out:
+   put_cpu();
+   return ret;
+}
+
+static int lzo_scomp_compress(struct crypto_scomp *tfm,
+ struct scatterlist *src, unsigned int slen,
+ struct scatterlist *dst, unsigned int *dlen,
+ void *ctx)
+{
+   return lzo_scomp_comp_deco

[PATCH v7 7/9] crypto: acomp - add support for 842 via scomp

2016-09-13 Thread Giovanni Cabiddu
Add scomp backend for 842 compression algorithm

Signed-off-by: Giovanni Cabiddu 
---
 crypto/842.c   |  135 +++-
 crypto/Kconfig |1 +
 2 files changed, 134 insertions(+), 2 deletions(-)

diff --git a/crypto/842.c b/crypto/842.c
index 98e387e..d0894e2 100644
--- a/crypto/842.c
+++ b/crypto/842.c
@@ -30,12 +30,53 @@
 #include 
 #include 
 #include 
+#include 
 #include 
+#include 
+
+#define SW842_SCRATCH_SIZE 131072
 
 struct crypto842_ctx {
-   char wmem[SW842_MEM_COMPRESS];  /* working memory for compress */
+   void *wmem; /* working memory for compress */
 };
 
+static void * __percpu *sw842_src_scratches;
+static void * __percpu *sw842_dst_scratches;
+
+static void *crypto842_alloc_ctx(struct crypto_scomp *tfm)
+{
+   void *ctx;
+
+   ctx = kmalloc(SW842_MEM_COMPRESS, GFP_KERNEL);
+   if (!ctx)
+   return ERR_PTR(-ENOMEM);
+
+   return ctx;
+}
+
+static int crypto842_init(struct crypto_tfm *tfm)
+{
+   struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   ctx->wmem = crypto842_alloc_ctx(NULL);
+   if (IS_ERR(ctx->wmem))
+   return -ENOMEM;
+
+   return 0;
+}
+
+static void crypto842_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+   kfree(ctx);
+}
+
+static void crypto842_exit(struct crypto_tfm *tfm)
+{
+   struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   crypto842_free_ctx(NULL, ctx->wmem);
+}
+
 static int crypto842_compress(struct crypto_tfm *tfm,
  const u8 *src, unsigned int slen,
  u8 *dst, unsigned int *dlen)
@@ -52,6 +93,51 @@ static int crypto842_decompress(struct crypto_tfm *tfm,
return sw842_decompress(src, slen, dst, dlen);
 }
 
+static int sw842_scomp_comp_decomp(struct crypto_scomp *tfm,
+  struct scatterlist *src, unsigned int slen,
+  struct scatterlist *dst, unsigned int *dlen,
+  void *ctx, int dir)
+{
+   const int cpu = get_cpu();
+   u8 *scratch_src = *per_cpu_ptr(sw842_src_scratches, cpu);
+   u8 *scratch_dst = *per_cpu_ptr(sw842_dst_scratches, cpu);
+   int ret;
+
+   if (slen > SW842_SCRATCH_SIZE || *dlen > SW842_SCRATCH_SIZE) {
+   ret = -EINVAL;
+   goto out;
+   }
+
+   scatterwalk_map_and_copy(scratch_src, src, 0, slen, 0);
+   if (dir)
+   ret = sw842_compress(scratch_src, slen, scratch_dst, dlen,
+ctx);
+   else
+   ret = sw842_decompress(scratch_src, slen, scratch_dst, dlen);
+   if (!ret)
+   scatterwalk_map_and_copy(scratch_dst, dst, 0, *dlen, 1);
+
+out:
+   put_cpu();
+   return ret;
+}
+
+static int sw842_scomp_compress(struct crypto_scomp *tfm,
+   struct scatterlist *src, unsigned int slen,
+   struct scatterlist *dst, unsigned int *dlen,
+   void *ctx)
+{
+   return sw842_scomp_comp_decomp(tfm, src, slen, dst, dlen, ctx, 1);
+}
+
+static int sw842_scomp_decompress(struct crypto_scomp *tfm,
+ struct scatterlist *src, unsigned int slen,
+ struct scatterlist *dst, unsigned int *dlen,
+ void *ctx)
+{
+   return sw842_scomp_comp_decomp(tfm, src, slen, dst, dlen, ctx, 0);
+}
+
 static struct crypto_alg alg = {
.cra_name   = "842",
.cra_driver_name= "842-generic",
@@ -59,20 +145,65 @@ static struct crypto_alg alg = {
.cra_flags  = CRYPTO_ALG_TYPE_COMPRESS,
.cra_ctxsize= sizeof(struct crypto842_ctx),
.cra_module = THIS_MODULE,
+   .cra_init   = crypto842_init,
+   .cra_exit   = crypto842_exit,
.cra_u  = { .compress = {
.coa_compress   = crypto842_compress,
.coa_decompress = crypto842_decompress } }
 };
 
+static struct scomp_alg scomp = {
+   .alloc_ctx  = crypto842_alloc_ctx,
+   .free_ctx   = crypto842_free_ctx,
+   .compress   = sw842_scomp_compress,
+   .decompress = sw842_scomp_decompress,
+   .base   = {
+   .cra_name   = "842",
+   .cra_driver_name = "842-scomp",
+   .cra_priority= 100,
+   .cra_module  = THIS_MODULE,
+   }
+};
+
 static int __init crypto842_mod_init(void)
 {
-   return crypto_register_alg(&alg);
+   int ret;
+
+   sw842_src_scratches = crypto_scomp_alloc_scratches(SW842_SCRATCH_SIZE);
+   if (!sw842_src_scratches)
+   return -ENOMEM;
+
+   sw842_dst_scratches = crypto_scomp_alloc_scratches(SW842_SCRATCH_SIZE);
+   if (!sw842_dst_scratches) {
+  

[PATCH v7 5/9] crypto: acomp - add support for lz4 via scomp

2016-09-13 Thread Giovanni Cabiddu
Add scomp backend for lz4 compression algorithm

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Kconfig |1 +
 crypto/lz4.c   |  147 
 2 files changed, 138 insertions(+), 10 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index d275591..e95cbbd 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1606,6 +1606,7 @@ config CRYPTO_842
 config CRYPTO_LZ4
tristate "LZ4 compression algorithm"
select CRYPTO_ALGAPI
+   select CRYPTO_ACOMP2
select LZ4_COMPRESS
select LZ4_DECOMPRESS
help
diff --git a/crypto/lz4.c b/crypto/lz4.c
index aefbcea..8e4915d 100644
--- a/crypto/lz4.c
+++ b/crypto/lz4.c
@@ -22,37 +22,60 @@
 #include 
 #include 
 #include 
+#include 
 #include 
+#include 
+
+#define LZ4_SCRATCH_SIZE   131072
 
 struct lz4_ctx {
void *lz4_comp_mem;
 };
 
+static void * __percpu *lz4_src_scratches;
+static void * __percpu *lz4_dst_scratches;
+
+static void *lz4_alloc_ctx(struct crypto_scomp *tfm)
+{
+   void *ctx;
+
+   ctx = vmalloc(LZ4_MEM_COMPRESS);
+   if (!ctx)
+   return ERR_PTR(-ENOMEM);
+
+   return ctx;
+}
+
 static int lz4_init(struct crypto_tfm *tfm)
 {
struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
 
-   ctx->lz4_comp_mem = vmalloc(LZ4_MEM_COMPRESS);
-   if (!ctx->lz4_comp_mem)
+   ctx->lz4_comp_mem = lz4_alloc_ctx(NULL);
+   if (IS_ERR(ctx->lz4_comp_mem))
return -ENOMEM;
 
return 0;
 }
 
+static void lz4_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+   vfree(ctx);
+}
+
 static void lz4_exit(struct crypto_tfm *tfm)
 {
struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
-   vfree(ctx->lz4_comp_mem);
+
+   lz4_free_ctx(NULL, ctx->lz4_comp_mem);
 }
 
-static int lz4_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
-   unsigned int slen, u8 *dst, unsigned int *dlen)
+static int __lz4_compress_crypto(const u8 *src, unsigned int slen,
+u8 *dst, unsigned int *dlen, void *ctx)
 {
-   struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
size_t tmp_len = *dlen;
int err;
 
-   err = lz4_compress(src, slen, dst, &tmp_len, ctx->lz4_comp_mem);
+   err = lz4_compress(src, slen, dst, &tmp_len, ctx);
 
if (err < 0)
return -EINVAL;
@@ -61,8 +84,16 @@ static int lz4_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
return 0;
 }
 
-static int lz4_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int lz4_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
+  unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+   struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   return __lz4_compress_crypto(src, slen, dst, dlen, ctx->lz4_comp_mem);
+}
+
+static int __lz4_decompress_crypto(const u8 *src, unsigned int slen,
+  u8 *dst, unsigned int *dlen, void *ctx)
 {
int err;
size_t tmp_len = *dlen;
@@ -76,6 +107,59 @@ static int lz4_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
return err;
 }
 
+static int lz4_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
+unsigned int slen, u8 *dst,
+unsigned int *dlen)
+{
+   return __lz4_decompress_crypto(src, slen, dst, dlen, NULL);
+}
+
+static int lz4_scomp_comp_decomp(struct crypto_scomp *tfm,
+struct scatterlist *src, unsigned int slen,
+struct scatterlist *dst, unsigned int *dlen,
+void *ctx, int dir)
+{
+   const int cpu = get_cpu();
+   u8 *scratch_src = *per_cpu_ptr(lz4_src_scratches, cpu);
+   u8 *scratch_dst = *per_cpu_ptr(lz4_dst_scratches, cpu);
+   int ret;
+
+   if (slen > LZ4_SCRATCH_SIZE || *dlen > LZ4_SCRATCH_SIZE) {
+   ret = -EINVAL;
+   goto out;
+   }
+
+   scatterwalk_map_and_copy(scratch_src, src, 0, slen, 0);
+   if (dir)
+   ret = __lz4_compress_crypto(scratch_src, slen, scratch_dst,
+   dlen, ctx);
+   else
+   ret = __lz4_decompress_crypto(scratch_src, slen, scratch_dst,
+ dlen, NULL);
+   if (!ret)
+   scatterwalk_map_and_copy(scratch_dst, dst, 0, *dlen, 1);
+
+out:
+   put_cpu();
+   return ret;
+}
+
+static int lz4_scomp_compress(struct crypto_scomp *tfm,
+ struct scatterlist *src, unsigned int slen,
+ struct scatterlist *dst, unsigned int *dlen,
+ void *ctx)
+{
+   return lz4_scomp_comp_decomp(tfm, src, slen, dst, dlen, ctx, 1);
+}
+
+static int lz4_scomp_decompress(struct crypto_scomp *tfm,
+

[PATCH v7 0/9] crypto: asynchronous compression api

2016-09-13 Thread Giovanni Cabiddu
The following patch set introduces acomp, a generic asynchronous
(de)compression api with support for SG lists.
We propose a new crypto type called crypto_acomp_type, a new struct acomp_alg
and struct crypto_acomp, together with a number of helper functions to register
acomp type algorithms and allocate tfm instances.
This interface will allow the following operations:

int (*compress)(struct acomp_req *req);
int (*decompress)(struct acomp_req *req);

Together with acomp we propose a new driver-side interface, scomp, which
targets legacy synchronous implementations. We converted all compression
algorithms available in LKCF to use this interface so that they will be
accessible through the acomp api.

Changes in v7:
- removed linearization of SG lists and per-request vmalloc allocations in
  scomp layer
- modified scomp internal API to use SG lists
- introduced per-cpu cache of 128K scratch buffers allocated using vmalloc 
  in legacy scomp algorithms

Changes in v6:
- changed acomp_request_alloc prototype by removing gfp parameter. 
  acomp_request_alloc will always use GFP_KERNEL

Changes in v5:
- removed qdecompress api, no longer needed
- removed produced and consumed counters in acomp_req
- added crypto_has_acomp function 

Changes in v4:
- added qdecompress api, a front-end for decompression algorithms which
  do not need additional vmalloc work space

Changes in v3:
- added driver-side scomp interface
- provided support for lzo, lz4, lz4hc, 842, deflate compression algorithms
  via the acomp api (through scomp)
- extended testmgr to support acomp
- removed extended acomp api for supporting deflate algorithm parameters
  (will be enhanced and re-proposed in future)
Note that (2) to (7) are a rework of Joonsoo Kim's scomp patches.

Changes in v2:
- added compression and decompression request sizes in acomp_alg
  in order to enable noctx support
- extended api with helpers to allocate compression and
  decompression requests

Changes from initial submit:
- added consumed and produced fields to acomp_req
- extended api to support configuration of deflate compressors

--- 
Giovanni Cabiddu (9):
  crypto: add asynchronous compression api
  crypto: add driver-side scomp interface
  crypto: scomp - add scratch buffers allocator and deallocator
  crypto: acomp - add support for lzo via scomp
  crypto: acomp - add support for lz4 via scomp
  crypto: acomp - add support for lz4hc via scomp
  crypto: acomp - add support for 842 via scomp
  crypto: acomp - add support for deflate via scomp
  crypto: acomp - update testmgr with support for acomp

 crypto/842.c|  135 ++-
 crypto/Kconfig  |   15 ++
 crypto/Makefile |3 +
 crypto/acompress.c  |  163 ++
 crypto/crypto_user.c|   19 +++
 crypto/deflate.c|  166 +--
 crypto/lz4.c|  147 +++--
 crypto/lz4hc.c  |  147 +++--
 crypto/lzo.c|  146 ++--
 crypto/scompress.c  |  225 +++
 crypto/testmgr.c|  158 --
 include/crypto/acompress.h  |  253 +++
 include/crypto/internal/acompress.h |   81 +++
 include/crypto/internal/scompress.h |  139 +++
 include/linux/crypto.h  |3 +
 15 files changed, 1742 insertions(+), 58 deletions(-)
 create mode 100644 crypto/acompress.c
 create mode 100644 crypto/scompress.c
 create mode 100644 include/crypto/acompress.h
 create mode 100644 include/crypto/internal/acompress.h
 create mode 100644 include/crypto/internal/scompress.h

-- 
1.7.4.1



[PATCH v7 2/9] crypto: add driver-side scomp interface

2016-09-13 Thread Giovanni Cabiddu
Add a synchronous back-end (scomp) to acomp. This allows the compression
algorithms already present in LKCF to be easily exposed via acomp

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Makefile |1 +
 crypto/acompress.c  |   49 +-
 crypto/scompress.c  |  184 +++
 include/crypto/acompress.h  |   32 +++
 include/crypto/internal/acompress.h |   15 +++
 include/crypto/internal/scompress.h |  137 ++
 include/linux/crypto.h  |2 +
 7 files changed, 398 insertions(+), 22 deletions(-)
 create mode 100644 crypto/scompress.c
 create mode 100644 include/crypto/internal/scompress.h

diff --git a/crypto/Makefile b/crypto/Makefile
index 0933dc6..5c83f3d 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -51,6 +51,7 @@ rsa_generic-y += rsa-pkcs1pad.o
 obj-$(CONFIG_CRYPTO_RSA) += rsa_generic.o
 
 obj-$(CONFIG_CRYPTO_ACOMP2) += acompress.o
+obj-$(CONFIG_CRYPTO_ACOMP2) += scompress.o
 
 cryptomgr-y := algboss.o testmgr.o
 
diff --git a/crypto/acompress.c b/crypto/acompress.c
index f24fef3..17200b9 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -22,8 +22,11 @@
 #include 
 #include 
 #include 
+#include 
 #include "internal.h"
 
+static const struct crypto_type crypto_acomp_type;
+
 #ifdef CONFIG_NET
 static int crypto_acomp_report(struct sk_buff *skb, struct crypto_alg *alg)
 {
@@ -67,6 +70,13 @@ static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)
struct crypto_acomp *acomp = __crypto_acomp_tfm(tfm);
struct acomp_alg *alg = crypto_acomp_alg(acomp);
 
+   if (tfm->__crt_alg->cra_type != &crypto_acomp_type)
+   return crypto_init_scomp_ops_async(tfm);
+
+   acomp->compress = alg->compress;
+   acomp->decompress = alg->decompress;
+   acomp->reqsize = alg->reqsize;
+
if (alg->exit)
acomp->base.exit = crypto_acomp_exit_tfm;
 
@@ -76,15 +86,25 @@ static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)
return 0;
 }
 
+static unsigned int crypto_acomp_extsize(struct crypto_alg *alg)
+{
+   int extsize = crypto_alg_extsize(alg);
+
+   if (alg->cra_type != &crypto_acomp_type)
+   extsize += sizeof(struct crypto_scomp *);
+
+   return extsize;
+}
+
 static const struct crypto_type crypto_acomp_type = {
-   .extsize = crypto_alg_extsize,
+   .extsize = crypto_acomp_extsize,
.init_tfm = crypto_acomp_init_tfm,
 #ifdef CONFIG_PROC_FS
.show = crypto_acomp_show,
 #endif
.report = crypto_acomp_report,
.maskclear = ~CRYPTO_ALG_TYPE_MASK,
-   .maskset = CRYPTO_ALG_TYPE_MASK,
+   .maskset = CRYPTO_ALG_TYPE_ACOMPRESS_MASK,
.type = CRYPTO_ALG_TYPE_ACOMPRESS,
.tfmsize = offsetof(struct crypto_acomp, base),
 };
@@ -96,6 +116,31 @@ struct crypto_acomp *crypto_alloc_acomp(const char *alg_name, u32 type,
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_acomp);
 
+struct acomp_req *acomp_request_alloc(struct crypto_acomp *acomp)
+{
+   struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
+   struct acomp_req *req;
+
+   req = __acomp_request_alloc(acomp);
+   if (req && (tfm->__crt_alg->cra_type != &crypto_acomp_type))
+   return crypto_acomp_scomp_alloc_ctx(req);
+
+   return req;
+}
+EXPORT_SYMBOL_GPL(acomp_request_alloc);
+
+void acomp_request_free(struct acomp_req *req)
+{
+   struct crypto_acomp *acomp = crypto_acomp_reqtfm(req);
+   struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
+
+   if (tfm->__crt_alg->cra_type != &crypto_acomp_type)
+   crypto_acomp_scomp_free_ctx(req);
+
+   __acomp_request_free(req);
+}
+EXPORT_SYMBOL_GPL(acomp_request_free);
+
 int crypto_register_acomp(struct acomp_alg *alg)
 {
struct crypto_alg *base = &alg->base;
diff --git a/crypto/scompress.c b/crypto/scompress.c
new file mode 100644
index 000..9f426cc
--- /dev/null
+++ b/crypto/scompress.c
@@ -0,0 +1,184 @@
+/*
+ * Synchronous Compression operations
+ *
+ * Copyright 2015 LG Electronics Inc.
+ * Copyright (c) 2016, Intel Corporation
+ * Author: Giovanni Cabiddu 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "internal.h"
+
+static const struct crypto_type crypto_scomp_type;
+
+#ifdef CONFIG_NET
+static int crypto_scomp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+   struct crypto_report_comp rscomp;
+
+   strncpy(rscomp.type, "scomp", sizeof(rscomp.type));
+
+   if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS,
+   sizeof(struct crypto_report_comp), &rscomp))
+   goto nla_put

[PATCH v7 8/9] crypto: acomp - add support for deflate via scomp

2016-09-13 Thread Giovanni Cabiddu
Add scomp backend for deflate compression algorithm

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Kconfig   |1 +
 crypto/deflate.c |  166 ++---
 2 files changed, 157 insertions(+), 10 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index ac7b519..7d4808f 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1578,6 +1578,7 @@ comment "Compression"
 config CRYPTO_DEFLATE
tristate "Deflate compression algorithm"
select CRYPTO_ALGAPI
+   select CRYPTO_ACOMP2
select ZLIB_INFLATE
select ZLIB_DEFLATE
help
diff --git a/crypto/deflate.c b/crypto/deflate.c
index 95d8d37..f924031 100644
--- a/crypto/deflate.c
+++ b/crypto/deflate.c
@@ -32,16 +32,22 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 
 #define DEFLATE_DEF_LEVEL  Z_DEFAULT_COMPRESSION
 #define DEFLATE_DEF_WINBITS11
 #define DEFLATE_DEF_MEMLEVEL   MAX_MEM_LEVEL
+#define DEFLATE_SCR_SIZE   131072
 
 struct deflate_ctx {
struct z_stream_s comp_stream;
struct z_stream_s decomp_stream;
 };
 
+static void * __percpu *deflate_src_scratches;
+static void * __percpu *deflate_dst_scratches;
+
 static int deflate_comp_init(struct deflate_ctx *ctx)
 {
int ret = 0;
@@ -101,9 +107,8 @@ static void deflate_decomp_exit(struct deflate_ctx *ctx)
vfree(ctx->decomp_stream.workspace);
 }
 
-static int deflate_init(struct crypto_tfm *tfm)
+static int __deflate_init(void *ctx)
 {
-   struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
int ret;
 
ret = deflate_comp_init(ctx);
@@ -116,19 +121,55 @@ out:
return ret;
 }
 
-static void deflate_exit(struct crypto_tfm *tfm)
+static void *deflate_alloc_ctx(struct crypto_scomp *tfm)
+{
+   struct deflate_ctx *ctx;
+   int ret;
+
+   ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+   if (!ctx)
+   return ERR_PTR(-ENOMEM);
+
+   ret = __deflate_init(ctx);
+   if (ret) {
+   kfree(ctx);
+   return ERR_PTR(ret);
+   }
+
+   return ctx;
+}
+
+static int deflate_init(struct crypto_tfm *tfm)
 {
struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
 
+   return __deflate_init(ctx);
+}
+
+static void __deflate_exit(void *ctx)
+{
deflate_comp_exit(ctx);
deflate_decomp_exit(ctx);
 }
 
-static int deflate_compress(struct crypto_tfm *tfm, const u8 *src,
-   unsigned int slen, u8 *dst, unsigned int *dlen)
+static void deflate_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+   __deflate_exit(ctx);
+   kzfree(ctx);
+}
+
+static void deflate_exit(struct crypto_tfm *tfm)
+{
+   struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   __deflate_exit(ctx);
+}
+
+static int __deflate_compress(const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen, void *ctx)
 {
int ret = 0;
-   struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
+   struct deflate_ctx *dctx = ctx;
struct z_stream_s *stream = &dctx->comp_stream;
 
ret = zlib_deflateReset(stream);
@@ -153,12 +194,20 @@ out:
return ret;
 }
 
-static int deflate_decompress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int deflate_compress(struct crypto_tfm *tfm, const u8 *src,
+   unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+   struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
+
+   return __deflate_compress(src, slen, dst, dlen, dctx);
+}
+
+static int __deflate_decompress(const u8 *src, unsigned int slen,
+   u8 *dst, unsigned int *dlen, void *ctx)
 {
 
int ret = 0;
-   struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
+   struct deflate_ctx *dctx = ctx;
struct z_stream_s *stream = &dctx->decomp_stream;
 
ret = zlib_inflateReset(stream);
@@ -194,6 +243,61 @@ out:
return ret;
 }
 
+static int deflate_decompress(struct crypto_tfm *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+   struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
+
+   return __deflate_decompress(src, slen, dst, dlen, dctx);
+}
+
+static int deflate_scomp_comp_decomp(struct crypto_scomp *tfm,
+struct scatterlist *src,
+unsigned int slen,
+struct scatterlist *dst,
+unsigned int *dlen, void *ctx, int dir)
+{
+   const int cpu = get_cpu();
+   u8 *scratch_src = *per_cpu_ptr(deflate_src_scratches, cpu);
+   u8 *scratch_dst = *per_cpu_ptr(deflate_dst_scratches, cpu);
+   int ret;
+
+   if (slen > DEFLATE_SCR_SIZE || *dlen > DEFLATE_SCR_SIZE) {
+   ret = -EINVAL;
+   goto out;
+   }
+
+   scatterwalk_map_and_copy(scratc

[PATCH v7 1/9] crypto: add asynchronous compression api

2016-09-13 Thread Giovanni Cabiddu
Add acomp, an asynchronous compression api that uses scatterlist
buffers

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Kconfig  |   10 ++
 crypto/Makefile |2 +
 crypto/acompress.c  |  118 
 crypto/crypto_user.c|   19 +++
 include/crypto/acompress.h  |  261 +++
 include/crypto/internal/acompress.h |   66 +
 include/linux/crypto.h  |1 +
 7 files changed, 477 insertions(+), 0 deletions(-)
 create mode 100644 crypto/acompress.c
 create mode 100644 include/crypto/acompress.h
 create mode 100644 include/crypto/internal/acompress.h

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 84d7148..f553f66 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -102,6 +102,15 @@ config CRYPTO_KPP
select CRYPTO_ALGAPI
select CRYPTO_KPP2
 
+config CRYPTO_ACOMP2
+   tristate
+   select CRYPTO_ALGAPI2
+
+config CRYPTO_ACOMP
+   tristate
+   select CRYPTO_ALGAPI
+   select CRYPTO_ACOMP2
+
 config CRYPTO_RSA
tristate "RSA algorithm"
select CRYPTO_AKCIPHER
@@ -138,6 +147,7 @@ config CRYPTO_MANAGER2
select CRYPTO_BLKCIPHER2
select CRYPTO_AKCIPHER2
select CRYPTO_KPP2
+   select CRYPTO_ACOMP2
 
 config CRYPTO_USER
tristate "Userspace cryptographic algorithm configuration"
diff --git a/crypto/Makefile b/crypto/Makefile
index 99cc64a..0933dc6 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -50,6 +50,8 @@ rsa_generic-y += rsa_helper.o
 rsa_generic-y += rsa-pkcs1pad.o
 obj-$(CONFIG_CRYPTO_RSA) += rsa_generic.o
 
+obj-$(CONFIG_CRYPTO_ACOMP2) += acompress.o
+
 cryptomgr-y := algboss.o testmgr.o
 
 obj-$(CONFIG_CRYPTO_MANAGER2) += cryptomgr.o
diff --git a/crypto/acompress.c b/crypto/acompress.c
new file mode 100644
index 000..f24fef3
--- /dev/null
+++ b/crypto/acompress.c
@@ -0,0 +1,118 @@
+/*
+ * Asynchronous Compression operations
+ *
+ * Copyright (c) 2016, Intel Corporation
+ * Authors: Weigang Li 
+ *  Giovanni Cabiddu 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "internal.h"
+
+#ifdef CONFIG_NET
+static int crypto_acomp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+   struct crypto_report_comp racomp;
+
+   strncpy(racomp.type, "acomp", sizeof(racomp.type));
+
+   if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS,
+   sizeof(struct crypto_report_comp), &racomp))
+   goto nla_put_failure;
+   return 0;
+
+nla_put_failure:
+   return -EMSGSIZE;
+}
+#else
+static int crypto_acomp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+   return -ENOSYS;
+}
+#endif
+
+static void crypto_acomp_show(struct seq_file *m, struct crypto_alg *alg)
+   __attribute__ ((unused));
+
+static void crypto_acomp_show(struct seq_file *m, struct crypto_alg *alg)
+{
+   seq_puts(m, "type : acomp\n");
+}
+
+static void crypto_acomp_exit_tfm(struct crypto_tfm *tfm)
+{
+   struct crypto_acomp *acomp = __crypto_acomp_tfm(tfm);
+   struct acomp_alg *alg = crypto_acomp_alg(acomp);
+
+   alg->exit(acomp);
+}
+
+static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)
+{
+   struct crypto_acomp *acomp = __crypto_acomp_tfm(tfm);
+   struct acomp_alg *alg = crypto_acomp_alg(acomp);
+
+   if (alg->exit)
+   acomp->base.exit = crypto_acomp_exit_tfm;
+
+   if (alg->init)
+   return alg->init(acomp);
+
+   return 0;
+}
+
+static const struct crypto_type crypto_acomp_type = {
+   .extsize = crypto_alg_extsize,
+   .init_tfm = crypto_acomp_init_tfm,
+#ifdef CONFIG_PROC_FS
+   .show = crypto_acomp_show,
+#endif
+   .report = crypto_acomp_report,
+   .maskclear = ~CRYPTO_ALG_TYPE_MASK,
+   .maskset = CRYPTO_ALG_TYPE_MASK,
+   .type = CRYPTO_ALG_TYPE_ACOMPRESS,
+   .tfmsize = offsetof(struct crypto_acomp, base),
+};
+
+struct crypto_acomp *crypto_alloc_acomp(const char *alg_name, u32 type,
+   u32 mask)
+{
+   return crypto_alloc_tfm(alg_name, &crypto_acomp_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_alloc_acomp);
+
+int crypto_register_acomp(struct acomp_alg *alg)
+{
+   struct crypto_alg *base = &alg->base;
+
+   base->cra_type = &crypto_acomp_type;
+   base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
+   base->cra_flags |= CRYPTO_ALG_TYPE_ACOMPRESS;
+
+   return crypto_register_alg(base);
+}
+EXPORT_SYMBOL_GPL(crypto_register_acomp);
+
+int crypto_unregister_acomp(struct acomp_alg *alg)
+{
+   return crypto_unregister_alg(&alg->base);
+}
+EXPORT_SYMBOL_GPL(crypto_unregister_acomp);

[PATCH v7 9/9] crypto: acomp - update testmgr with support for acomp

2016-09-13 Thread Giovanni Cabiddu
Add tests to the test manager for algorithms exposed through the acomp
api

Signed-off-by: Giovanni Cabiddu 
---
 crypto/testmgr.c |  158 +-
 1 files changed, 145 insertions(+), 13 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 0b01c3d..65a2d3d 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -33,6 +33,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "internal.h"
 
@@ -1439,6 +1440,121 @@ out:
return ret;
 }
 
+static int test_acomp(struct crypto_acomp *tfm, struct comp_testvec *ctemplate,
+ struct comp_testvec *dtemplate, int ctcount, int dtcount)
+{
+   const char *algo = crypto_tfm_alg_driver_name(crypto_acomp_tfm(tfm));
+   unsigned int i;
+   char output[COMP_BUF_SIZE];
+   int ret;
+   struct scatterlist src, dst;
+   struct acomp_req *req;
+   struct tcrypt_result result;
+
+   for (i = 0; i < ctcount; i++) {
+   unsigned int dlen = COMP_BUF_SIZE;
+   int ilen = ctemplate[i].inlen;
+
+   memset(output, 0, sizeof(output));
+   init_completion(&result.completion);
+   sg_init_one(&src, ctemplate[i].input, ilen);
+   sg_init_one(&dst, output, dlen);
+
+   req = acomp_request_alloc(tfm);
+   if (!req) {
+   pr_err("alg: acomp: request alloc failed for %s\n",
+  algo);
+   ret = -ENOMEM;
+   goto out;
+   }
+
+   acomp_request_set_params(req, &src, &dst, ilen, dlen);
+   acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+  tcrypt_complete, &result);
+
+   ret = wait_async_op(&result, crypto_acomp_compress(req));
+   if (ret) {
+   pr_err("alg: acomp: compression failed on test %d for %s: ret=%d\n",
+  i + 1, algo, -ret);
+   acomp_request_free(req);
+   goto out;
+   }
+
+   if (req->dlen != ctemplate[i].outlen) {
+   pr_err("alg: acomp: Compression test %d failed for %s: output len = %d\n",
+  i + 1, algo, req->dlen);
+   ret = -EINVAL;
+   acomp_request_free(req);
+   goto out;
+   }
+
+   if (memcmp(output, ctemplate[i].output, req->dlen)) {
+   pr_err("alg: acomp: Compression test %d failed for %s\n",
+  i + 1, algo);
+   hexdump(output, req->dlen);
+   ret = -EINVAL;
+   acomp_request_free(req);
+   goto out;
+   }
+
+   acomp_request_free(req);
+   }
+
+   for (i = 0; i < dtcount; i++) {
+   unsigned int dlen = COMP_BUF_SIZE;
+   int ilen = dtemplate[i].inlen;
+
+   memset(output, 0, sizeof(output));
+   init_completion(&result.completion);
+   sg_init_one(&src, dtemplate[i].input, ilen);
+   sg_init_one(&dst, output, dlen);
+
+   req = acomp_request_alloc(tfm);
+   if (!req) {
+   pr_err("alg: acomp: request alloc failed for %s\n",
+  algo);
+   ret = -ENOMEM;
+   goto out;
+   }
+
+   acomp_request_set_params(req, &src, &dst, ilen, dlen);
+   acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+  tcrypt_complete, &result);
+
+   ret = wait_async_op(&result, crypto_acomp_decompress(req));
+   if (ret) {
+   pr_err("alg: acomp: decompression failed on test %d for %s: ret=%d\n",
+  i + 1, algo, -ret);
+   acomp_request_free(req);
+   goto out;
+   }
+
+   if (req->dlen != dtemplate[i].outlen) {
+   pr_err("alg: acomp: Decompression test %d failed for %s: output len = %d\n",
+  i + 1, algo, req->dlen);
+   ret = -EINVAL;
+   acomp_request_free(req);
+   goto out;
+   }
+
+   if (memcmp(output, dtemplate[i].output, req->dlen)) {
+   pr_err("alg: acomp: Decompression test %d failed for %s\n",
+  i + 1, algo);
+   hexdump(output, req->dlen);
+   ret = -EINVAL;
+   acomp_request_free(req);
+   goto out;
+   }
+
+   acomp_request_free(req);
+   }
+
+   ret = 0;
+
+out:
+   return ret;

[PATCH v7 6/9] crypto: acomp - add support for lz4hc via scomp

2016-09-13 Thread Giovanni Cabiddu
Add scomp backend for lz4hc compression algorithm

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Kconfig |1 +
 crypto/lz4hc.c |  147 
 2 files changed, 138 insertions(+), 10 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index e95cbbd..4258e85 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1615,6 +1615,7 @@ config CRYPTO_LZ4
 config CRYPTO_LZ4HC
tristate "LZ4HC compression algorithm"
select CRYPTO_ALGAPI
+   select CRYPTO_ACOMP2
select LZ4HC_COMPRESS
select LZ4_DECOMPRESS
help
diff --git a/crypto/lz4hc.c b/crypto/lz4hc.c
index a1d3b5b..d509901 100644
--- a/crypto/lz4hc.c
+++ b/crypto/lz4hc.c
@@ -22,37 +22,59 @@
 #include 
 #include 
 #include 
+#include 
+#include 
+
+#define LZ4HC_SCRATCH_SIZE 131072
 
 struct lz4hc_ctx {
void *lz4hc_comp_mem;
 };
 
+static void * __percpu *lz4hc_src_scratches;
+static void * __percpu *lz4hc_dst_scratches;
+
+static void *lz4hc_alloc_ctx(struct crypto_scomp *tfm)
+{
+   void *ctx;
+
+   ctx = vmalloc(LZ4HC_MEM_COMPRESS);
+   if (!ctx)
+   return ERR_PTR(-ENOMEM);
+
+   return ctx;
+}
+
 static int lz4hc_init(struct crypto_tfm *tfm)
 {
struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
 
-   ctx->lz4hc_comp_mem = vmalloc(LZ4HC_MEM_COMPRESS);
-   if (!ctx->lz4hc_comp_mem)
+   ctx->lz4hc_comp_mem = lz4hc_alloc_ctx(NULL);
+   if (IS_ERR(ctx->lz4hc_comp_mem))
return -ENOMEM;
 
return 0;
 }
 
+static void lz4hc_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+   vfree(ctx);
+}
+
 static void lz4hc_exit(struct crypto_tfm *tfm)
 {
struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
 
-   vfree(ctx->lz4hc_comp_mem);
+   lz4hc_free_ctx(NULL, ctx->lz4hc_comp_mem);
 }
 
-static int lz4hc_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
-   unsigned int slen, u8 *dst, unsigned int *dlen)
+static int __lz4hc_compress_crypto(const u8 *src, unsigned int slen,
+  u8 *dst, unsigned int *dlen, void *ctx)
 {
-   struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
size_t tmp_len = *dlen;
int err;
 
-   err = lz4hc_compress(src, slen, dst, &tmp_len, ctx->lz4hc_comp_mem);
+   err = lz4hc_compress(src, slen, dst, &tmp_len, ctx);
 
if (err < 0)
return -EINVAL;
@@ -61,8 +83,18 @@ static int lz4hc_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
return 0;
 }
 
-static int lz4hc_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int lz4hc_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
+unsigned int slen, u8 *dst,
+unsigned int *dlen)
+{
+   struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   return __lz4hc_compress_crypto(src, slen, dst, dlen,
+   ctx->lz4hc_comp_mem);
+}
+
+static int __lz4hc_decompress_crypto(const u8 *src, unsigned int slen,
+u8 *dst, unsigned int *dlen, void *ctx)
 {
int err;
size_t tmp_len = *dlen;
@@ -76,6 +108,59 @@ static int lz4hc_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
return err;
 }
 
+static int lz4hc_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
+  unsigned int slen, u8 *dst,
+  unsigned int *dlen)
+{
+   return __lz4hc_decompress_crypto(src, slen, dst, dlen, NULL);
+}
+
+static int lz4hc_scomp_comp_decomp(struct crypto_scomp *tfm,
+  struct scatterlist *src, unsigned int slen,
+  struct scatterlist *dst, unsigned int *dlen,
+  void *ctx, int dir)
+{
+   const int cpu = get_cpu();
+   u8 *scratch_src = *per_cpu_ptr(lz4hc_src_scratches, cpu);
+   u8 *scratch_dst = *per_cpu_ptr(lz4hc_dst_scratches, cpu);
+   int ret;
+
+   if (slen > LZ4HC_SCRATCH_SIZE || *dlen > LZ4HC_SCRATCH_SIZE) {
+   ret = -EINVAL;
+   goto out;
+   }
+
+   scatterwalk_map_and_copy(scratch_src, src, 0, slen, 0);
+   if (dir)
+   ret = __lz4hc_compress_crypto(scratch_src, slen, scratch_dst,
+ dlen, ctx);
+   else
+   ret = __lz4hc_decompress_crypto(scratch_src, slen, scratch_dst,
+   dlen, NULL);
+   if (!ret)
+   scatterwalk_map_and_copy(scratch_dst, dst, 0, *dlen, 1);
+
+out:
+   put_cpu();
+   return ret;
+}
+
+static int lz4hc_scomp_compress(struct crypto_scomp *tfm,
+   struct scatterlist *src, unsigned int slen,
+   struct scatterlist *dst, unsigned int *dlen,
+   

[PATCH v7 3/9] crypto: scomp - add scratch buffers allocator and deallocator

2016-09-13 Thread Giovanni Cabiddu
Add utility functions to allocate and deallocate scratch buffers used by
software implementations of scomp

Signed-off-by: Giovanni Cabiddu 
---
 crypto/scompress.c  |   41 +++
 include/crypto/internal/scompress.h |2 +
 2 files changed, 43 insertions(+), 0 deletions(-)

diff --git a/crypto/scompress.c b/crypto/scompress.c
index 9f426cc..385e1da 100644
--- a/crypto/scompress.c
+++ b/crypto/scompress.c
@@ -18,6 +18,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -28,6 +29,46 @@
 
 static const struct crypto_type crypto_scomp_type;
 
+void crypto_scomp_free_scratches(void * __percpu *scratches)
+{
+   int i;
+
+   if (!scratches)
+   return;
+
+   for_each_possible_cpu(i)
+   vfree(*per_cpu_ptr(scratches, i));
+
+   free_percpu(scratches);
+}
+EXPORT_SYMBOL_GPL(crypto_scomp_free_scratches);
+
+void * __percpu *crypto_scomp_alloc_scratches(unsigned long size)
+{
+   void * __percpu *scratches;
+   int i;
+
+   scratches = alloc_percpu(void *);
+   if (!scratches)
+   return NULL;
+
+   for_each_possible_cpu(i) {
+   void *scratch;
+
+   scratch = vmalloc_node(size, cpu_to_node(i));
+   if (!scratch)
+   goto error;
+   *per_cpu_ptr(scratches, i) = scratch;
+   }
+
+   return scratches;
+
+error:
+   crypto_scomp_free_scratches(scratches);
+   return NULL;
+}
+EXPORT_SYMBOL_GPL(crypto_scomp_alloc_scratches);
+
 #ifdef CONFIG_NET
 static int crypto_scomp_report(struct sk_buff *skb, struct crypto_alg *alg)
 {
diff --git a/include/crypto/internal/scompress.h b/include/crypto/internal/scompress.h
index 8708611..a3547c1 100644
--- a/include/crypto/internal/scompress.h
+++ b/include/crypto/internal/scompress.h
@@ -109,6 +109,8 @@ static inline int crypto_scomp_decompress(struct crypto_scomp *tfm,
 int crypto_init_scomp_ops_async(struct crypto_tfm *tfm);
 struct acomp_req *crypto_acomp_scomp_alloc_ctx(struct acomp_req *req);
 void crypto_acomp_scomp_free_ctx(struct acomp_req *req);
+void crypto_scomp_free_scratches(void * __percpu *scratches);
+void * __percpu *crypto_scomp_alloc_scratches(unsigned long size);
 
 /**
  * crypto_register_scomp() -- Register synchronous compression algorithm
-- 
1.7.4.1

--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 1/2] crypto: arm/aes-ctr: fix NULL dereference in tail processing

2016-09-13 Thread Herbert Xu
On Tue, Sep 13, 2016 at 09:48:52AM +0100, Ard Biesheuvel wrote:
> The AES-CTR glue code avoids calling into the blkcipher API for the
> tail portion of the walk, by comparing the remainder of walk.nbytes
> modulo AES_BLOCK_SIZE with the residual nbytes, and jumping straight
> into the tail processing block if they are equal. This tail processing
> block checks whether nbytes != 0, and does nothing otherwise.
> 
> However, in case of an allocation failure in the blkcipher layer, we
> may enter this code with walk.nbytes == 0, while nbytes > 0. In this
> case, we should not dereference the source and destination pointers,
> since they may be NULL. So instead of checking for nbytes != 0, check
> for (walk.nbytes % AES_BLOCK_SIZE) != 0, which implies the former in
> non-error conditions.
> 
> Fixes: 86464859cc77 ("crypto: arm - AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions")
> Reported-by: xiakaixu 
> Signed-off-by: Ard Biesheuvel 
> ---

Both patches applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] crypto: squash lines for simple wrapper functions

2016-09-13 Thread Herbert Xu
On Tue, Sep 13, 2016 at 04:27:54AM +0900, Masahiro Yamada wrote:
> Remove unneeded variables and assignments.
> 
> Signed-off-by: Masahiro Yamada 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] hwrng: geode-rng - Use linux/io.h instead of asm/io.h

2016-09-13 Thread Herbert Xu
On Sun, Sep 11, 2016 at 08:54:26PM +0530, PrasannaKumar Muralidharan wrote:
> Fix checkpatch.pl warning by changing from asm/io.h to linux/io.h. In
> the meantime, arrange the includes in alphabetical order.
> 
> Signed-off-by: PrasannaKumar Muralidharan 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] hwrng: geode-rng - Migrate to managed API

2016-09-13 Thread Herbert Xu
On Sun, Sep 11, 2016 at 08:53:21PM +0530, PrasannaKumar Muralidharan wrote:
> Use devm_ioremap and devm_hwrng_register instead of ioremap and
> hwrng_register. This removes error handling code. Also moved code around
> by removing goto statements. This improves code readability.
> 
> Signed-off-by: PrasannaKumar Muralidharan 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH -next] hwrng: st - Fix missing clk_disable_unprepare() on error in st_rng_probe()

2016-09-13 Thread Herbert Xu
On Sat, Sep 10, 2016 at 12:03:42PM +, Wei Yongjun wrote:
> From: Wei Yongjun 
> 
> Fix the missing clk_disable_unprepare() before return
> from st_rng_probe() in the error handling case.
> 
> Signed-off-by: Wei Yongjun 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v2 2/2] crypto: qat - fix resource release omissions

2016-09-13 Thread Herbert Xu
On Tue, Sep 06, 2016 at 11:18:51AM +0100, Giovanni Cabiddu wrote:
> 
> ---8<---
> Subject: [PATCH] crypto: qat - fix leak on error path
> 
> Fix a memory leak in an error path in uc loader.
> 
> Signed-off-by: Giovanni Cabiddu 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] hwrng: amd-rng - Migrate to managed API

2016-09-13 Thread Herbert Xu
On Fri, Sep 09, 2016 at 01:28:23PM +0530, PrasannaKumar Muralidharan wrote:
> Managed API eliminates error handling code, thus reducing several lines
> of code.
> 
> Signed-off-by: PrasannaKumar Muralidharan 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v2] hwrng: core - Allocate memory during module init

2016-09-13 Thread Herbert Xu
On Wed, Sep 07, 2016 at 08:18:02PM +0530, PrasannaKumar Muralidharan wrote:
> In the core, rng_buffer and rng_fillbuf are allocated in hwrng_register only
> once and freed during module exit. This patch moves allocating
> rng_buffer and rng_fillbuf from hwrng_register to rng core's init. This
> avoids checking whether rng_buffer and rng_fillbuf was allocated from
> every hwrng_register call. Also moving them to module init makes it
> explicit that it is freed in module exit.
> 
> Change in v2:
> Fix memory leak when register_miscdev fails.
> 
> Signed-off-by: PrasannaKumar Muralidharan 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] hwrng: pasemi-rng - Use linux/io.h instead of asm/io.h

2016-09-13 Thread Herbert Xu
On Tue, Sep 06, 2016 at 01:58:39PM +0530, PrasannaKumar Muralidharan wrote:
> Checkpatch.pl warns about usage of asm/io.h. Use linux/io.h instead.
> 
> Signed-off-by: PrasannaKumar Muralidharan 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCHv3 00/11] crypto: omap HW crypto fixes

2016-09-13 Thread Herbert Xu
On Thu, Aug 04, 2016 at 01:28:35PM +0300, Tero Kristo wrote:
> Hi,
> 
> This revision took quite a bit of time to craft due to the rework needed
> for sham buffer handling and export/import. I ended up implementing
> a flush functionality for draining out the sham buffer when doing
> export/import; just shrinking the buffer to sufficiently small size
> impacted the performance with small data chunks too much so I dropped
> this approach.
> 
> The series also fixes a couple of existing issues with omap2/omap3
> hardware acceleration, I ran a full boot test / crypto manager
> test suite on all boards accessible to me now.
> 
> Based on top of latest mainline, which is somewhere before 4.8-rc1
> as of writing this, I am unable to rebase the series during the next
> three weeks so wanted to get this out now. Targeted for 4.9 merge
> window, some fixes could be picked up earlier though if needed.

I have applied patches 1,4-5,7-11.  Some of them didn't apply
cleanly so please check the result in my tree.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: hwrng: pasemi-rng - Use linux/io.h instead of asm/io.h

2016-09-13 Thread Michael Ellerman
On Tue, 2016-06-09 at 08:28:39 UTC, PrasannaKumar Muralidharan wrote:
> Checkpatch.pl warns about usage of asm/io.h. Use linux/io.h instead.
> 
> Signed-off-by: PrasannaKumar Muralidharan 

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/63019f3cab99c7acd27df5a5b8

cheers


Re: algif_aead: AIO broken with more than one iocb

2016-09-13 Thread Stephan Mueller
Am Dienstag, 13. September 2016, 18:12:46 CEST schrieb Herbert Xu:

Hi Herbert,

> I don't think we should allow that.  We should make it so that you
> must start a recvmsg before you can send data for a new request.
> 
> Remember that the async path should be identical to the sync path,
> except that you don't wait for completion.

The question is, how does the algif code know when more than one iocb was
submitted? Note that each iocb is translated into an independent call of
recvmsg_async.

Ciao
Stephan


Re: [PATCH v3] crypto: only call put_page on referenced and used pages

2016-09-13 Thread Stephan Mueller
Am Dienstag, 13. September 2016, 18:08:16 CEST schrieb Herbert Xu:

Hi Herbert,

> This patch appears to be papering over a real bug.
> 
> The async path should be exactly the same as the sync path, except
> that we don't wait for completion.  So the question is why are we
> getting this crash here for async but not sync?

At least one reason is found in skcipher_recvmsg_async with the following code 
path:

 if (txbufs == tx_nents) {
struct scatterlist *tmp;
int x;
/* Ran out of tx slots in async request
 * need to expand */
tmp = kcalloc(tx_nents * 2, sizeof(*tmp),
  GFP_KERNEL);
if (!tmp)
goto free;

sg_init_table(tmp, tx_nents * 2);
for (x = 0; x < tx_nents; x++)
sg_set_page(&tmp[x], sg_page(&sreq->tsg[x]),
sreq->tsg[x].length,
sreq->tsg[x].offset);
kfree(sreq->tsg);
sreq->tsg = tmp;
tx_nents *= 2;
mark = true;
}


==> the code allocates twice the previously existing amount of memory and
copies the existing SGs over, but does not initialize the remaining SGs.
If the caller provides fewer pages than the number of allocated SGs, some
SGs stay unset. Hence, the deallocation must not touch these
still-uninitialized SGs.

Ciao
Stephan


Crypto Fixes for 4.8

2016-09-13 Thread Herbert Xu
Hi Linus:

This push fixes a bug in the cryptd code that may lead to crashes.


Please pull from

git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6.git linus


Ard Biesheuvel (1):
  crypto: cryptd - initialize child shash_desc on import

 crypto/cryptd.c |9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: algif_aead: AIO broken with more than one iocb

2016-09-13 Thread Herbert Xu
On Sun, Sep 11, 2016 at 04:59:19AM +0200, Stephan Mueller wrote:
> Hi Herbert,
> 
> The AIO support for algif_aead is broken when submitting more than one iocb. 
> The break happens in aead_recvmsg_async at the following code:
> 
> /* ensure output buffer is sufficiently large */
> if (usedpages < outlen)
> goto free;
> 
> The reason is that when submitting, say, two iocb, ctx->used contains the 
> buffer length for two AEAD operations (as expected). However, the recvmsg 
> code 

I don't think we should allow that.  We should make it so that you
must start a recvmsg before you can send data for a new request.

Remember that the async path should be identical to the sync path,
except that you don't wait for completion.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v3] crypto: only call put_page on referenced and used pages

2016-09-13 Thread Herbert Xu
On Tue, Sep 13, 2016 at 10:18:54AM +0200, Stephan Mueller wrote:
> Am Montag, 12. September 2016, 14:43:45 CEST schrieb Stephan Mueller:
> 
> Hi Herbert,
> 
> > Hi Herbert,
> > 
> > after getting the AIO code working on sendmsg, tried it with vmsplice/splice
> > and I get a memory corruption. Interestingly, the stack trace is partially
> > garbled too. Thus, tracking this one down may be a bit of a challenge.
> 
> The issue is a NULL pointer dereference in skcipher_free_async_sgls:
> SGs may not even have a page mapped to them, and thus the page
> entry is NULL.
> 
> The following patch fixes the issue and replaces the patch I sent earlier.

This patch appears to be papering over a real bug.

The async path should be exactly the same as the sync path, except
that we don't wait for completion.  So the question is why are we
getting this crash here for async but not sync?

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [RFC PATCH v1 01/28] kvm: svm: Add support for additional SVM NPF error codes

2016-09-13 Thread Borislav Petkov
On Mon, Aug 22, 2016 at 07:23:44PM -0400, Brijesh Singh wrote:
> From: Tom Lendacky 
> 
> AMD hardware adds two additional bits to aid in nested page fault handling.
> 
> Bit 32 - NPF occurred while translating the guest's final physical address
> Bit 33 - NPF occurred while translating the guest page tables
> 
> The guest page tables fault indicator can be used as an aid for nested
> virtualization. Using V0 for the host, V1 for the first level guest and
> V2 for the second level guest, when both V1 and V2 are using nested paging
> there are currently a number of unnecessary instruction emulations. When
> V2 is launched shadow paging is used in V1 for the nested tables of V2. As
> a result, KVM marks these pages as RO in the host nested page tables. When
> V2 exits and we resume V1, these pages are still marked RO.
> 
> Every nested walk for a guest page table is treated as a user-level write
> access and this causes a lot of NPFs because the V1 page tables are marked
> RO in the V0 nested tables. While executing V1, when these NPFs occur KVM
> sees a write to a read-only page, emulates the V1 instruction and unprotects
> the page (marking it RW). This patch looks for cases where we get a NPF due
> to a guest page table walk where the page was marked RO. It immediately
> unprotects the page and resumes the guest, leading to far fewer instruction
> emulations when nested virtualization is used.
> 
> Signed-off-by: Tom Lendacky 
> ---
>  arch/x86/include/asm/kvm_host.h |   11 ++-
>  arch/x86/kvm/mmu.c  |   20 ++--
>  arch/x86/kvm/svm.c  |2 +-
>  3 files changed, 29 insertions(+), 4 deletions(-)

FWIW: Reviewed-by: Borislav Petkov 

-- 
Regards/Gruss,
Boris.

ECO tip #101: Trim your mails when you reply.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)


Re: [PATCH v2 3/8] hwrng: omap - Switch to non-obsolete read API implementation

2016-09-13 Thread Herbert Xu
On Wed, Sep 07, 2016 at 05:57:38PM +0200, Romain Perier wrote:
> +
> +static int omap_rng_do_read(struct hwrng *rng, void *data, size_t max,
> + bool wait)
>  {
>   struct omap_rng_dev *priv;
> - int data, i;
>  
>   priv = (struct omap_rng_dev *)rng->priv;
>  
> - for (i = 0; i < 20; i++) {
> - data = priv->pdata->data_present(priv);
> - if (data || !wait)
> - break;
> - /* RNG produces data fast enough (2+ MBit/sec, even
> -  * during "rngtest" loads, that these delays don't
> -  * seem to trigger.  We *could* use the RNG IRQ, but
> -  * that'd be higher overhead ... so why bother?
> -  */
> - udelay(10);

So in the wait case you're changing the driver's behaviour.  Instead
of waiting for 1us you'll now wait for 1s if there is no data.  Is
this really what you want?

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCHv3 06/11] crypto: omap-des: Fix support for unequal lengths

2016-09-13 Thread Herbert Xu
On Thu, Aug 04, 2016 at 01:28:41PM +0300, Tero Kristo wrote:
> From: Lokesh Vutla 
> 
> For cases where total length of an input SGs is not same as
> length of the input data for encryption, omap-des driver
> crashes. This happens in the case when IPsec is trying to use
> omap-des driver.
> 
> To avoid this, we copy all the pages from the input SG list
> into a contiguous buffer and prepare a single element SG list
> for this buffer with length as the total bytes to crypt, which is
> similar to what is done in the case of unaligned lengths.

Ugh, that means copying every single packet, right?

So if it's just the SG list that's the problem, why don't you
copy that instead? That is, allocate a new SG list and set it
up so that there is no excess data.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[PATCH 2/2] crypto: arm64/aes-ctr: fix NULL dereference in tail processing

2016-09-13 Thread Ard Biesheuvel
The AES-CTR glue code avoids calling into the blkcipher API for the
tail portion of the walk, by comparing the remainder of walk.nbytes
modulo AES_BLOCK_SIZE with the residual nbytes, and jumping straight
into the tail processing block if they are equal. This tail processing
block checks whether nbytes != 0, and does nothing otherwise.

However, in case of an allocation failure in the blkcipher layer, we
may enter this code with walk.nbytes == 0, while nbytes > 0. In this
case, we should not dereference the source and destination pointers,
since they may be NULL. So instead of checking for nbytes != 0, check
for (walk.nbytes % AES_BLOCK_SIZE) != 0, which implies the former in
non-error conditions.

Fixes: 49788fe2a128 ("arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions")
Reported-by: xiakaixu 
Signed-off-by: Ard Biesheuvel 
---
 arch/arm64/crypto/aes-glue.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index 5c888049d061..6b2aa0fd6cd0 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64/crypto/aes-glue.c
@@ -216,7 +216,7 @@ static int ctr_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
err = blkcipher_walk_done(desc, &walk,
  walk.nbytes % AES_BLOCK_SIZE);
}
-   if (nbytes) {
+   if (walk.nbytes % AES_BLOCK_SIZE) {
u8 *tdst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
u8 *tsrc = walk.src.virt.addr + blocks * AES_BLOCK_SIZE;
u8 __aligned(8) tail[AES_BLOCK_SIZE];
-- 
2.7.4



[PATCH 1/2] crypto: arm/aes-ctr: fix NULL dereference in tail processing

2016-09-13 Thread Ard Biesheuvel
The AES-CTR glue code avoids calling into the blkcipher API for the
tail portion of the walk, by comparing the remainder of walk.nbytes
modulo AES_BLOCK_SIZE with the residual nbytes, and jumping straight
into the tail processing block if they are equal. This tail processing
block checks whether nbytes != 0, and does nothing otherwise.

However, in case of an allocation failure in the blkcipher layer, we
may enter this code with walk.nbytes == 0, while nbytes > 0. In this
case, we should not dereference the source and destination pointers,
since they may be NULL. So instead of checking for nbytes != 0, check
for (walk.nbytes % AES_BLOCK_SIZE) != 0, which implies the former in
non-error conditions.

Fixes: 86464859cc77 ("crypto: arm - AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions")
Reported-by: xiakaixu 
Signed-off-by: Ard Biesheuvel 
---
 arch/arm/crypto/aes-ce-glue.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/crypto/aes-ce-glue.c b/arch/arm/crypto/aes-ce-glue.c
index da3c0428507b..aef022a87c53 100644
--- a/arch/arm/crypto/aes-ce-glue.c
+++ b/arch/arm/crypto/aes-ce-glue.c
@@ -284,7 +284,7 @@ static int ctr_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
err = blkcipher_walk_done(desc, &walk,
  walk.nbytes % AES_BLOCK_SIZE);
}
-   if (nbytes) {
+   if (walk.nbytes % AES_BLOCK_SIZE) {
u8 *tdst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
u8 *tsrc = walk.src.virt.addr + blocks * AES_BLOCK_SIZE;
u8 __aligned(8) tail[AES_BLOCK_SIZE];
-- 
2.7.4



[PATCH v3] crypto: only call put_page on referenced and used pages

2016-09-13 Thread Stephan Mueller
On Monday, 12 September 2016, 14:43:45 CEST, Stephan Mueller wrote:

Hi Herbert,

> Hi Herbert,
> 
> after getting the AIO code working on sendmsg, tried it with vmsplice/splice
> and I get a memory corruption. Interestingly, the stack trace is partially
> garbled too. Thus, tracking this one down may be a bit of a challenge.

The issue is a NULL pointer dereference in skcipher_free_async_sgls: SGs may 
not even have a page mapped to them, and thus the page entry is NULL.

The following patch fixes the issue and replaces the patch I sent earlier.

---8<---

For asynchronous operation, SGs are allocated without a page mapped to
them or with a page that is not used (ref-counted). If the SGL is freed,
the code must only call put_page for an SG if there was a page assigned
and ref-counted in the first place.

This fixes a kernel crash when using io_submit with more than one iocb
via the sendmsg and sendpage (vmsplice/splice) interfaces.

Signed-off-by: Stephan Mueller 
---
 crypto/algif_skcipher.c | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 28556fc..45af0fe 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -86,8 +86,13 @@ static void skcipher_free_async_sgls(struct skcipher_async_req *sreq)
}
sgl = sreq->tsg;
n = sg_nents(sgl);
-   for_each_sg(sgl, sg, n, i)
-   put_page(sg_page(sg));
+   for_each_sg(sgl, sg, n, i) {
+   struct page *page = sg_page(sg);
+
+   /* some SGs may not have a page mapped */
+   if (page && page_ref_count(page))
+   put_page(page);
+   }
 
kfree(sreq->tsg);
 }
-- 
2.7.4




Re: Kernel panic - encryption/decryption failed when open file on Arm64

2016-09-13 Thread Ard Biesheuvel
On 13 September 2016 at 07:43, Herbert Xu  wrote:
> On Mon, Sep 12, 2016 at 06:40:15PM +0100, Ard Biesheuvel wrote:
>>
>> So to me, it seems like we should be taking the blkcipher_next_slow()
>> path, which does a kmalloc() and bails with -ENOMEM if that fails.
>
> Indeed.  This was broken a long time ago.  It does seem to be
> fixed in the new skcipher_walk code but here is a patch to fix
> it for older kernels.
>
> ---8<---
> Subject: crypto: skcipher - Fix blkcipher walk OOM crash
>
> When we need to allocate a temporary blkcipher_walk_next and it
> fails, the code is supposed to take the slow path of processing
> the data block by block.  However, due to an unrelated change
> we instead end up dereferencing the NULL pointer.
>
> This patch fixes it by moving the unrelated bsize setting out
> of the way so that we enter the slow path as inteded.
>
inteNded ^^^

> Fixes: 7607bd8ff03b ("[CRYPTO] blkcipher: Added blkcipher_walk_virt_block")
> Cc: sta...@vger.kernel.org
> Reported-by: xiakaixu 
> Reported-by: Ard Biesheuvel 
> Signed-off-by: Herbert Xu 
>

This fixes the issue for me

Tested-by: Ard Biesheuvel 

I will follow up with fixes for the ARM and arm64 CTR code shortly.

Thanks,
Ard.

> diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c
> index 365..a832426 100644
> --- a/crypto/blkcipher.c
> +++ b/crypto/blkcipher.c
> @@ -233,6 +233,8 @@ static int blkcipher_walk_next(struct blkcipher_desc *desc,
> return blkcipher_walk_done(desc, walk, -EINVAL);
> }
>
> +   bsize = min(walk->walk_blocksize, n);
> +
> walk->flags &= ~(BLKCIPHER_WALK_SLOW | BLKCIPHER_WALK_COPY |
>  BLKCIPHER_WALK_DIFF);
> if (!scatterwalk_aligned(&walk->in, walk->alignmask) ||
> @@ -245,7 +247,6 @@ static int blkcipher_walk_next(struct blkcipher_desc *desc,
> }
> }
>
> -   bsize = min(walk->walk_blocksize, n);
> n = scatterwalk_clamp(&walk->in, n);
> n = scatterwalk_clamp(&walk->out, n);
>
> --
> Email: Herbert Xu 
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt