Re: [RFC PATCH] KEYS: Use individual pages in big_key for crypto buffers [ver #2]

2018-02-15 Thread Eric Biggers
Hi David,

On Thu, Feb 15, 2018 at 10:53:49PM +0000, David Howells wrote:
>  /*
> + * Free up the buffer.
> + */
> +static void big_key_free_buffer(struct big_key_buf *buf)
> +{
> + unsigned int i;
> +
> + vunmap(buf->virt);
> + for (i = 0; i < buf->nr_pages; i++)
> + if (buf->pages[i])
> + __free_page(buf->pages[i]);
> +
> + memset(buf->virt, 0, buf->nr_pages * PAGE_SIZE);
> + kfree(buf);
> +}

memset() after vunmap(), and also when buf->virt can be NULL?  I had suggested:

if (buf->virt) {
memset(buf->virt, 0, buf->nr_pages * PAGE_SIZE);
vunmap(buf->virt);
}

- Eric


Re: [PATCH v3 2/3] MIPS: crypto: Add crc32 and crc32c hw accelerated module

2018-02-15 Thread James Hogan
On Thu, Feb 15, 2018 at 02:22:14PM -0800, Eric Biggers wrote:
> On Fri, Feb 09, 2018 at 10:11:06PM +0000, James Hogan wrote:
> > +static struct shash_alg crc32_alg = {
> > +   .digestsize =   CHKSUM_DIGEST_SIZE,
> > +   .setkey =   chksum_setkey,
> > +   .init   =   chksum_init,
> > +   .update =   chksum_update,
> > +   .final  =   chksum_final,
> > +   .finup  =   chksum_finup,
> > +   .digest =   chksum_digest,
> > +   .descsize   =   sizeof(struct chksum_desc_ctx),
> > +   .base   =   {
> > +   .cra_name   =   "crc32",
> > +   .cra_driver_name=   "crc32-mips-hw",
> > +   .cra_priority   =   300,
> > +   .cra_blocksize  =   CHKSUM_BLOCK_SIZE,
> > +   .cra_alignmask  =   0,
> > +   .cra_ctxsize=   sizeof(struct chksum_ctx),
> > +   .cra_module =   THIS_MODULE,
> > +   .cra_init   =   chksum_cra_init,
> > +   }
> > +};
> > +
> > +
> > +static struct shash_alg crc32c_alg = {
> > +   .digestsize =   CHKSUM_DIGEST_SIZE,
> > +   .setkey =   chksum_setkey,
> > +   .init   =   chksum_init,
> > +   .update =   chksumc_update,
> > +   .final  =   chksumc_final,
> > +   .finup  =   chksumc_finup,
> > +   .digest =   chksumc_digest,
> > +   .descsize   =   sizeof(struct chksum_desc_ctx),
> > +   .base   =   {
> > +   .cra_name   =   "crc32c",
> > +   .cra_driver_name=   "crc32c-mips-hw",
> > +   .cra_priority   =   300,
> > +   .cra_blocksize  =   CHKSUM_BLOCK_SIZE,
> > +   .cra_alignmask  =   0,
> > +   .cra_ctxsize=   sizeof(struct chksum_ctx),
> > +   .cra_module =   THIS_MODULE,
> > +   .cra_init   =   chksum_cra_init,
> > +   }
> > +};
>
> Does this actually work on the latest kernel?  Now hash algorithms must have
> CRYPTO_ALG_OPTIONAL_KEY in .cra_flags if they have a .setkey method but don't
> require it to be called, otherwise the crypto API will think it's a keyed hash
> and not allow it to be used without a key.  I had to add this flag to the other
> CRC32 and CRC32C algorithms (commit a208fa8f33031).  One of the CRC32C test
> vectors even doesn't set a key so it should be causing the self-tests to fail
> for "crc32c-mips-hw".  (We should add such a test vector for CRC32 too, though.)

Thanks Eric. It does indeed fail now with:

alg: hash: digest failed on test 1 for crc32c-mips-hw: ret=161

I'll squash in the following change:

diff --git a/arch/mips/crypto/crc32-mips.c b/arch/mips/crypto/crc32-mips.c
index 8d4122f37fa5..7d1d2425746f 100644
--- a/arch/mips/crypto/crc32-mips.c
+++ b/arch/mips/crypto/crc32-mips.c
@@ -284,6 +284,7 @@ static struct shash_alg crc32_alg = {
.cra_name   =   "crc32",
.cra_driver_name=   "crc32-mips-hw",
.cra_priority   =   300,
+   .cra_flags  =   CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize  =   CHKSUM_BLOCK_SIZE,
.cra_alignmask  =   0,
.cra_ctxsize=   sizeof(struct chksum_ctx),
@@ -305,6 +306,7 @@ static struct shash_alg crc32c_alg = {
.cra_name   =   "crc32c",
.cra_driver_name=   "crc32c-mips-hw",
.cra_priority   =   300,
+   .cra_flags  =   CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize  =   CHKSUM_BLOCK_SIZE,
.cra_alignmask  =   0,
.cra_ctxsize=   sizeof(struct chksum_ctx),

Cheers
James




[RFC PATCH] KEYS: Use individual pages in big_key for crypto buffers [ver #2]

2018-02-15 Thread David Howells
kmalloc() can't always allocate large enough buffers for big_key to use for
crypto (1MB + some metadata) so we cannot use that to allocate the buffer.
Further, vmalloc'd pages can't be passed to sg_init_one() and the aead
crypto accessors cannot be called progressively and must be passed all the
data in one go (which means we can't pass the data in one block at a time).

Fix this by allocating the buffer pages individually and passing them
through a multientry scatterlist to the crypto layer.  This has the bonus
advantage that we don't have to allocate a contiguous series of pages.

We then vmap() the page list and pass that through to the VFS read/write
routines.

This can trigger a warning:

WARNING: CPU: 0 PID: 60912 at mm/page_alloc.c:3883 __alloc_pages_nodemask+0xb7c/0x15f8
([<002acbb6>] __alloc_pages_nodemask+0x1ee/0x15f8)
 [<002dd356>] kmalloc_order+0x46/0x90
 [<002dd3e0>] kmalloc_order_trace+0x40/0x1f8
 [<00326a10>] __kmalloc+0x430/0x4c0
 [<004343e4>] big_key_preparse+0x7c/0x210
 [<0042c040>] key_create_or_update+0x128/0x420
 [<0042e52c>] SyS_add_key+0x124/0x220
 [<007bba2c>] system_call+0xc4/0x2b0

from the keyctl/padd/useradd test of the keyutils testsuite on s390x.

Note that it might be better to shovel data through in page-sized lumps
instead as there's no particular need to use a monolithic buffer unless the
kernel itself wants to access the data.

Fixes: 13100a72f40f ("Security: Keys: Big keys stored encrypted")
Reported-by: Paul Bunyan 
Signed-off-by: David Howells 
cc: Kirill Marinushkin 
---

 security/keys/big_key.c |  107 +--
 1 file changed, 84 insertions(+), 23 deletions(-)

diff --git a/security/keys/big_key.c b/security/keys/big_key.c
index 929e14978c42..4deb7df56ec8 100644
--- a/security/keys/big_key.c
+++ b/security/keys/big_key.c
@@ -22,6 +22,13 @@
 #include 
 #include 
 
+struct big_key_buf {
+   unsigned intnr_pages;
+   void*virt;
+   struct scatterlist  *sg;
+   struct page *pages[];
+};
+
 /*
  * Layout of key payload words.
  */
@@ -91,10 +98,9 @@ static DEFINE_MUTEX(big_key_aead_lock);
 /*
  * Encrypt/decrypt big_key data
  */
-static int big_key_crypt(enum big_key_op op, u8 *data, size_t datalen, u8 *key)
+static int big_key_crypt(enum big_key_op op, struct big_key_buf *buf, size_t datalen, u8 *key)
 {
int ret;
-   struct scatterlist sgio;
struct aead_request *aead_req;
/* We always use a zero nonce. The reason we can get away with this is
 * because we're using a different randomly generated key for every
@@ -109,8 +115,7 @@ static int big_key_crypt(enum big_key_op op, u8 *data, size_t datalen, u8 *key)
return -ENOMEM;
 
 	memset(zero_nonce, 0, sizeof(zero_nonce));
-	sg_init_one(&sgio, data, datalen + (op == BIG_KEY_ENC ? ENC_AUTHTAG_SIZE : 0));
-	aead_request_set_crypt(aead_req, &sgio, &sgio, datalen, zero_nonce);
+	aead_request_set_crypt(aead_req, buf->sg, buf->sg, datalen, zero_nonce);
 	aead_request_set_callback(aead_req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
 	aead_request_set_ad(aead_req, 0);
 
@@ -130,21 +135,78 @@ static int big_key_crypt(enum big_key_op op, u8 *data, size_t datalen, u8 *key)
 }
 
 /*
+ * Free up the buffer.
+ */
+static void big_key_free_buffer(struct big_key_buf *buf)
+{
+   unsigned int i;
+
+   vunmap(buf->virt);
+   for (i = 0; i < buf->nr_pages; i++)
+   if (buf->pages[i])
+   __free_page(buf->pages[i]);
+
+   memset(buf->virt, 0, buf->nr_pages * PAGE_SIZE);
+   kfree(buf);
+}
+
+/*
+ * Allocate a buffer consisting of a set of pages with a virtual mapping
+ * applied over them.
+ */
+static void *big_key_alloc_buffer(size_t len)
+{
+   struct big_key_buf *buf;
+   unsigned int npg = (len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+   unsigned int i, l;
+
+   buf = kzalloc(sizeof(struct big_key_buf) +
+ sizeof(struct page) * npg +
+ sizeof(struct scatterlist) * npg,
+ GFP_KERNEL);
+   if (!buf)
+   return NULL;
+
+   buf->nr_pages = npg;
+   buf->sg = (void *)(buf->pages + npg);
+   sg_init_table(buf->sg, npg);
+
+   for (i = 0; i < buf->nr_pages; i++) {
+   buf->pages[i] = alloc_page(GFP_KERNEL);
+   if (!buf->pages[i])
+   goto nomem;
+
+   l = min(len, PAGE_SIZE);
+   sg_set_page(&buf->sg[i], buf->pages[i], l, 0);
+   len -= l;
+   }
+
+   buf->virt = vmap(buf->pages, buf->nr_pages, VM_MAP, PAGE_KERNEL);
+   if (!buf->virt)
+   goto nomem;
+
+   return buf;
+
+nomem:
+   big_key_free_buffer(buf);
+   

Re: [PATCH v3 2/3] MIPS: crypto: Add crc32 and crc32c hw accelerated module

2018-02-15 Thread Eric Biggers
On Fri, Feb 09, 2018 at 10:11:06PM +0000, James Hogan wrote:
> +static struct shash_alg crc32_alg = {
> + .digestsize =   CHKSUM_DIGEST_SIZE,
> + .setkey =   chksum_setkey,
> + .init   =   chksum_init,
> + .update =   chksum_update,
> + .final  =   chksum_final,
> + .finup  =   chksum_finup,
> + .digest =   chksum_digest,
> + .descsize   =   sizeof(struct chksum_desc_ctx),
> + .base   =   {
> + .cra_name   =   "crc32",
> + .cra_driver_name=   "crc32-mips-hw",
> + .cra_priority   =   300,
> + .cra_blocksize  =   CHKSUM_BLOCK_SIZE,
> + .cra_alignmask  =   0,
> + .cra_ctxsize=   sizeof(struct chksum_ctx),
> + .cra_module =   THIS_MODULE,
> + .cra_init   =   chksum_cra_init,
> + }
> +};
> +
> +
> +static struct shash_alg crc32c_alg = {
> +   .digestsize =   CHKSUM_DIGEST_SIZE,
> +   .setkey =   chksum_setkey,
> +   .init   =   chksum_init,
> +   .update =   chksumc_update,
> +   .final  =   chksumc_final,
> +   .finup  =   chksumc_finup,
> +   .digest =   chksumc_digest,
> +   .descsize   =   sizeof(struct chksum_desc_ctx),
> +   .base   =   {
> +   .cra_name   =   "crc32c",
> +   .cra_driver_name=   "crc32c-mips-hw",
> +   .cra_priority   =   300,
> +   .cra_blocksize  =   CHKSUM_BLOCK_SIZE,
> +   .cra_alignmask  =   0,
> +   .cra_ctxsize=   sizeof(struct chksum_ctx),
> +   .cra_module =   THIS_MODULE,
> +   .cra_init   =   chksum_cra_init,
> +   }
> +};

Does this actually work on the latest kernel?  Now hash algorithms must have
CRYPTO_ALG_OPTIONAL_KEY in .cra_flags if they have a .setkey method but don't
require it to be called, otherwise the crypto API will think it's a keyed hash
and not allow it to be used without a key.  I had to add this flag to the other
CRC32 and CRC32C algorithms (commit a208fa8f33031).  One of the CRC32C test
vectors even doesn't set a key so it should be causing the self-tests to fail
for "crc32c-mips-hw".  (We should add such a test vector for CRC32 too, though.)

Eric


Re: [PATCH v3 2/3] MIPS: crypto: Add crc32 and crc32c hw accelerated module

2018-02-15 Thread James Hogan
On Thu, Feb 15, 2018 at 04:33:16PM +0800, Herbert Xu wrote:
> Acked-by: Herbert Xu 
 
Thanks Herbert,

Series applied for 4.17.

Cheers
James




[cryptodev:master 22/38] drivers/crypto/virtio/virtio_crypto_algs.c:494:9: error: implicit declaration of function 'crypto_transfer_cipher_request_to_engine'

2018-02-15 Thread kbuild test robot
tree:   
https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
head:   a43a34845a156c9e1dae00e33595a508d53e0365
commit: 218d1cc1860c45b77f6814b44f6f0ffb9e40a82f [22/38] crypto: engine - 
Permit to enqueue all async requests
config: x86_64-randconfig-s2-02160419 (attached as .config)
compiler: gcc-6 (Debian 6.4.0-9) 6.4.0 20171026
reproduce:
git checkout 218d1cc1860c45b77f6814b44f6f0ffb9e40a82f
# save the attached .config to linux build tree
make ARCH=x86_64 

Note: the cryptodev/master HEAD a43a34845a156c9e1dae00e33595a508d53e0365 builds 
fine.
  It only hurts bisectibility.

All errors (new ones prefixed by >>):

   drivers/crypto/virtio/virtio_crypto_algs.c: In function 
'virtio_crypto_ablkcipher_encrypt':
>> drivers/crypto/virtio/virtio_crypto_algs.c:494:9: error: implicit 
>> declaration of function 'crypto_transfer_cipher_request_to_engine' 
>> [-Werror=implicit-function-declaration]
 return crypto_transfer_cipher_request_to_engine(data_vq->engine, req);
^~~~
   drivers/crypto/virtio/virtio_crypto_algs.c: In function 
'virtio_crypto_ablkcipher_finalize_req':
>> drivers/crypto/virtio/virtio_crypto_algs.c:564:2: error: implicit 
>> declaration of function 'crypto_finalize_cipher_request' 
>> [-Werror=implicit-function-declaration]
 crypto_finalize_cipher_request(vc_sym_req->base.dataq->engine,
 ^~
   cc1: some warnings being treated as errors

vim +/crypto_transfer_cipher_request_to_engine +494 
drivers/crypto/virtio/virtio_crypto_algs.c

dbaf0624 Gonglei          2016-12-15  476  
dbaf0624 Gonglei          2016-12-15  477  static int virtio_crypto_ablkcipher_encrypt(struct ablkcipher_request *req)
dbaf0624 Gonglei          2016-12-15  478  {
dbaf0624 Gonglei          2016-12-15  479  	struct crypto_ablkcipher *atfm = crypto_ablkcipher_reqtfm(req);
dbaf0624 Gonglei          2016-12-15  480  	struct virtio_crypto_ablkcipher_ctx *ctx = crypto_ablkcipher_ctx(atfm);
d31e7123 Zeng, Xin        2017-06-23  481  	struct virtio_crypto_sym_request *vc_sym_req =
d31e7123 Zeng, Xin        2017-06-23  482  				ablkcipher_request_ctx(req);
d31e7123 Zeng, Xin        2017-06-23  483  	struct virtio_crypto_request *vc_req = &vc_sym_req->base;
dbaf0624 Gonglei          2016-12-15  484  	struct virtio_crypto *vcrypto = ctx->vcrypto;
dbaf0624 Gonglei          2016-12-15  485  	/* Use the first data virtqueue as default */
dbaf0624 Gonglei          2016-12-15  486  	struct data_queue *data_vq = &vcrypto->data_vq[0];
dbaf0624 Gonglei          2016-12-15  487  
d79b5d0b Gonglei \(Arei\) 2016-12-27  488  	vc_req->dataq = data_vq;
d31e7123 Zeng, Xin        2017-06-23  489  	vc_req->alg_cb = virtio_crypto_dataq_sym_callback;
d31e7123 Zeng, Xin        2017-06-23  490  	vc_sym_req->ablkcipher_ctx = ctx;
d31e7123 Zeng, Xin        2017-06-23  491  	vc_sym_req->ablkcipher_req = req;
d31e7123 Zeng, Xin        2017-06-23  492  	vc_sym_req->encrypt = true;
dbaf0624 Gonglei          2016-12-15  493  
d79b5d0b Gonglei \(Arei\) 2016-12-27 @494  	return crypto_transfer_cipher_request_to_engine(data_vq->engine, req);
dbaf0624 Gonglei          2016-12-15  495  }
dbaf0624 Gonglei          2016-12-15  496  
dbaf0624 Gonglei          2016-12-15  497  static int virtio_crypto_ablkcipher_decrypt(struct ablkcipher_request *req)
dbaf0624 Gonglei          2016-12-15  498  {
dbaf0624 Gonglei          2016-12-15  499  	struct crypto_ablkcipher *atfm = crypto_ablkcipher_reqtfm(req);
dbaf0624 Gonglei          2016-12-15  500  	struct virtio_crypto_ablkcipher_ctx *ctx = crypto_ablkcipher_ctx(atfm);
d31e7123 Zeng, Xin        2017-06-23  501  	struct virtio_crypto_sym_request *vc_sym_req =
d31e7123 Zeng, Xin        2017-06-23  502  				ablkcipher_request_ctx(req);
d31e7123 Zeng, Xin        2017-06-23  503  	struct virtio_crypto_request *vc_req = &vc_sym_req->base;
dbaf0624 Gonglei          2016-12-15  504  	struct virtio_crypto *vcrypto = ctx->vcrypto;
dbaf0624 Gonglei          2016-12-15  505  	/* Use the first data virtqueue as default */
dbaf0624 Gonglei          2016-12-15  506  	struct data_queue *data_vq = &vcrypto->data_vq[0];
dbaf0624 Gonglei          2016-12-15  507  
d79b5d0b Gonglei \(Arei\) 2016-12-27  508  	vc_req->dataq = data_vq;
d31e7123 Zeng, Xin        2017-06-23  509  	vc_req->alg_cb = virtio_crypto_dataq_sym_callback;
d31e7123 Zeng, Xin        2017-06-23  510  	vc_sym_req->ablkcipher_ctx = ctx;
d31e7123 Zeng, Xin        2017-06-23  511  	vc_sym_req->ablkcipher_req = req;
d31e7123 Zeng, Xin        2017-06-23  512  	vc_sym_req->encrypt = false;
dbaf0624 Gonglei          2016-12-15  513  
d79b5d0b Gonglei \(Arei\) 2016-12-27  514  	return crypto_transfer_cipher_request_to_engine(data_vq->engine, req);
dbaf0624 Gonglei          2016-12-15  515  }

Re: [RFC PATCH] KEYS: Use individual pages in big_key for crypto buffers

2018-02-15 Thread David Howells
Eric Biggers  wrote:

> If big_key_alloc_buffer() fails to allocate one of the pages then some of
> the pages may still be NULL here, causing __free_page() to crash.  You need
> to check for NULL first.

Ah, yes.  I incorrectly used free_page() first - and that does check for a
NULL pointer.  Applied your other changes also.

David


Re: [RFC PATCH] KEYS: Use individual pages in big_key for crypto buffers

2018-02-15 Thread Eric Biggers
Hi David,

On Thu, Feb 15, 2018 at 03:54:26PM +0000, David Howells wrote:
> kmalloc() can't always allocate large enough buffers for big_key to use for
> crypto (1MB + some metadata) so we cannot use that to allocate the buffer.
> Further, vmalloc'd pages can't be passed to sg_init_one() and the aead
> crypto accessors cannot be called progressively and must be passed all the
> data in one go (which means we can't pass the data in one block at a time).
> 
> Fix this by allocating the buffer pages individually and passing them
> through a multientry scatterlist to the crypto layer.  This has the bonus
> advantage that we don't have to allocate a contiguous series of pages.

Thanks for fixing this.  The proposed solution is ugly but I don't have a better
one in mind.  Someday the crypto API might support virtual addresses itself, in
which case vmalloc (or kvmalloc()) could just be used.  But it's tough since
some crypto drivers really do need pages, so there would still have to be a
translation from virtual addresses to pages somewhere.

> +struct big_key_buf {
> + int nr_pages;
> + void*virt;
> + struct scatterlist  *sg;
> + struct page *pages[];
> +};

nr_pages should be an 'unsigned int' to be consistent with the rest of the code.

> + * Free up the buffer.
> + */
> +static void big_key_free_buffer(struct big_key_buf *buf)
> +{
> + unsigned int i;
> +
> + vunmap(buf->virt);
> + for (i = 0; i < buf->nr_pages; i++)
> + __free_page(buf->pages[i]);
> +
> + kfree(buf);
> +}

If big_key_alloc_buffer() fails to allocate one of the pages then some of the
pages may still be NULL here, causing __free_page() to crash.  You need to check
for NULL first.

Also how about doing the memset to 0 in here so that the callers don't thave to
remember it:

if (buf->virt) {
memset(buf->virt, 0, buf->nr_pages * PAGE_SIZE);
vunmap(buf->virt);
}

- Eric


[cryptodev:master 22/38] drivers/crypto/virtio/virtio_crypto_algs.c:494:9: error: implicit declaration of function 'crypto_transfer_cipher_request_to_engine'; did you mean 'crypto_transfer_skcipher_re

2018-02-15 Thread kbuild test robot
tree:   
https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
head:   a43a34845a156c9e1dae00e33595a508d53e0365
commit: 218d1cc1860c45b77f6814b44f6f0ffb9e40a82f [22/38] crypto: engine - 
Permit to enqueue all async requests
config: x86_64-rhel (attached as .config)
compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
reproduce:
git checkout 218d1cc1860c45b77f6814b44f6f0ffb9e40a82f
# save the attached .config to linux build tree
make ARCH=x86_64 

Note: the cryptodev/master HEAD a43a34845a156c9e1dae00e33595a508d53e0365 builds 
fine.
  It only hurts bisectibility.

All errors (new ones prefixed by >>):

   drivers/crypto/virtio/virtio_crypto_algs.c: In function 
'virtio_crypto_ablkcipher_encrypt':
>> drivers/crypto/virtio/virtio_crypto_algs.c:494:9: error: implicit 
>> declaration of function 'crypto_transfer_cipher_request_to_engine'; did you 
>> mean 'crypto_transfer_skcipher_request_to_engine'? 
>> [-Werror=implicit-function-declaration]
 return crypto_transfer_cipher_request_to_engine(data_vq->engine, req);
^~~~
crypto_transfer_skcipher_request_to_engine
   drivers/crypto/virtio/virtio_crypto_algs.c: In function 
'virtio_crypto_ablkcipher_finalize_req':
>> drivers/crypto/virtio/virtio_crypto_algs.c:564:2: error: implicit 
>> declaration of function 'crypto_finalize_cipher_request'; did you mean 
>> 'crypto_finalize_skcipher_request'? [-Werror=implicit-function-declaration]
 crypto_finalize_cipher_request(vc_sym_req->base.dataq->engine,
 ^~
 crypto_finalize_skcipher_request
   cc1: some warnings being treated as errors
--
   drivers/crypto/virtio/virtio_crypto_core.c: In function 
'virtcrypto_find_vqs':
>> drivers/crypto/virtio/virtio_crypto_core.c:115:24: error: 'struct 
>> crypto_engine' has no member named 'cipher_one_request'
  vi->data_vq[i].engine->cipher_one_request =
   ^~

vim +494 drivers/crypto/virtio/virtio_crypto_algs.c

dbaf0624 Gonglei          2016-12-15  476  
dbaf0624 Gonglei          2016-12-15  477  static int virtio_crypto_ablkcipher_encrypt(struct ablkcipher_request *req)
dbaf0624 Gonglei          2016-12-15  478  {
dbaf0624 Gonglei          2016-12-15  479  	struct crypto_ablkcipher *atfm = crypto_ablkcipher_reqtfm(req);
dbaf0624 Gonglei          2016-12-15  480  	struct virtio_crypto_ablkcipher_ctx *ctx = crypto_ablkcipher_ctx(atfm);
d31e7123 Zeng, Xin        2017-06-23  481  	struct virtio_crypto_sym_request *vc_sym_req =
d31e7123 Zeng, Xin        2017-06-23  482  				ablkcipher_request_ctx(req);
d31e7123 Zeng, Xin        2017-06-23  483  	struct virtio_crypto_request *vc_req = &vc_sym_req->base;
dbaf0624 Gonglei          2016-12-15  484  	struct virtio_crypto *vcrypto = ctx->vcrypto;
dbaf0624 Gonglei          2016-12-15  485  	/* Use the first data virtqueue as default */
dbaf0624 Gonglei          2016-12-15  486  	struct data_queue *data_vq = &vcrypto->data_vq[0];
dbaf0624 Gonglei          2016-12-15  487  
d79b5d0b Gonglei \(Arei\) 2016-12-27  488  	vc_req->dataq = data_vq;
d31e7123 Zeng, Xin        2017-06-23  489  	vc_req->alg_cb = virtio_crypto_dataq_sym_callback;
d31e7123 Zeng, Xin        2017-06-23  490  	vc_sym_req->ablkcipher_ctx = ctx;
d31e7123 Zeng, Xin        2017-06-23  491  	vc_sym_req->ablkcipher_req = req;
d31e7123 Zeng, Xin        2017-06-23  492  	vc_sym_req->encrypt = true;
dbaf0624 Gonglei          2016-12-15  493  
d79b5d0b Gonglei \(Arei\) 2016-12-27 @494  	return crypto_transfer_cipher_request_to_engine(data_vq->engine, req);
dbaf0624 Gonglei          2016-12-15  495  }
dbaf0624 Gonglei          2016-12-15  496  
dbaf0624 Gonglei          2016-12-15  497  static int virtio_crypto_ablkcipher_decrypt(struct ablkcipher_request *req)
dbaf0624 Gonglei          2016-12-15  498  {
dbaf0624 Gonglei          2016-12-15  499  	struct crypto_ablkcipher *atfm = crypto_ablkcipher_reqtfm(req);
dbaf0624 Gonglei          2016-12-15  500  	struct virtio_crypto_ablkcipher_ctx *ctx = crypto_ablkcipher_ctx(atfm);
d31e7123 Zeng, Xin        2017-06-23  501  	struct virtio_crypto_sym_request *vc_sym_req =
d31e7123 Zeng, Xin        2017-06-23  502  				ablkcipher_request_ctx(req);
d31e7123 Zeng, Xin        2017-06-23  503  	struct virtio_crypto_request *vc_req = &vc_sym_req->base;
dbaf0624 Gonglei          2016-12-15  504  	struct virtio_crypto *vcrypto = ctx->vcrypto;
dbaf0624 Gonglei          2016-12-15  505  	/* Use the first data virtqueue as default */
dbaf0624 Gonglei          2016-12-15  506  	struct data_queue *data_vq = &vcrypto->data_vq[0];
dbaf0624 Gonglei          2016-12-15  507  
d79b5d0b Gonglei \(Arei\) 2016-12-27  508  	vc_req->dataq = data_vq;
d31e7123 Zeng, Xin        2017-06-23  509  	vc_req->alg_cb = virtio_crypto_dataq_sym_callback;
d31e7123 Zeng, Xin

[PATCH 1/2] crypto: ccp: Fix sparse, use plain integer as NULL pointer

2018-02-15 Thread Brijesh Singh
Fix sparse warning: Using plain integer as NULL pointer. Replaces
assignment of 0 to pointer with NULL assignment.

Fixes: 200664d5237f (Add Secure Encrypted Virtualization ...)
Cc: Borislav Petkov 
Cc: Herbert Xu 
Cc: Gary Hook 
Cc: Tom Lendacky 
Signed-off-by: Brijesh Singh 
---
 drivers/crypto/ccp/psp-dev.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c
index fcfa5b1eae61..b3afb6cc9d72 100644
--- a/drivers/crypto/ccp/psp-dev.c
+++ b/drivers/crypto/ccp/psp-dev.c
@@ -211,7 +211,7 @@ static int __sev_platform_shutdown_locked(int *error)
 {
int ret;
 
-   ret = __sev_do_cmd_locked(SEV_CMD_SHUTDOWN, 0, error);
+   ret = __sev_do_cmd_locked(SEV_CMD_SHUTDOWN, NULL, error);
if (ret)
return ret;
 
@@ -271,7 +271,7 @@ static int sev_ioctl_do_reset(struct sev_issue_cmd *argp)
return rc;
}
 
-   return __sev_do_cmd_locked(SEV_CMD_FACTORY_RESET, 0, &argp->error);
+   return __sev_do_cmd_locked(SEV_CMD_FACTORY_RESET, NULL, &argp->error);
 }
 
 static int sev_ioctl_do_platform_status(struct sev_issue_cmd *argp)
@@ -299,7 +299,7 @@ static int sev_ioctl_do_pek_pdh_gen(int cmd, struct sev_issue_cmd *argp)
return rc;
}
 
-   return __sev_do_cmd_locked(cmd, 0, &argp->error);
+   return __sev_do_cmd_locked(cmd, NULL, &argp->error);
 }
 
 static int sev_ioctl_do_pek_csr(struct sev_issue_cmd *argp)
@@ -624,7 +624,7 @@ EXPORT_SYMBOL_GPL(sev_guest_decommission);
 
 int sev_guest_df_flush(int *error)
 {
-   return sev_do_cmd(SEV_CMD_DF_FLUSH, 0, error);
+   return sev_do_cmd(SEV_CMD_DF_FLUSH, NULL, error);
 }
 EXPORT_SYMBOL_GPL(sev_guest_df_flush);
 
-- 
2.14.3



[PATCH 2/2] include: psp-sev: Capitalize invalid length enum

2018-02-15 Thread Brijesh Singh
Commit 1d57b17c60ff ("crypto: ccp: Define SEV userspace ioctl and command
id") added the invalid length enum but we missed capitalizing it.

Fixes: 1d57b17c60ff (crypto: ccp: Define SEV userspace ioctl ...)
Cc: Herbert Xu 
Cc: Borislav Petkov 
Cc: Tom Lendacky 
CC: Gary R Hook 
Signed-off-by: Brijesh Singh 
---
 include/uapi/linux/psp-sev.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/uapi/linux/psp-sev.h b/include/uapi/linux/psp-sev.h
index 3d77fe91239a..9008f31c7eb6 100644
--- a/include/uapi/linux/psp-sev.h
+++ b/include/uapi/linux/psp-sev.h
@@ -42,7 +42,7 @@ typedef enum {
SEV_RET_INVALID_PLATFORM_STATE,
SEV_RET_INVALID_GUEST_STATE,
SEV_RET_INAVLID_CONFIG,
-   SEV_RET_INVALID_len,
+   SEV_RET_INVALID_LEN,
SEV_RET_ALREADY_OWNED,
SEV_RET_INVALID_CERTIFICATE,
SEV_RET_POLICY_FAILURE,
-- 
2.14.3



Re: [PATCH v3 0/5] crypto: ahash.c: Require export/import in ahash

2018-02-15 Thread Marek Vasut
On 02/15/2018 07:06 PM, Kamil Konieczny wrote:
> 
> 
> On 15.02.2018 18:06, Marek Vasut wrote:
>> On 02/15/2018 06:00 PM, Kamil Konieczny wrote:
>>>
>>>
>>> On 15.02.2018 17:27, Marek Vasut wrote:
 On 02/15/2018 04:41 PM, Herbert Xu wrote:
> On Thu, Jan 18, 2018 at 07:33:59PM +0100, Kamil Konieczny wrote:
>> First four patches add empty hash export and import functions to each 
>> driver,
>> with the same behaviour as in crypto framework. The last one drops them 
>> from
>> crypto framework. Last one for ahash.c depends on all previous.
>>
>> Changes in v3:
>> added change for bfin_crc.c
>> make this a patchset, instead of unreleated patches
>> make commit message more descriptive
>>
>> Kamil Konieczny (5):
>>   crypto: mxs-dcp: Add empty hash export and import
>>   crypto: n2_core: Add empty hash export and import
>>   crypto: ux500/hash: Add empty export and import
>>   crypto: bfin_crc: Add empty hash export and import
>>   crypto: ahash.c: Require export/import in ahash
>>
>>  crypto/ahash.c| 18 ++
>>  drivers/crypto/bfin_crc.c | 12 
>>  drivers/crypto/mxs-dcp.c  | 14 ++
>>  drivers/crypto/n2_core.c  | 12 
>>  drivers/crypto/ux500/hash/hash_core.c | 18 ++
>>  5 files changed, 58 insertions(+), 16 deletions(-)
>
> All applied.  Thanks.

 This makes no sense, cfr my comment on 5/5

 Seems like if the driver doesn't implement those, the core can easily
 detect that and perform the necessary action. Moving the checks out of
 core seems like the wrong thing to do, rather you should enhance the
 checks in core if they're insufficient in my opinion.
>>>
>>> The bug can only be in driver which will not implement those two functions,
>>> but we already had all drivers with those due to patches 1..4
>>> All other drivers do have them.
>>
>> The core can very well check if these functions are not populated and
>> return ENOSYS
>>
>>> Additionally, with crypto we want minimize code and run as fast as possible.
>>
>> So you remove all NULL pointer checks ? Esp. in security-sensitive code?
>> What is the impact of this non-critical path code on performance?
>>
>> Come on ...
>>
> 
> Why you want checks for something that not exist ?
> 
> Those without them will not work and will do Oops in crypto testmgr,
> so such drivers should not be used nor accepted in drivers/crypto
> 
> Ask yourself why crypto do not check for NULL in ahash digest or other
> required ahash functions.

Are you suggesting that the kernel code should NOT perform NULL pointer
checks ?

Are you suggesting each driver should implement every single callback
available and if it is not implemented, return -ENOSYS ? This looks like
a MASSIVE code duplication.

>>> Moving checks out of core will impose on driver author need for implement
>>> those functions, or declare them empty, but in case of empty ones 
>>> crypto will not work properly with such driver.
>>
>> You can very well impose that in the core, except you don't duplicate
>> the code.
> 
> Now size of crypto core is reduced.

You implemented the same code thrice, it surely is not reduced.

-- 
Best regards,
Marek Vasut


Re: [PATCH v3 1/4] crypto: AF_ALG AIO - lock context IV

2018-02-15 Thread Stephan Mueller
Am Donnerstag, 15. Februar 2018, 14:04:53 CET schrieb Stephan Mueller:

Hi Stephan,

> Am Donnerstag, 15. Februar 2018, 13:45:53 CET schrieb Harsh Jain:
> 
> Hi Harsh,
> 
> > > Could you please elaborate what you mean with "partial tag" support?
> > 
> > Here is the catch, Calculation of tag depends on total payload length
> > atleast for shaX, gcm,ccm mode on which I have worked.
> > 
> > If we take an example of shaX. It appends 1 special block at the end of
> > user data which includes total input length in bit. Refer
> > "sha1_base_do_finalize" Suppose we have 32 byte and we break this in 2
> > IOCB
> > of 16 bytes each. Expected result : 32 encrypted bytes + sha auth tag
> > considering length 32 bytes. What we will  get : 16 bytes + sha auth tag
> > considering length 16 bytes + 16 encrypted bytes + another sha tag
> > considering 16 bytes.
> 
> As AF_ALG for AEAD is implemented, there is no stream support where the hash
> is calculated at the end. This is even not supported in the current AEAD
> API of the kernel crypto API as far as I see. The only "stream-like"
> support is that you can invoke multiple separate sendmsg calls to provide
> the input data for the AEAD. But once you call recvmsg, the ciphertext and
> the tag is calculated and thus the recvmsg is akin to a hash_final
> operation.
> 
> Complete parallel execution of multiple independent AEAD cipher operations
> with AIO is possible using the inline IV as suggested in the patch set.
> 
> The locking of the ctx->iv is only there to support a driver not to whack
> the IV buffer. However, if others also agree that the ctx->ivlock should
> not be taken in the AEAD code path because a potential whacking the IV
> buffer by a driver does not need to be prevented, I would change the patch.

I was about to change the implementation. However, I already found an 
implementation where the driver changes the IV of the request buffer: 
crypto_ccm_init_crypt modifies the original buffer of the IV. I expect that 
there are more.

Therefore, I would think that keeping the patch set as it is right now is the 
right thing to do. Thus, we should cover the AEAD ciphers with the lock and 
thus serialize the AEAD request. This guarantees that the IV buffer is at 
least not mangled while the cipher operation is ongoing.

Again, if a particular AEAD driver guarantees the IV is not mangled, it can 
very well set CRYPTO_ALG_SERIALIZES_IV_ACCESS in its flag set. This would 
imply the lock is not set.

Ciao
Stephan




Re: [PATCH v3 0/5] crypto: ahash.c: Require export/import in ahash

2018-02-15 Thread Kamil Konieczny


On 15.02.2018 18:06, Marek Vasut wrote:
> On 02/15/2018 06:00 PM, Kamil Konieczny wrote:
>>
>>
>> On 15.02.2018 17:27, Marek Vasut wrote:
>>> On 02/15/2018 04:41 PM, Herbert Xu wrote:
 On Thu, Jan 18, 2018 at 07:33:59PM +0100, Kamil Konieczny wrote:
> First four patches add empty hash export and import functions to each 
> driver,
> with the same behaviour as in crypto framework. The last one drops them 
> from
> crypto framework. Last one for ahash.c depends on all previous.
>
> Changes in v3:
> added change for bfin_crc.c
> make this a patchset, instead of unreleated patches
> make commit message more descriptive
>
> Kamil Konieczny (5):
>   crypto: mxs-dcp: Add empty hash export and import
>   crypto: n2_core: Add empty hash export and import
>   crypto: ux500/hash: Add empty export and import
>   crypto: bfin_crc: Add empty hash export and import
>   crypto: ahash.c: Require export/import in ahash
>
>  crypto/ahash.c| 18 ++
>  drivers/crypto/bfin_crc.c | 12 
>  drivers/crypto/mxs-dcp.c  | 14 ++
>  drivers/crypto/n2_core.c  | 12 
>  drivers/crypto/ux500/hash/hash_core.c | 18 ++
>  5 files changed, 58 insertions(+), 16 deletions(-)

 All applied.  Thanks.
>>>
>>> This makes no sense, cfr my comment on 5/5
>>>
>>> Seems like if the driver doesn't implement those, the core can easily
>>> detect that and perform the necessary action. Moving the checks out of
>>> core seems like the wrong thing to do, rather you should enhance the
>>> checks in core if they're insufficient in my opinion.
>>
>> The bug can only be in driver which will not implement those two functions,
>> but we already had all drivers with those due to patches 1..4
>> All other drivers do have them.
> 
> The core can very well check if these functions are not populated and
> return ENOSYS
> 
>> Additionally, with crypto we want minimize code and run as fast as possible.
> 
> So you remove all NULL pointer checks ? Esp. in security-sensitive code?
> What is the impact of this non-critical path code on performance?
> 
> Come on ...
> 

Why do you want checks for something that does not exist?

Drivers without them will not work and will Oops in the crypto testmgr,
so such drivers should not be used nor accepted into drivers/crypto.

Ask yourself why crypto does not check for NULL in ahash digest or other
required ahash functions.

>> Moving checks out of core will impose on driver author need for implement
>> those functions, or declare them empty, but in case of empty ones 
>> crypto will not work properly with such driver.
> 
> You can very well impose that in the core, except you don't duplicate
> the code.

Now the size of the crypto core is reduced.

-- 
Best regards,
Kamil Konieczny
Samsung R&D Institute Poland



Re: [PATCH v3 0/5] crypto: ahash.c: Require export/import in ahash

2018-02-15 Thread Marek Vasut
On 02/15/2018 06:00 PM, Kamil Konieczny wrote:
> 
> 
> On 15.02.2018 17:27, Marek Vasut wrote:
>> On 02/15/2018 04:41 PM, Herbert Xu wrote:
>>> On Thu, Jan 18, 2018 at 07:33:59PM +0100, Kamil Konieczny wrote:
 First four patches add empty hash export and import functions to each 
 driver,
 with the same behaviour as in crypto framework. The last one drops them 
 from
 crypto framework. Last one for ahash.c depends on all previous.

 Changes in v3:
 added change for bfin_crc.c
 make this a patchset, instead of unreleated patches
 make commit message more descriptive

 Kamil Konieczny (5):
   crypto: mxs-dcp: Add empty hash export and import
   crypto: n2_core: Add empty hash export and import
   crypto: ux500/hash: Add empty export and import
   crypto: bfin_crc: Add empty hash export and import
   crypto: ahash.c: Require export/import in ahash

  crypto/ahash.c| 18 ++
  drivers/crypto/bfin_crc.c | 12 
  drivers/crypto/mxs-dcp.c  | 14 ++
  drivers/crypto/n2_core.c  | 12 
  drivers/crypto/ux500/hash/hash_core.c | 18 ++
  5 files changed, 58 insertions(+), 16 deletions(-)
>>>
>>> All applied.  Thanks.
>>
>> This makes no sense, cfr my comment on 5/5
>>
>> Seems like if the driver doesn't implement those, the core can easily
>> detect that and perform the necessary action. Moving the checks out of
>> core seems like the wrong thing to do, rather you should enhance the
>> checks in core if they're insufficient in my opinion.
> 
> The bug can only be in driver which will not implement those two functions,
> but we already had all drivers with those due to patches 1..4
> All other drivers do have them.

The core can very well check if these functions are not populated and
return ENOSYS

> Additionally, with crypto we want minimize code and run as fast as possible.

So you remove all NULL pointer checks ? Esp. in security-sensitive code?
What is the impact of this non-critical path code on performance?

Come on ...

> Moving checks out of core will impose on driver author need for implement
> those functions, or declare them empty, but in case of empty ones 
> crypto will not work properly with such driver.

You can very well impose that in the core, except you don't duplicate
the code.

-- 
Best regards,
Marek Vasut


Re: [PATCH v3 0/5] crypto: ahash.c: Require export/import in ahash

2018-02-15 Thread Kamil Konieczny


On 15.02.2018 17:27, Marek Vasut wrote:
> On 02/15/2018 04:41 PM, Herbert Xu wrote:
>> On Thu, Jan 18, 2018 at 07:33:59PM +0100, Kamil Konieczny wrote:
>>> First four patches add empty hash export and import functions to each 
>>> driver,
>>> with the same behaviour as in crypto framework. The last one drops them from
>>> crypto framework. Last one for ahash.c depends on all previous.
>>>
>>> Changes in v3:
>>> added change for bfin_crc.c
>>> make this a patchset, instead of unreleated patches
>>> make commit message more descriptive
>>>
>>> Kamil Konieczny (5):
>>>   crypto: mxs-dcp: Add empty hash export and import
>>>   crypto: n2_core: Add empty hash export and import
>>>   crypto: ux500/hash: Add empty export and import
>>>   crypto: bfin_crc: Add empty hash export and import
>>>   crypto: ahash.c: Require export/import in ahash
>>>
>>>  crypto/ahash.c| 18 ++
>>>  drivers/crypto/bfin_crc.c | 12 
>>>  drivers/crypto/mxs-dcp.c  | 14 ++
>>>  drivers/crypto/n2_core.c  | 12 
>>>  drivers/crypto/ux500/hash/hash_core.c | 18 ++
>>>  5 files changed, 58 insertions(+), 16 deletions(-)
>>
>> All applied.  Thanks.
> 
> This makes no sense, cfr my comment on 5/5
> 
> Seems like if the driver doesn't implement those, the core can easily
> detect that and perform the necessary action. Moving the checks out of
> core seems like the wrong thing to do, rather you should enhance the
> checks in core if they're insufficient in my opinion.

The bug can only be in a driver which does not implement those two functions,
but we already had all drivers with those due to patches 1..4.
All other drivers do have them.

Additionally, with crypto we want to minimize code and run as fast as possible.

Moving checks out of the core will impose on driver authors the need to implement
those functions, or declare them empty, but in the case of empty ones
crypto will not work properly with such a driver.

-- 
Best regards,
Kamil Konieczny
Samsung R&D Institute Poland



[PATCH] crypto: fix memdup.cocci warnings

2018-02-15 Thread kbuild test robot
From: Fengguang Wu 

drivers/crypto/ccree/cc_cipher.c:629:15-22: WARNING opportunity for kmemdep

 Use kmemdup rather than duplicating its implementation

Generated by: scripts/coccinelle/api/memdup.cocci

Fixes: 63ee04c8b491 ("crypto: ccree - add skcipher support")
CC: Gilad Ben-Yossef 
Signed-off-by: Fengguang Wu 
---

 cc_cipher.c |3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

--- a/drivers/crypto/ccree/cc_cipher.c
+++ b/drivers/crypto/ccree/cc_cipher.c
@@ -626,12 +626,11 @@ static int cc_cipher_process(struct skci
/* The IV we are handed may be allocted from the stack so
 * we must copy it to a DMAable buffer before use.
 */
-   req_ctx->iv = kmalloc(ivsize, flags);
+   req_ctx->iv = kmemdup(iv, ivsize, flags);
if (!req_ctx->iv) {
rc = -ENOMEM;
goto exit_process;
}
-   memcpy(req_ctx->iv, iv, ivsize);
 
/*For CTS in case of data size aligned to 16 use CBC mode*/
if (((nbytes % AES_BLOCK_SIZE) == 0) &&


[cryptodev:master 9/38] drivers/crypto/ccree/cc_cipher.c:629:15-22: WARNING opportunity for kmemdep

2018-02-15 Thread kbuild test robot
tree:   
https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
head:   a43a34845a156c9e1dae00e33595a508d53e0365
commit: 63ee04c8b491ee148489347e7da9fbfd982ca2bb [9/38] crypto: ccree - add 
skcipher support


coccinelle warnings: (new ones prefixed by >>)

>> drivers/crypto/ccree/cc_cipher.c:629:15-22: WARNING opportunity for kmemdep

Please review and possibly fold the followup patch.

---
0-DAY kernel test infrastructureOpen Source Technology Center
https://lists.01.org/pipermail/kbuild-all   Intel Corporation


Re: [PATCH v2 0/7] crypto: ccree: Introduce Arm TrustZone CryptoCell

2018-02-15 Thread Herbert Xu
On Mon, Jan 22, 2018 at 09:26:58AM +, Gilad Ben-Yossef wrote:
> Arm TrustZone CryptoCell is a security hardware IP that
> includes support for hardware based hash, digest, cipher
> and AEAD operations. This driver provides support for
> these as part of the Linux Crypto sub-system.
> 
> The driver spent some time now in the staging tree being
> cleaned up and is now submitted for review for the
> purpose of moving into the crypto tree. The first patch
> therefore renames its copy in the staging directory
> Kconfig define and marks it broken, otherwise there is a
> build failure due to global symbols collisions.
> 
> Please note that the driver include stubs for yet
> unexposed functionality (ivgen and secure HW keys),
> which will be added later.
> 
> Signed-off-by: Gilad Ben-Yossef 
> 
> Changes from v1:
> - Use KConfig directive to stop staging tree version to not
>   collide during link time as opposed to deleting it as indicated
>   by Greg KH.
> - Switched from legacy ablkcipher to skcipher interface as
>   indicated by Corentin Labbe.
> - Removed unused zero_buff struct as indicated by Corentin Labbe.
> - Moved to kzfree for all IV/key buffers as indicated by
>   Corentin Labbe.
> - Moved to using __des3_ede_setkey() in lieu of home grown
>   version as indicated by Stephan Mueller.
> - Fixed multiple small coding style from Dan Carpenter and others.
> - Fixed pointer signedness sparse warnings as indicated by Jeremy
>   Sowden.
> - Rename all registered algs driver name to -ccree prefix
> 
> Gilad Ben-Yossef (7):
>   staging: ccree: rename staging ver and mark as broken
>   crypto: ccree: introduce CryptoCell driver
>   crypto: ccree: add skcipher support
>   crypto: ccree: add ahash support
>   crypto: ccree: add AEAD support
>   crypto: ccree: add FIPS support
>   MAINTAINERS: update ccree entry

All applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


RE: [Crypto v5 03/12] support for inline tls

2018-02-15 Thread Atul Gupta
> > > @@ -401,6 +430,15 @@ static int do_tls_setsockopt_tx(struct sock *sk, 
> > > char __user *optval,
> > >   goto out;
> > >   }
> > >  
> > > + rc = get_tls_offload_dev(sk);
> > > + if (rc) {
> > > + goto out;
> > > + } else {
> > > + /* Retain HW unhash for cleanup and move to SW Tx */
> > > + sk->sk_prot[TLS_BASE_TX].unhash =
> > > + sk->sk_prot[TLS_FULL_HW].unhash;
> > 
> > Isn't sk->sk_prot a pointer to a global shared struct here still?  It looks 
> > like this would actually modify the global struct, and not just for this sk.
> Yes, its global. It require add on check to modify only when tls_offload dev 
> list has an entry. I will revisit and correct. 
> 
> Can you look through other changes please?

I looked through 1,2,3,11 (the tls-related ones) and don't have any other code 
comments.  Patch 11 commit message still mentions ULP, could use updating / 
clarification.

Thank you, I will collate and clean for v6.



Re: [PATCH v3 0/5] crypto: ahash.c: Require export/import in ahash

2018-02-15 Thread Herbert Xu
On Thu, Jan 18, 2018 at 07:33:59PM +0100, Kamil Konieczny wrote:
> First four patches add empty hash export and import functions to each driver,
> with the same behaviour as in crypto framework. The last one drops them from
> crypto framework. Last one for ahash.c depends on all previous.
> 
> Changes in v3:
> added change for bfin_crc.c
> make this a patchset, instead of unreleated patches
> make commit message more descriptive
> 
> Kamil Konieczny (5):
>   crypto: mxs-dcp: Add empty hash export and import
>   crypto: n2_core: Add empty hash export and import
>   crypto: ux500/hash: Add empty export and import
>   crypto: bfin_crc: Add empty hash export and import
>   crypto: ahash.c: Require export/import in ahash
> 
>  crypto/ahash.c| 18 ++
>  drivers/crypto/bfin_crc.c | 12 
>  drivers/crypto/mxs-dcp.c  | 14 ++
>  drivers/crypto/n2_core.c  | 12 
>  drivers/crypto/ux500/hash/hash_core.c | 18 ++
>  5 files changed, 58 insertions(+), 16 deletions(-)

All applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH -next] hwrng: make symbol imx_rngc_pm_ops static

2018-02-15 Thread Herbert Xu
On Tue, Jan 23, 2018 at 02:08:56AM +, Wei Yongjun wrote:
> Fixes the following sparse warnings:
> 
> drivers/char/hw_random/imx-rngc.c:303:1: warning:
>  symbol 'imx_rngc_pm_ops' was not declared. Should it be static?
> 
> Signed-off-by: Wei Yongjun 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] crypto: mcryptd - remove pointless wrapper functions

2018-02-15 Thread Herbert Xu
On Wed, Jan 24, 2018 at 07:09:07PM -0800, Eric Biggers wrote:
> From: Eric Biggers 
> 
> There is no need for ahash_mcryptd_{update,final,finup,digest}(); we
> should just call crypto_ahash_*() directly.
> 
> Signed-off-by: Eric Biggers 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v3 0/5] crypto: ahash.c: Require export/import in ahash

2018-02-15 Thread Marek Vasut
On 02/15/2018 04:41 PM, Herbert Xu wrote:
> On Thu, Jan 18, 2018 at 07:33:59PM +0100, Kamil Konieczny wrote:
>> First four patches add empty hash export and import functions to each driver,
>> with the same behaviour as in crypto framework. The last one drops them from
>> crypto framework. Last one for ahash.c depends on all previous.
>>
>> Changes in v3:
>> added change for bfin_crc.c
>> make this a patchset, instead of unreleated patches
>> make commit message more descriptive
>>
>> Kamil Konieczny (5):
>>   crypto: mxs-dcp: Add empty hash export and import
>>   crypto: n2_core: Add empty hash export and import
>>   crypto: ux500/hash: Add empty export and import
>>   crypto: bfin_crc: Add empty hash export and import
>>   crypto: ahash.c: Require export/import in ahash
>>
>>  crypto/ahash.c| 18 ++
>>  drivers/crypto/bfin_crc.c | 12 
>>  drivers/crypto/mxs-dcp.c  | 14 ++
>>  drivers/crypto/n2_core.c  | 12 
>>  drivers/crypto/ux500/hash/hash_core.c | 18 ++
>>  5 files changed, 58 insertions(+), 16 deletions(-)
> 
> All applied.  Thanks.

This makes no sense, cfr my comment on 5/5

Seems like if the driver doesn't implement those, the core can easily
detect that and perform the necessary action. Moving the checks out of
core seems like the wrong thing to do, rather you should enhance the
checks in core if they're insufficient in my opinion.

-- 
Best regards,
Marek Vasut


Re: [PATCH] crypto: sha1-mb - remove HASH_FIRST flag

2018-02-15 Thread Herbert Xu
On Wed, Jan 24, 2018 at 07:10:08PM -0800, Eric Biggers wrote:
> From: Eric Biggers 
> 
> The HASH_FIRST flag is never set.  Remove it.
> 
> Signed-off-by: Eric Biggers 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [Crypto v5 03/12] support for inline tls

2018-02-15 Thread Dave Watson
On 02/15/18 04:10 PM, Atul Gupta wrote:
> > -Original Message-
> > From: Dave Watson [mailto:davejwat...@fb.com] 
> > Sent: Thursday, February 15, 2018 9:22 PM
> > To: Atul Gupta 
> > Cc: da...@davemloft.net; herb...@gondor.apana.org.au; s...@queasysnail.net; 
> > linux-crypto@vger.kernel.org; net...@vger.kernel.org; Ganesh GR 
> > 
> > Subject: Re: [Crypto v5 03/12] support for inline tls
> > 
> > On 02/15/18 12:24 PM, Atul Gupta wrote:
> > > @@ -401,6 +430,15 @@ static int do_tls_setsockopt_tx(struct sock *sk, 
> > > char __user *optval,
> > >   goto out;
> > >   }
> > >  
> > > + rc = get_tls_offload_dev(sk);
> > > + if (rc) {
> > > + goto out;
> > > + } else {
> > > + /* Retain HW unhash for cleanup and move to SW Tx */
> > > + sk->sk_prot[TLS_BASE_TX].unhash =
> > > + sk->sk_prot[TLS_FULL_HW].unhash;
> > 
> > Isn't sk->sk_prot a pointer to a global shared struct here still?  It looks 
> > like this would actually modify the global struct, and not just for this sk.
> Yes, its global. It require add on check to modify only when tls_offload dev 
> list has an entry. I will revisit and correct. 
> 
> Can you look through other changes please?

I looked through 1,2,3,11 (the tls-related ones) and don't have any
other code comments.  Patch 11 commit message still mentions ULP,
could use updating / clarification.



Re: [PATCH] crypto: crypto_user: Replace GFP_ATOMIC with GFP_KERNEL in crypto_report

2018-02-15 Thread Herbert Xu
On Thu, Jan 25, 2018 at 06:06:02PM +0800, Jia-Ju Bai wrote:
> After checking all possible call chains to crypto_report here,
> my tool finds that crypto_report is never called in atomic context.
> And crypto_report calls crypto_alg_match which calls down_read, 
> thus it proves again that crypto_report can call functions which may sleep.
> Thus GFP_ATOMIC is not necessary, and it can be replaced with GFP_KERNEL.
> 
> This is found by a static analysis tool named DCNS written by myself.
> 
> Signed-off-by: Jia-Ju Bai 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] crypto: rsa-pkcs1pad: Replace GFP_ATOMIC with GFP_KERNEL in pkcs1pad_encrypt_sign_complete

2018-02-15 Thread Herbert Xu
On Thu, Jan 25, 2018 at 05:57:54PM +0800, Jia-Ju Bai wrote:
> After checking all possible call chains to kzalloc here,
> my tool finds that this kzalloc is never called in atomic context.
> Thus GFP_ATOMIC is not necessary, and it can be replaced with GFP_KERNEL.
> 
> This is found by a static analysis tool named DCNS written by myself.
> 
> Signed-off-by: Jia-Ju Bai 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


RE: [Crypto v5 03/12] support for inline tls

2018-02-15 Thread Atul Gupta


-Original Message-
From: Dave Watson [mailto:davejwat...@fb.com] 
Sent: Thursday, February 15, 2018 9:22 PM
To: Atul Gupta 
Cc: da...@davemloft.net; herb...@gondor.apana.org.au; s...@queasysnail.net; 
linux-crypto@vger.kernel.org; net...@vger.kernel.org; Ganesh GR 

Subject: Re: [Crypto v5 03/12] support for inline tls

On 02/15/18 12:24 PM, Atul Gupta wrote:
> @@ -401,6 +430,15 @@ static int do_tls_setsockopt_tx(struct sock *sk, char 
> __user *optval,
>   goto out;
>   }
>  
> + rc = get_tls_offload_dev(sk);
> + if (rc) {
> + goto out;
> + } else {
> + /* Retain HW unhash for cleanup and move to SW Tx */
> + sk->sk_prot[TLS_BASE_TX].unhash =
> + sk->sk_prot[TLS_FULL_HW].unhash;

Isn't sk->sk_prot a pointer to a global shared struct here still?  It looks 
like this would actually modify the global struct, and not just for this sk.
Yes, it's global. It requires an additional check to modify it only when the 
tls_offload dev list has an entry. I will revisit and correct. Can you look 
through the other changes please?

Thanks
Atul



[RFC PATCH] KEYS: Use individual pages in big_key for crypto buffers

2018-02-15 Thread David Howells
kmalloc() can't always allocate large enough buffers for big_key to use for
crypto (1MB + some metadata) so we cannot use that to allocate the buffer.
Further, vmalloc'd pages can't be passed to sg_init_one() and the aead
crypto accessors cannot be called progressively and must be passed all the
data in one go (which means we can't pass the data in one block at a time).

Fix this by allocating the buffer pages individually and passing them
through a multientry scatterlist to the crypto layer.  This has the bonus
advantage that we don't have to allocate a contiguous series of pages.

We then vmap() the page list and pass that through to the VFS read/write
routines.

This can trigger a warning:

WARNING: CPU: 0 PID: 60912 at mm/page_alloc.c:3883 
__alloc_pages_nodemask+0xb7c/0x15f8
([<002acbb6>] __alloc_pages_nodemask+0x1ee/0x15f8)
 [<002dd356>] kmalloc_order+0x46/0x90
 [<002dd3e0>] kmalloc_order_trace+0x40/0x1f8
 [<00326a10>] __kmalloc+0x430/0x4c0
 [<004343e4>] big_key_preparse+0x7c/0x210
 [<0042c040>] key_create_or_update+0x128/0x420
 [<0042e52c>] SyS_add_key+0x124/0x220
 [<007bba2c>] system_call+0xc4/0x2b0

from the keyctl/padd/useradd test of the keyutils testsuite on s390x.

Note that it might be better to shovel data through in page-sized lumps
instead as there's no particular need to use a monolithic buffer unless the
kernel itself wants to access the data.

Fixes: 13100a72f40f ("Security: Keys: Big keys stored encrypted")
Reported-by: Paul Bunyan 
Signed-off-by: David Howells 
cc: Kirill Marinushkin 
---

 security/keys/big_key.c |  108 +--
 1 file changed, 85 insertions(+), 23 deletions(-)

diff --git a/security/keys/big_key.c b/security/keys/big_key.c
index 929e14978c42..0ffea9601619 100644
--- a/security/keys/big_key.c
+++ b/security/keys/big_key.c
@@ -22,6 +22,13 @@
 #include 
 #include 
 
+struct big_key_buf {
+   int nr_pages;
+   void*virt;
+   struct scatterlist  *sg;
+   struct page *pages[];
+};
+
 /*
  * Layout of key payload words.
  */
@@ -91,10 +98,9 @@ static DEFINE_MUTEX(big_key_aead_lock);
 /*
  * Encrypt/decrypt big_key data
  */
-static int big_key_crypt(enum big_key_op op, u8 *data, size_t datalen, u8 *key)
+static int big_key_crypt(enum big_key_op op, struct big_key_buf *buf, size_t 
datalen, u8 *key)
 {
int ret;
-   struct scatterlist sgio;
struct aead_request *aead_req;
/* We always use a zero nonce. The reason we can get away with this is
 * because we're using a different randomly generated key for every
@@ -109,8 +115,7 @@ static int big_key_crypt(enum big_key_op op, u8 *data, 
size_t datalen, u8 *key)
return -ENOMEM;
 
memset(zero_nonce, 0, sizeof(zero_nonce));
-   sg_init_one(&sgio, data, datalen + (op == BIG_KEY_ENC ? ENC_AUTHTAG_SIZE : 0));
-   aead_request_set_crypt(aead_req, &sgio, &sgio, datalen, zero_nonce);
+   aead_request_set_crypt(aead_req, buf->sg, buf->sg, datalen, zero_nonce);
+   aead_request_set_crypt(aead_req, buf->sg, buf->sg, datalen, zero_nonce);
aead_request_set_callback(aead_req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, 
NULL);
aead_request_set_ad(aead_req, 0);
 
@@ -130,21 +135,76 @@ static int big_key_crypt(enum big_key_op op, u8 *data, 
size_t datalen, u8 *key)
 }
 
 /*
+ * Free up the buffer.
+ */
+static void big_key_free_buffer(struct big_key_buf *buf)
+{
+   unsigned int i;
+
+   vunmap(buf->virt);
+   for (i = 0; i < buf->nr_pages; i++)
+   __free_page(buf->pages[i]);
+
+   kfree(buf);
+}
+
+/*
+ * Allocate a buffer consisting of a set of pages with a virtual mapping
+ * applied over them.
+ */
+static void *big_key_alloc_buffer(size_t len)
+{
+   struct big_key_buf *buf;
+   unsigned int npg = (len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+   unsigned int i, l;
+
+   buf = kzalloc(sizeof(struct big_key_buf) +
+ sizeof(struct page) * npg +
+ sizeof(struct scatterlist) * npg,
+ GFP_KERNEL);
+   if (!buf)
+   return NULL;
+
+   buf->nr_pages = npg;
+   buf->sg = (void *)(buf->pages + npg);
+   sg_init_table(buf->sg, npg);
+
+   for (i = 0; i < buf->nr_pages; i++) {
+   buf->pages[i] = alloc_page(GFP_KERNEL);
+   if (!buf->pages[i])
+   goto nomem;
+
+   l = min(len, PAGE_SIZE);
+   sg_set_page(&buf->sg[i], buf->pages[i], l, 0);
+   len -= l;
+   }
+
+   buf->virt = vmap(buf->pages, buf->nr_pages, VM_MAP, PAGE_KERNEL);
+   if (!buf->virt)
+   goto nomem;
+
+   return buf;
+
+nomem:
+   big_key_free_buffer(buf);
+   return NULL;
+}
+
+/*
  * Preparse a big key
  */
 int big_key_preparse(struct 

Re: [PATCH 0/2] crypto: stm32/cryp - add AEAD cipher algorithms

2018-02-15 Thread Herbert Xu
On Wed, Feb 07, 2018 at 02:08:53PM +0100, Fabien Dessenne wrote:
> This patchset depends on "crypto: engine - Permit to enqueue all async 
> requests"
> proposed by Corentin Labbe [https://lkml.org/lkml/2018/1/26/608].
> stm32-cryp uses this updated crpyto engine to handle AEAD cipher algortihms.
> 
> Fabien Dessenne (2):
>   crypto: stm32/cryp - add aes gcm / ccm support
>   crypto: stm32/cryp - add stm32mp1 support

All applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v3] crypto: s5p-sss.c: Fix kernel Oops in AES-ECB mode

2018-02-15 Thread Herbert Xu
On Wed, Feb 07, 2018 at 04:52:09PM +0100, Kamil Konieczny wrote:
> In AES-ECB mode crypt is done with key only, so any use of IV
> can cause kernel Oops. Use IV only in AES-CBC and AES-CTR.
> 
> Signed-off-by: Kamil Konieczny 
> Reported-by: Anand Moon 
> Reviewed-by: Krzysztof Kozlowski 
> Tested-by: Anand Moon 
> Cc: sta...@vger.kernel.org # can be applied after commit 8f9702aad138
> 
> ---
> version 3:
> Change commit message: drop 'Fixes: 8f9702aad138' as the error was
> introduced earlier.

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] crypto: qat - Make several functions static

2018-02-15 Thread Herbert Xu
On Tue, Feb 06, 2018 at 11:36:42PM +, Colin King wrote:
> From: Colin Ian King 
> 
> Functions qat_rsa_set_n, qat_rsa_set_e and qat_rsa_set_n are local to
> the source and do not need to be in global scope, so make them static.
> 
> Cleans up sparse warnings:
> drivers/crypto/qat/qat_common/qat_asym_algs.c:972:5: warning: symbol
> 'qat_rsa_set_n' was not declared. Should it be static?
> drivers/crypto/qat/qat_common/qat_asym_algs.c:1003:5: warning: symbol
> 'qat_rsa_set_e' was not declared. Should it be static?
> drivers/crypto/qat/qat_common/qat_asym_algs.c:1027:5: warning: symbol
> 'qat_rsa_set_d' was not declared. Should it be static?
> 
> Signed-off-by: Colin Ian King 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] crypto: s5p-sss.c: Fix kernel Oops in AES-ECB mode

2018-02-15 Thread Herbert Xu
On Mon, Feb 05, 2018 at 06:40:20PM +0100, Kamil Konieczny wrote:
> 
> In AES-ECB mode crypt is done with key only, so any use of IV
> can cause kernel Oops, as reported by Anand Moon.
> Fixed it by using IV only in AES-CBC and AES-CTR.
> 
> Signed-off-by: Kamil Konieczny 
> Reported-by: Anand Moon 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [Crypto v5 03/12] support for inline tls

2018-02-15 Thread Dave Watson
On 02/15/18 12:24 PM, Atul Gupta wrote:
> @@ -401,6 +430,15 @@ static int do_tls_setsockopt_tx(struct sock *sk, char 
> __user *optval,
>   goto out;
>   }
>  
> + rc = get_tls_offload_dev(sk);
> + if (rc) {
> + goto out;
> + } else {
> + /* Retain HW unhash for cleanup and move to SW Tx */
> + sk->sk_prot[TLS_BASE_TX].unhash =
> + sk->sk_prot[TLS_FULL_HW].unhash;

Isn't sk->sk_prot a pointer to a global shared struct here still?  It
looks like this would actually modify the global struct, and not just
for this sk.



Re: [PATCH 0/3] crypto: stm32/hash: Correction to improve robustness

2018-02-15 Thread Herbert Xu
On Mon, Jan 29, 2018 at 03:28:08PM +0100, Lionel Debieve wrote:
> From: Lionel Debieve 
> 
> Hi,
> 
> This patch serie will improve global robustness for stm32-hash driver. 
> 
> Patch #1 is fixing dma-burst issue when configuration is not set.
> Patch #2 solves issue that occurs when irq append during final req processing.
> Patch #3 is fixing an issue that have been introduced while managing padding 
> but
> breaking the padding length calculation by hardware to generate correct hash.
> 
> Regards,
> 
> Lionel Debieve (3):
>   crypto: stm32/hash: avoid error if maxburst not defined
>   crypto: stm32/hash: fix performance issues
>   crypto: stm32/hash: rework padding length

All applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 1/3] crypto: caam/qi - return -EBADMSG for ICV check failure

2018-02-15 Thread Herbert Xu
On Mon, Jan 29, 2018 at 10:38:35AM +0200, Horia Geantă wrote:
> Crypto drivers are expected to return -EBADMSG in case of
> ICV check (authentication) failure.
> 
> In this case it also makes sense to suppress the error message
> in the QI dequeue callback.
> 
> Signed-off-by: Horia Geantă 

All applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v2 0/6] crypto: engine - Permit to enqueue all async requests

2018-02-15 Thread Herbert Xu
On Fri, Jan 26, 2018 at 08:15:28PM +0100, Corentin Labbe wrote:
> Hello
> 
> The current crypto_engine support only ahash and ablkcipher request.
> My first patch which try to add skcipher was Nacked, it will add too many 
> functions
> and adding other algs(aead, asymetric_key) will make the situation worst.
> 
> This patchset remove all algs specific stuff and now only process generic 
> crypto_async_request.
> 
> The requests handler function pointer are now moved out of struct engine and
> are now stored directly in a crypto_engine_reqctx.
> 
> The original proposal of Herbert [1] cannot be done completly since the 
> crypto_engine
> could only dequeue crypto_async_request and it is impossible to access any 
> request_ctx
> without knowing the underlying request type.
> 
> So I do something near that was requested: adding crypto_engine_reqctx in TFM 
> context.
> Note that the current implementation expect that crypto_engine_reqctx
> is the first member of the context.
> 
> The first patch is a try to document the crypto engine API.
> The second patch convert the crypto engine with the new way,
> while the following patchs convert the 4 existing users of crypto_engine.
> Note that this split break bisection, so probably the final commit will be 
> all merged.
> 
> Appart from virtio, all 4 latest patch were compile tested only.
> But the crypto engine is tested with my new sun8i-ce driver.
> 
> Regards
> 
> [1] https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1474434.html
> 
> Changes since V1:
> - renamed crypto_engine_reqctx to crypto_engine_ctx
> - indentation fix in function parameter
> - do not export crypto_transfer_request
> - Add aead support
> - crypto_finalize_request is now static
> 
> Changes since RFC:
> - Added a documentation patch
> - Added patch for stm32-cryp
> - Changed parameter of all crypto_engine_op functions from
>   crypto_async_request to void*
> - Reintroduced crypto_transfer_xxx_request_to_engine functions
> 
> Corentin Labbe (6):
>   Documentation: crypto: document crypto engine API
>   crypto: engine - Permit to enqueue all async requests
>   crypto: omap: convert to new crypto engine API
>   crypto: virtio: convert to new crypto engine API
>   crypto: stm32-hash: convert to the new crypto engine API
>   crypto: stm32-cryp: convert to the new crypto engine API

All applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] crypto: sha256-mb - remove HASH_FIRST flag

2018-02-15 Thread Herbert Xu
On Wed, Jan 24, 2018 at 07:10:15PM -0800, Eric Biggers wrote:
> From: Eric Biggers 
> 
> The HASH_FIRST flag is never set.  Remove it.
> 
> Signed-off-by: Eric Biggers 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[PATCH v2] crypto: add zBeWalgo compression for zram

2018-02-15 Thread Benjamin Warnke
Currently ZRAM uses compression-algorithms from the crypto-api. ZRAM
compresses each page individually. As a result the compression algorithm is
forced to use a very small sliding window. None of the available compression
algorithms is designed to achieve high compression ratios with small inputs.

This patch adds a new compression algorithm 'zBeWalgo' to the crypto api. This
algorithm focusses on increasing the capacity of the compressed block-device
created by ZRAM. The choice of compression algorithms is always a tradeoff
between speed and compression ratio.

If faster algorithms like 'lz4' are chosen, the compression ratio is often
lower than that of zBeWalgo, as shown in the following benchmarks. Due to
the lower compression ratio, ZRAM needs to fall back to backing_devices
earlier. If backing_devices are required, the effective speed of ZRAM is a
weighted average of de/compression time and writing/reading from the
backing_device. This should be considered when comparing the speeds in the
benchmarks.

There are different kinds of backing_devices, each with its own drawbacks.
1. HDDs: This kind of backing device is very slow. If the compression ratio 
of an algorithm is much lower than the ratio of zBeWalgo, it might be faster
to use zBeWalgo instead.
2. SSDs: I tested a swap partition on my NVMe SSD. The speed is even higher
than zram with lz4, but after about 5 minutes the SSD is blocking all
read/write requests due to overheating. This is definitely not an option.



zBeWalgo is a completely new algorithm. It has not been published anywhere
else yet, so googling it will not show any results. The following section
describes how the algorithm works.

zBeWalgo itself is a container compression algorithm which can execute
multiple different compression and transformation algorithms one after
another. The execution of different compression algorithms in sequence is
called a 'combination' in this description and in the code. In addition to
executing combinations of algorithms, zBeWalgo can try different combinations
on the same input. This allows high compression ratios on completely
different datasets, each of which would otherwise require its own algorithm.
Executing all known combinations on each input page would be very slow.
Therefore the data is first compressed with the combination that was already
successful on the last input page. If the compressed size of the current page
is similar to that of the last page, the compressed data is returned
immediately without even trying the other combinations. Even though there is
no guarantee that consecutive calls to the algorithm belong to each other,
the speed improvement is obvious.
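The page-to-page heuristic described above can be sketched in userspace
Python (illustrative only; the callable interface and the 2x "similar size"
factor are assumptions for the sketch, not taken from the patch):

```python
# Sketch of the combination-selection heuristic described above
# (illustrative userspace Python, not the kernel code). A "combination"
# is modelled as a callable returning compressed bytes or None.

ZSMALLOC_THRESHOLD = 3264  # largest useful zsmalloc size-class (see text)

def make_compressor(combinations, threshold=ZSMALLOC_THRESHOLD):
    state = {"combo": combinations[0], "size": None}

    def acceptable(out):
        return out is not None and len(out) < threshold

    def compress(page):
        # Fast path: retry the combination that worked on the last page.
        out = state["combo"](page)
        if acceptable(out) and (state["size"] is None
                                or len(out) <= 2 * state["size"]):
            state["size"] = len(out)
            return out
        # Slow path: try every known combination in turn.
        for combo in combinations:
            out = combo(page)
            if acceptable(out):
                state["combo"], state["size"] = combo, len(out)
                return out
        return None  # incompressible: ZRAM stores the page as-is

    return compress
```

A caller would register its combinations once and then feed pages through
`compress()`; only a degraded result on the fast path triggers the full
search again.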

ZRAM uses zsmalloc for the management of the compressed pages. The largest
size-class in zsmalloc is 3264 bytes. If the compressed data is larger than
that threshold, ZRAM ignores the compression and writes the uncompressed page
instead. As a consequence it is useless to continue compressing once the
algorithm detects that the data cannot be compressed using the current
combination. The threshold for aborting compression can be changed via sysfs at
any time, even while the algorithm is in use. If a combination fails to
compress the data, zBeWalgo tries the next combination. If no combination is
able to reduce the data in size, zBeWalgo returns a negative value.

Each combination consists of up to 7 compression and transformation steps.
Combinations can be added and removed at any time via sysfs. Already compressed
data can always be decompressed, even if the combination used to produce it
does not exist anymore. Technically the user could add up to 256 combinations
concurrently, but that would be very time consuming if the data cannot be
compressed.

To be able to build combinations and call different algorithms, all these
algorithms implement the same interface. This enables the user to specify
additional combinations while ZRAM is running.

Within the combinations many different algorithms can be used. Some of those
algorithms are published. This patch adds the following algorithms to be used
within the combinations:
- bwt: The Burrows-Wheeler-Transformation was published by 'M. Burrows' and
   'D. J. Wheeler' in 1994. This implementation uses counting sort for
   sorting the data. Their original paper is available online at:
   http://www.hpl.hp.com/techreports/Compaq-DEC/SRC-RR-124.pdf
- mtf: The Move-To-Front algorithm as described by 'M. Burrows' and
   'D. J. Wheeler' in the same paper as bwt.
- jbe: j-bit-encoding as proposed by 'I Made Agus Dwi Suarjaya' in 2012.
   https://arxiv.org/pdf/1209.1045.pdf
- jbe2: A minor modification of jbe. Swapping groups of 4 bits in consecutive
   bytes can increase the compression ratio, if for example the first
   4 bits of each byte are zero. If jbe2 is called after mtf, this
   happens often.
- rle: Run Length Encoding
- huffman: Huffman 
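Of the transformation steps listed above, move-to-front is the simplest to
illustrate; a minimal userspace sketch (not the kernel implementation) is:

```python
# Minimal move-to-front (MTF) transform, as described by Burrows and
# Wheeler in the paper cited above (userspace sketch, not the kernel
# implementation). After MTF, runs of one symbol become runs of zeros,
# which is what makes a following rle/jbe2 stage effective.

def mtf_encode(data: bytes) -> bytes:
    alphabet = list(range(256))
    out = bytearray()
    for b in data:
        idx = alphabet.index(b)   # current position of the symbol
        out.append(idx)
        alphabet.pop(idx)         # move the symbol to the front
        alphabet.insert(0, b)
    return bytes(out)

def mtf_decode(encoded: bytes) -> bytes:
    alphabet = list(range(256))
    out = bytearray()
    for idx in encoded:
        b = alphabet[idx]
        out.append(b)
        alphabet.pop(idx)
        alphabet.insert(0, b)
    return bytes(out)
```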

Re: [PATCH v3 1/4] crypto: AF_ALG AIO - lock context IV

2018-02-15 Thread Jeffrey Walton
On Thu, Feb 15, 2018 at 8:04 AM, Stephan Mueller  wrote:
> Am Donnerstag, 15. Februar 2018, 13:45:53 CET schrieb Harsh Jain:
>
>> > Could you please elaborate what you mean with "partial tag" support?
>>
>> Here is the catch, Calculation of tag depends on total payload length
>> atleast for shaX, gcm,ccm mode on which I have worked.
>>
>> If we take an example of shaX. It appends 1 special block at the end of user
>> data which includes total input length in bit. Refer
>> "sha1_base_do_finalize" Suppose we have 32 byte and we break this in 2 IOCB
>> of 16 bytes each. Expected result : 32 encrypted bytes + sha auth tag
>> considering length 32 bytes. What we will  get : 16 bytes + sha auth tag
>> considering length 16 bytes + 16 encrypted bytes + another sha tag
>> considering 16 bytes.
>
> As AF_ALG for AEAD is implemented, there is no stream support where the hash
> is calculated at the end. This is even not supported in the current AEAD API
> of the kernel crypto API as far as I see. The only "stream-like" support is
> that you can invoke multiple separate sendmsg calls to provide the input data
> for the AEAD. But once you call recvmsg, the ciphertext and the tag is
> calculated and thus the recvmsg is akin to a hash_final operation.

If you follow Bernstein's protocol design philosophy, then messages
should be no larger than about 4K in size. From
https://nacl.cr.yp.to/valid.html:

This is one of several reasons [1] that callers should (1) split
all data into packets sent through the network; (2) put a
small global limit on packet length; and (3) separately
encrypt and authenticate each packet.

With the [1] link being
https://groups.google.com/forum/#!original/boring-crypto/BpUmNMXKMYQ/EEwAIeQdjacJ
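The advice quoted above — bound the packet size and authenticate each packet
separately — can be sketched as follows (HMAC-SHA256 is a stand-in for the
per-packet authenticator; a real protocol would use an AEAD per packet, and
the 4096-byte limit follows the "about 4K" figure in the text):

```python
import hashlib
import hmac

# Sketch of the per-packet approach: split the stream into bounded
# packets and authenticate each one separately.

MAX_PACKET = 4096  # small global limit on packet length

def authenticate_stream(key: bytes, data: bytes):
    packets = []
    for seq, off in enumerate(range(0, len(data), MAX_PACKET)):
        chunk = data[off:off + MAX_PACKET]
        # Bind the sequence number into the tag so identical chunks at
        # different positions get different tags (prevents reordering).
        tag = hmac.new(key, seq.to_bytes(8, "big") + chunk,
                       hashlib.sha256).digest()
        packets.append((seq, chunk, tag))
    return packets
```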

Jeff


[PATCH Resend 5/5] hwrng: stm32 - rework read timeout calculation

2018-02-15 Thread Lionel Debieve
Increase the timeout delay to support the longer timings linked
to rng initialization. The measurement is now based on a timer
instead of counting instructions per iteration, which is not
reliable on all targets.

Signed-off-by: Lionel Debieve 
---
 drivers/char/hw_random/stm32-rng.c | 25 ++---
 1 file changed, 10 insertions(+), 15 deletions(-)

diff --git a/drivers/char/hw_random/stm32-rng.c 
b/drivers/char/hw_random/stm32-rng.c
index 709a8d061be3..0d2328da3b76 100644
--- a/drivers/char/hw_random/stm32-rng.c
+++ b/drivers/char/hw_random/stm32-rng.c
@@ -16,6 +16,7 @@
 #include 
 #include 
 #include 
+#include <linux/iopoll.h>
 #include 
 #include 
 #include 
@@ -35,15 +36,6 @@
 
 #define RNG_DR 0x08
 
-/*
- * It takes 40 cycles @ 48MHz to generate each random number (e.g. <1us).
- * At the time of writing STM32 parts max out at ~200MHz meaning a timeout
- * of 500 leaves us a very comfortable margin for error. The loop to which
- * the timeout applies takes at least 4 instructions per iteration so the
- * timeout is enough to take us up to multi-GHz parts!
- */
-#define RNG_TIMEOUT 500
-
 struct stm32_rng_private {
struct hwrng rng;
void __iomem *base;
@@ -63,13 +55,16 @@ static int stm32_rng_read(struct hwrng *rng, void *data, 
size_t max, bool wait)
 
while (max > sizeof(u32)) {
sr = readl_relaxed(priv->base + RNG_SR);
+   /* Manage timeout which is based on timer and take */
+   /* care of initial delay time when enabling rng */
if (!sr && wait) {
-   unsigned int timeout = RNG_TIMEOUT;
-
-   do {
-   cpu_relax();
-   sr = readl_relaxed(priv->base + RNG_SR);
-   } while (!sr && --timeout);
+   retval = readl_relaxed_poll_timeout_atomic(priv->base
+  + RNG_SR,
+  sr, sr,
+  10, 50000);
+   if (retval)
+   dev_err((struct device *)priv->rng.priv,
+   "%s: timeout %x!\n", __func__, sr);
}
 
/* If error detected or data not ready... */
-- 
2.15.1
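For reference, the behaviour of the timer-based poll introduced by this
patch can be modelled in userspace as below (a sketch only; the 10 µs poll
interval and 50 ms timeout are assumptions mirroring typical
readl_relaxed_poll_timeout_atomic() arguments, not values from the driver):

```python
import time

# Userspace model of a timer-based poll: wait for a status to become
# non-zero, bounded by wall-clock time rather than an iteration count,
# so the timeout is independent of CPU speed.

def poll_until(read_status, sleep_us=10, timeout_us=50_000):
    deadline = time.monotonic() + timeout_us / 1e6
    while True:
        sr = read_status()
        if sr:
            return sr, 0        # condition met
        if time.monotonic() >= deadline:
            return sr, -1       # timed out (like -ETIMEDOUT)
        time.sleep(sleep_us / 1e6)
```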



[PATCH Resend 2/5] dt-bindings: rng: add reset node for stm32

2018-02-15 Thread Lionel Debieve
Adding optional resets property for rng.

Signed-off-by: Lionel Debieve 
---
 Documentation/devicetree/bindings/rng/st,stm32-rng.txt | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/Documentation/devicetree/bindings/rng/st,stm32-rng.txt 
b/Documentation/devicetree/bindings/rng/st,stm32-rng.txt
index 47f04176f93b..cb7ca78135ff 100644
--- a/Documentation/devicetree/bindings/rng/st,stm32-rng.txt
+++ b/Documentation/devicetree/bindings/rng/st,stm32-rng.txt
@@ -11,6 +11,9 @@ Required properties:
 - interrupts : The designated IRQ line for the RNG
 - clocks : The clock needed to enable the RNG
 
+Optional properties:
+- resets : The reset to properly start RNG
+
 Example:
 
rng: rng@50060800 {
-- 
2.15.1



[PATCH Resend 3/5] hwrng: stm32 - allow disable clock error detection

2018-02-15 Thread Lionel Debieve
Add a new property that allows disabling the clock error
detection, which is required when the selected clock source
is out of specification (which is not mandatory).

Signed-off-by: Lionel Debieve 
---
 drivers/char/hw_random/stm32-rng.c | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/char/hw_random/stm32-rng.c 
b/drivers/char/hw_random/stm32-rng.c
index 83c695938a2d..709a8d061be3 100644
--- a/drivers/char/hw_random/stm32-rng.c
+++ b/drivers/char/hw_random/stm32-rng.c
@@ -26,6 +26,7 @@
 
 #define RNG_CR 0x00
 #define RNG_CR_RNGEN BIT(2)
+#define RNG_CR_CED BIT(5)
 
 #define RNG_SR 0x04
 #define RNG_SR_SEIS BIT(6)
@@ -48,6 +49,7 @@ struct stm32_rng_private {
void __iomem *base;
struct clk *clk;
struct reset_control *rst;
+   bool ced;
 };
 
 static int stm32_rng_read(struct hwrng *rng, void *data, size_t max, bool wait)
@@ -101,7 +103,11 @@ static int stm32_rng_init(struct hwrng *rng)
if (err)
return err;
 
-   writel_relaxed(RNG_CR_RNGEN, priv->base + RNG_CR);
+   if (priv->ced)
+   writel_relaxed(RNG_CR_RNGEN, priv->base + RNG_CR);
+   else
+   writel_relaxed(RNG_CR_RNGEN | RNG_CR_CED,
+  priv->base + RNG_CR);
 
/* clear error indicators */
writel_relaxed(0, priv->base + RNG_SR);
@@ -149,6 +155,8 @@ static int stm32_rng_probe(struct platform_device *ofdev)
reset_control_deassert(priv->rst);
}
 
+   priv->ced = of_property_read_bool(np, "clock-error-detect");
+
dev_set_drvdata(dev, priv);
 
priv->rng.name = dev_driver_string(dev),
-- 
2.15.1



Re: [PATCH v3 1/4] crypto: AF_ALG AIO - lock context IV

2018-02-15 Thread Stephan Mueller
Am Donnerstag, 15. Februar 2018, 13:45:53 CET schrieb Harsh Jain:

Hi Harsh,

> > Could you please elaborate what you mean with "partial tag" support?
> 
> Here is the catch, Calculation of tag depends on total payload length
> atleast for shaX, gcm,ccm mode on which I have worked.
> 
> If we take an example of shaX. It appends 1 special block at the end of user
> data which includes total input length in bit. Refer
> "sha1_base_do_finalize" Suppose we have 32 byte and we break this in 2 IOCB
> of 16 bytes each. Expected result : 32 encrypted bytes + sha auth tag
> considering length 32 bytes. What we will  get : 16 bytes + sha auth tag
> considering length 16 bytes + 16 encrypted bytes + another sha tag
> considering 16 bytes.

As AF_ALG for AEAD is implemented, there is no stream support where the hash 
is calculated at the end. This is not even supported in the current AEAD API 
of the kernel crypto API as far as I can see. The only "stream-like" support is 
that you can invoke multiple separate sendmsg calls to provide the input data 
for the AEAD. But once you call recvmsg, the ciphertext and the tag are 
calculated, and thus the recvmsg is akin to a hash_final operation.

Complete parallel execution of multiple independent AEAD cipher operations 
with AIO is possible using the inline IV as suggested in the patch set.

The locking of the ctx->iv is only there to keep a driver from whacking the 
IV buffer. However, if others also agree that the ctx->ivlock should not be 
taken in the AEAD code path, because a potential whacking of the IV buffer by 
a driver does not need to be prevented, I will change the patch.

Ciao
Stephan




[PATCH Resend 4/5] dt-bindings: rng: add clock detection error for stm32

2018-02-15 Thread Lionel Debieve
Add an optional property to enable the clock error detection
on the rng block. It is used to allow a slow clock source which
still gives correct entropy for the rng.

Signed-off-by: Lionel Debieve 
---
 Documentation/devicetree/bindings/rng/st,stm32-rng.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Documentation/devicetree/bindings/rng/st,stm32-rng.txt 
b/Documentation/devicetree/bindings/rng/st,stm32-rng.txt
index cb7ca78135ff..1dfa7d51e006 100644
--- a/Documentation/devicetree/bindings/rng/st,stm32-rng.txt
+++ b/Documentation/devicetree/bindings/rng/st,stm32-rng.txt
@@ -13,6 +13,7 @@ Required properties:
 
 Optional properties:
 - resets : The reset to properly start RNG
+- clock-error-detect : Enable the clock detection management
 
 Example:
 
-- 
2.15.1



[PATCH Resend 1/5] hwrng: stm32 - add reset during probe

2018-02-15 Thread Lionel Debieve
Avoid issues when probing the RNG without a
reset if a bad status has been detected previously.
Signed-off-by: Lionel Debieve 
---
 drivers/char/hw_random/stm32-rng.c | 9 +
 1 file changed, 9 insertions(+)

diff --git a/drivers/char/hw_random/stm32-rng.c 
b/drivers/char/hw_random/stm32-rng.c
index 63d84e6f1891..83c695938a2d 100644
--- a/drivers/char/hw_random/stm32-rng.c
+++ b/drivers/char/hw_random/stm32-rng.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include <linux/reset.h>
 #include 
 
 #define RNG_CR 0x00
@@ -46,6 +47,7 @@ struct stm32_rng_private {
struct hwrng rng;
void __iomem *base;
struct clk *clk;
+   struct reset_control *rst;
 };
 
 static int stm32_rng_read(struct hwrng *rng, void *data, size_t max, bool wait)
@@ -140,6 +142,13 @@ static int stm32_rng_probe(struct platform_device *ofdev)
if (IS_ERR(priv->clk))
return PTR_ERR(priv->clk);
 
+   priv->rst = devm_reset_control_get(&ofdev->dev, NULL);
+   if (!IS_ERR(priv->rst)) {
+   reset_control_assert(priv->rst);
+   udelay(2);
+   reset_control_deassert(priv->rst);
+   }
+
dev_set_drvdata(dev, priv);
 
priv->rng.name = dev_driver_string(dev),
-- 
2.15.1



[PATCH Resend 0/5] hwrng: stm32 - Improvement for stm32-rng

2018-02-15 Thread Lionel Debieve
This set of patches adds extended functionality to the stm32 rng
driver.
Patch #1 includes a reset during probe to avoid any error status
which can occur during the boot process, keeping the rng integrity safe.

Patch #3 adds a new property to manage the clock error detection
feature, which can be disabled on specific targets.

Patch #5 reworks the read timeout calculation, which was
previously based on loop iterations and is now based on a
timer.

Lionel Debieve (5):
  hwrng: stm32 - add reset during probe
  dt-bindings: rng: add reset node for stm32
  hwrng: stm32 - allow disable clock error detection
  dt-bindings: rng: add clock detection error for stm32
  hwrng: stm32 - rework read timeout calculation

 .../devicetree/bindings/rng/st,stm32-rng.txt   |  4 ++
 drivers/char/hw_random/stm32-rng.c | 44 ++
 2 files changed, 32 insertions(+), 16 deletions(-)

-- 
2.15.1



Re: [PATCH v3 1/4] crypto: AF_ALG AIO - lock context IV

2018-02-15 Thread Harsh Jain


On 15-02-2018 17:15, Stephan Mueller wrote:
> Am Donnerstag, 15. Februar 2018, 12:38:12 CET schrieb Harsh Jain:
>
> Hi Harsh,
>
>> On 15-02-2018 12:47, Stephan Mueller wrote:
>>> Am Donnerstag, 15. Februar 2018, 08:03:20 CET schrieb Harsh Jain:
>>>
>>> Hi Harsh,
>>>
 Even after guarantee of serialization, In the end we will get wrong
 result
 as mentioned above. which destination side cannot decrypt it. What I feel
 is scenario of sending 2 of more IOCB in case of AEAD itself is wrong.
>>> Without the inline IV handling, I would concur.
>> Even with Inline IV, We will have 2 Auth Tag. can we authenticate the data
>> with 2 Auth Tags?
> The AAD and the tag are both sent to the kernel like in the inline IV case as 
> part of the data via sendmsg.
>
> Thus, when you use inline IV support, an entire self-contained AEAD cipher 
> operation can be defined with one IOCB. Thus, if you have multiple IOCBs, 
> inline IV allow fully parallel execution of different AEAD requests.
>
> See the following kernel comment:
>
> /*
>  * Encryption operation - The in-place cipher operation is
>  * achieved by the following operation:
>  *
>  * TX SGL: AAD || PT
>  *  |  |
>  *  | copy |
>  *  v  v
>  * RX SGL: AAD || PT || Tag
>
> /*
>  * Decryption operation - To achieve an in-place cipher
>  * operation, the following  SGL structure is used:
>  *
>  * TX SGL: AAD || CT || Tag
>  *  |  | ^
>  *  | copy | | Create SGL link.
>  *  v  v |
>  * RX SGL: AAD || CT +
>  */
>
> Note, the TX SGL is what the caller submitted via sendmsg and the RX SGL is 
> the data the caller obtains via recvmsg.
>
> Hence, in inline IV support, the caller sends the following:
>
> encryption: IV || AAD || PT
>
> decryption: IV || AAD || CT || Tag
>
 We
 should not allow this type of requests for AEAD.
>>> "Not allow" as in "technically block"? As a user would only shoot itself
>>> when he does that not knowing the consequences, I am not in favor of such
>>> an artificial block.
>> Agreed, It may not be right thing to do, but if we allowed it, What he will
>> do with Auth( each processed with final Block) tags received in each
>> request.
> Each tag is returned as part of the recvmsg data. Thus, AEAD cipher 
> operations 
> can commence fully parallel if inline IV handling is used.
>> I personally feels AEAD IV serialization logic is incomplete without partial
>> tag support.
> Could you please elaborate what you mean with "partial tag" support?
Here is the catch: the calculation of the tag depends on the total payload 
length, at least for the shaX, gcm and ccm modes on which I have worked.

If we take shaX as an example: it appends one special block at the end of the 
user data which includes the total input length in bits (refer to 
"sha1_base_do_finalize"). Suppose we have 32 bytes and we break this into 2 
IOCBs of 16 bytes each.
Expected result: 32 encrypted bytes + a sha auth tag computed over a length of 
32 bytes.
What we will get: 16 bytes + a sha auth tag computed over a length of 16 bytes 
+ 16 encrypted bytes + another sha tag computed over 16 bytes.
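The effect described here is easy to reproduce in userspace with any SHA
implementation (SHA-1 via Python's hashlib stands in for the kernel shash
in this sketch):

```python
import hashlib

# Splitting one 32-byte message into two independently finalized hash
# operations cannot reproduce the tag of the whole message, because each
# finalization appends its own length block.

msg = b"A" * 32

whole = hashlib.sha1(msg).hexdigest()

# Two separate IOCBs, each hashed independently: two different tags,
# each finalized with a 16-byte total length of its own.
half1 = hashlib.sha1(msg[:16]).hexdigest()
half2 = hashlib.sha1(msg[16:]).hexdigest()
assert whole not in (half1, half2)

# A single streaming context (update/update/final) does match, since
# only the final block carries the total length.
ctx = hashlib.sha1()
ctx.update(msg[:16])
ctx.update(msg[16:])
assert ctx.hexdigest() == whole
```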



>
> Ciao
> Stephan
>
>



Re: [Crypto v4 12/12] Makefile Kconfig

2018-02-15 Thread kbuild test robot
Hi Atul,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on cryptodev/master]
[cannot apply to net/master net-next/master v4.16-rc1 next-20180214]
[if your patch is applied to the wrong git tree, please drop us a note to help 
improve the system]

url:
https://github.com/0day-ci/linux/commits/Atul-Gupta/Chelsio-Inline-TLS/20180215-072600
base:   
https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
reproduce:
# apt-get install sparse
make ARCH=x86_64 allmodconfig
make C=1 CF=-D__CHECK_ENDIAN__


sparse warnings: (new ones prefixed by >>)

>> drivers/crypto/chelsio/chtls/chtls_main.c:139:19: sparse: symbol 
>> 'chtls_netdev' was not declared. Should it be static?
>> drivers/crypto/chelsio/chtls/chtls_main.c:152:6: sparse: symbol 
>> 'chtls_update_prot' was not declared. Should it be static?
>> drivers/crypto/chelsio/chtls/chtls_main.c:157:5: sparse: symbol 
>> 'chtls_inline_feature' was not declared. Should it be static?
>> drivers/crypto/chelsio/chtls/chtls_main.c:171:5: sparse: symbol 
>> 'chtls_create_hash' was not declared. Should it be static?
>> drivers/crypto/chelsio/chtls/chtls_main.c:178:6: sparse: symbol 
>> 'chtls_destroy_hash' was not declared. Should it be static?
--
>> drivers/crypto/chelsio/chtls/chtls_cm.c:373:27: sparse: incorrect type in 
>> assignment (different address spaces) @@ expected struct socket_wq @@ got @@
   drivers/crypto/chelsio/chtls/chtls_cm.c:373:27: expected struct socket_wq
   drivers/crypto/chelsio/chtls/chtls_cm.c:373:27: got struct socket_wq
>> drivers/crypto/chelsio/chtls/chtls_cm.c:395:23: sparse: incompatible types 
>> in comparison expression (different address spaces)
   drivers/crypto/chelsio/chtls/chtls_cm.c:714:6: sparse: symbol 'free_atid' 
was not declared. Should it be static?
>> drivers/crypto/chelsio/chtls/chtls_cm.h:150:35: sparse: incorrect type in 
>> argument 1 (different address spaces) @@ expected struct socket_wq @@ got 
>> struct socket_wq struct socket_wq @@
   drivers/crypto/chelsio/chtls/chtls_cm.h:150:35: expected struct socket_wq
   drivers/crypto/chelsio/chtls/chtls_cm.h:150:35: got struct socket_wq
>> drivers/crypto/chelsio/chtls/chtls_cm.c:1165:37: sparse: incorrect type in 
>> argument 2 (different base types) @@ expected unsigned int local_ip @@ got 
>> ed int local_ip @@
   drivers/crypto/chelsio/chtls/chtls_cm.c:1165:37: expected unsigned int 
local_ip
   drivers/crypto/chelsio/chtls/chtls_cm.c:1165:37: got restricted __be32 daddr
>> drivers/crypto/chelsio/chtls/chtls_cm.c:1165:49: sparse: incorrect type in 
>> argument 3 (different base types) @@ expected unsigned int peer_ip @@ got ed 
>> int peer_ip @@
   drivers/crypto/chelsio/chtls/chtls_cm.c:1165:49: expected unsigned int 
peer_ip
   drivers/crypto/chelsio/chtls/chtls_cm.c:1165:49: got restricted __be32 saddr
>> drivers/crypto/chelsio/chtls/chtls_cm.h:173:37: sparse: incorrect type in 
>> assignment (different base types) @@ expected restricted __be32 
>> skc_rcv_saddr @@ got unsignrestricted __be32 skc_rcv_saddr @@
   drivers/crypto/chelsio/chtls/chtls_cm.h:173:37: expected restricted __be32 
skc_rcv_saddr
   drivers/crypto/chelsio/chtls/chtls_cm.h:173:37: got unsigned int local_ip
>> drivers/crypto/chelsio/chtls/chtls_cm.h:174:37: sparse: incorrect type in 
>> assignment (different base types) @@ expected restricted __be32 skc_daddr @@ 
>> got unsignrestricted __be32 skc_daddr @@
   drivers/crypto/chelsio/chtls/chtls_cm.h:174:37: expected restricted __be32 
skc_daddr
   drivers/crypto/chelsio/chtls/chtls_cm.h:174:37: got unsigned int peer_ip
>> drivers/crypto/chelsio/chtls/chtls_cm.c:1243:23: sparse: incorrect type in 
>> argument 1 (different address spaces) @@ expected struct dst_entry @@ got 
>> struct dst_entry struct dst_entry @@
   drivers/crypto/chelsio/chtls/chtls_cm.c:1243:23: expected struct dst_entry
   drivers/crypto/chelsio/chtls/chtls_cm.c:1243:23: got struct dst_entry
>> drivers/crypto/chelsio/chtls/chtls_cm.c:1539:24: sparse: cast to restricted 
>> __be16
>> drivers/crypto/chelsio/chtls/chtls_cm.c:1539:24: sparse: cast to restricted 
>> __be16
>> drivers/crypto/chelsio/chtls/chtls_cm.c:1539:24: sparse: cast to restricted 
>> __be16
>> drivers/crypto/chelsio/chtls/chtls_cm.c:1539:24: sparse: cast to restricted 
>> __be16
   drivers/crypto/chelsio/chtls/chtls_cm.c:1540:31: sparse: cast to restricted 
__be16
   drivers/crypto/chelsio/chtls/chtls_cm.c:1540:31: sparse: cast to restricted 
__be16
   drivers/crypto/chelsio/chtls/chtls_cm.c:1540:31: sparse: cast to restricted 
__be16
   drivers/crypto/chelsio/chtls/chtls_cm.c:1540:31: sparse: cast to restricted 
__be16
   drivers/crypto/chelsio/chtls/chtls_cm.c:1664:31: sparse: incorrect type in 
argument 1 (different address spac

[RFC PATCH] chtls_netdev() can be static

2018-02-15 Thread kbuild test robot

Fixes: 5995a3b59239 ("Makefile Kconfig")
Signed-off-by: Fengguang Wu 
---
 chtls_main.c |   12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/chelsio/chtls/chtls_main.c 
b/drivers/crypto/chelsio/chtls/chtls_main.c
index 58efb4a..3452f44 100644
--- a/drivers/crypto/chelsio/chtls/chtls_main.c
+++ b/drivers/crypto/chelsio/chtls/chtls_main.c
@@ -136,8 +136,8 @@ static int chtls_stop_listen(struct sock *sk)
return 0;
 }
 
-struct net_device *chtls_netdev(struct tls_device *dev,
-   struct net_device *netdev)
+static struct net_device *chtls_netdev(struct tls_device *dev,
+  struct net_device *netdev)
 {
struct chtls_dev *cdev = to_chtls_dev(dev);
int i;
@@ -149,12 +149,12 @@ struct net_device *chtls_netdev(struct tls_device *dev,
return cdev->ports[i];
 }
 
-void chtls_update_prot(struct tls_device *dev, struct sock *sk)
+static void chtls_update_prot(struct tls_device *dev, struct sock *sk)
 {
sk->sk_prot = &chtls_base_prot;
 }
 
-int chtls_inline_feature(struct tls_device *dev)
+static int chtls_inline_feature(struct tls_device *dev)
 {
struct chtls_dev *cdev = to_chtls_dev(dev);
struct net_device *netdev;
@@ -168,14 +168,14 @@ int chtls_inline_feature(struct tls_device *dev)
return 1;
 }
 
-int chtls_create_hash(struct tls_device *dev, struct sock *sk)
+static int chtls_create_hash(struct tls_device *dev, struct sock *sk)
 {
if (sk->sk_state == TCP_LISTEN)
return chtls_start_listen(sk);
return 0;
 }
 
-void chtls_destroy_hash(struct tls_device *dev, struct sock *sk)
+static void chtls_destroy_hash(struct tls_device *dev, struct sock *sk)
 {
if (sk->sk_state == TCP_LISTEN)
chtls_stop_listen(sk);


[PATCH] crypto: atmel: Delete error messages for a failed memory allocation in six functions

2018-02-15 Thread SF Markus Elfring
From: Markus Elfring 
Date: Thu, 15 Feb 2018 11:38:30 +0100

Omit extra messages for a memory allocation failure in these functions.

This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring 
---
 drivers/crypto/atmel-aes.c  | 6 +-
 drivers/crypto/atmel-sha.c  | 9 ++---
 drivers/crypto/atmel-tdes.c | 9 ++---
 3 files changed, 5 insertions(+), 19 deletions(-)

diff --git a/drivers/crypto/atmel-aes.c b/drivers/crypto/atmel-aes.c
index 691c6465b71e..6fb24fc94b1f 100644
--- a/drivers/crypto/atmel-aes.c
+++ b/drivers/crypto/atmel-aes.c
@@ -2602,16 +2602,13 @@ static struct crypto_platform_data *atmel_aes_of_init(struct platform_device *pdev)
}
 
pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
-   if (!pdata) {
-   dev_err(&pdev->dev, "could not allocate memory for pdata\n");
+   if (!pdata)
return ERR_PTR(-ENOMEM);
-   }
 
pdata->dma_slave = devm_kzalloc(&pdev->dev,
sizeof(*(pdata->dma_slave)),
GFP_KERNEL);
if (!pdata->dma_slave) {
-   dev_err(&pdev->dev, "could not allocate memory for dma_slave\n");
devm_kfree(&pdev->dev, pdata);
return ERR_PTR(-ENOMEM);
}
@@ -2649,7 +2646,6 @@ static int atmel_aes_probe(struct platform_device *pdev)
 
aes_dd = devm_kzalloc(&pdev->dev, sizeof(*aes_dd), GFP_KERNEL);
if (aes_dd == NULL) {
-   dev_err(dev, "unable to alloc data struct.\n");
err = -ENOMEM;
goto aes_dd_err;
}
diff --git a/drivers/crypto/atmel-sha.c b/drivers/crypto/atmel-sha.c
index 8874aa5ca0f7..4d43081120db 100644
--- a/drivers/crypto/atmel-sha.c
+++ b/drivers/crypto/atmel-sha.c
@@ -2726,18 +2726,14 @@ static struct crypto_platform_data *atmel_sha_of_init(struct platform_device *pdev)
}
 
pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
-   if (!pdata) {
-   dev_err(&pdev->dev, "could not allocate memory for pdata\n");
+   if (!pdata)
return ERR_PTR(-ENOMEM);
-   }
 
pdata->dma_slave = devm_kzalloc(&pdev->dev,
sizeof(*(pdata->dma_slave)),
GFP_KERNEL);
-   if (!pdata->dma_slave) {
-   dev_err(&pdev->dev, "could not allocate memory for dma_slave\n");
+   if (!pdata->dma_slave)
return ERR_PTR(-ENOMEM);
-   }
 
return pdata;
 }
@@ -2758,7 +2754,6 @@ static int atmel_sha_probe(struct platform_device *pdev)
 
sha_dd = devm_kzalloc(&pdev->dev, sizeof(*sha_dd), GFP_KERNEL);
if (sha_dd == NULL) {
-   dev_err(dev, "unable to alloc data struct.\n");
err = -ENOMEM;
goto sha_dd_err;
}
diff --git a/drivers/crypto/atmel-tdes.c b/drivers/crypto/atmel-tdes.c
index 592124f8382b..97b0423efa7f 100644
--- a/drivers/crypto/atmel-tdes.c
+++ b/drivers/crypto/atmel-tdes.c
@@ -1312,18 +1312,14 @@ static struct crypto_platform_data *atmel_tdes_of_init(struct platform_device *pdev)
}
 
pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
-   if (!pdata) {
-   dev_err(&pdev->dev, "could not allocate memory for pdata\n");
+   if (!pdata)
return ERR_PTR(-ENOMEM);
-   }
 
pdata->dma_slave = devm_kzalloc(&pdev->dev,
sizeof(*(pdata->dma_slave)),
GFP_KERNEL);
-   if (!pdata->dma_slave) {
-   dev_err(&pdev->dev, "could not allocate memory for dma_slave\n");
+   if (!pdata->dma_slave)
return ERR_PTR(-ENOMEM);
-   }
 
return pdata;
 }
@@ -1344,7 +1340,6 @@ static int atmel_tdes_probe(struct platform_device *pdev)
 
tdes_dd = devm_kmalloc(&pdev->dev, sizeof(*tdes_dd), GFP_KERNEL);
if (tdes_dd == NULL) {
-   dev_err(dev, "unable to alloc data struct.\n");
err = -ENOMEM;
goto tdes_dd_err;
}
-- 
2.16.1



Re: [PATCH v3 1/4] crypto: AF_ALG AIO - lock context IV

2018-02-15 Thread Stephan Mueller
Am Donnerstag, 15. Februar 2018, 12:38:12 CET schrieb Harsh Jain:

Hi Harsh,

> On 15-02-2018 12:47, Stephan Mueller wrote:
> > Am Donnerstag, 15. Februar 2018, 08:03:20 CET schrieb Harsh Jain:
> > 
> > Hi Harsh,
> > 
> >> Even after guarantee of serialization, In the end we will get wrong
> >> result
> >> as mentioned above. which destination side cannot decrypt it. What I feel
> >> is scenario of sending 2 of more IOCB in case of AEAD itself is wrong.
> > 
> > Without the inline IV handling, I would concur.
> 
> Even with Inline IV, We will have 2 Auth Tag. can we authenticate the data
> with 2 Auth Tags?

The AAD and the tag are both sent to the kernel like in the inline IV case as 
part of the data via sendmsg.

Thus, when you use inline IV support, an entire self-contained AEAD cipher 
operation can be defined with one IOCB. Hence, if you have multiple IOCBs, 
inline IV allows fully parallel execution of different AEAD requests.

See the following kernel comment:

/*
 * Encryption operation - The in-place cipher operation is
 * achieved by the following operation:
 *
 * TX SGL: AAD || PT
 *  |  |
 *  | copy |
 *  v  v
 * RX SGL: AAD || PT || Tag

/*
 * Decryption operation - To achieve an in-place cipher
 * operation, the following  SGL structure is used:
 *
 * TX SGL: AAD || CT || Tag
 *  |  | ^
 *  | copy | | Create SGL link.
 *  v  v |
 * RX SGL: AAD || CT +
 */

Note, the TX SGL is what the caller submitted via sendmsg and the RX SGL is 
the data the caller obtains via recvmsg.

Hence, in inline IV support, the caller sends the following:

encryption: IV || AAD || PT

decryption: IV || AAD || CT || Tag
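The two request layouts above are plain concatenation; a userspace sketch of
the corresponding buffer handling is below (function names are made up for
illustration — the inline-IV mode itself is only proposed by this patch set,
not an existing AF_ALG API):

```python
# Byte-layout helpers for the inline-IV request format shown above:
#   encryption request: IV || AAD || PT
#   decryption request: IV || AAD || CT || Tag

def build_aead_request(iv: bytes, aad: bytes, body: bytes) -> bytes:
    # body is PT for encryption, or CT || Tag for decryption
    return iv + aad + body

def split_aead_request(buf: bytes, ivlen: int, aadlen: int):
    # Inverse view: recover the three regions from one flat buffer.
    iv = buf[:ivlen]
    aad = buf[ivlen:ivlen + aadlen]
    body = buf[ivlen + aadlen:]
    return iv, aad, body
```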

> >> We
> >> should not allow this type of requests for AEAD.
> > 
> > "Not allow" as in "technically block"? As a user would only shoot itself
> > when he does that not knowing the consequences, I am not in favor of such
> > an artificial block.
> 
> Agreed, It may not be right thing to do, but if we allowed it, What he will
> do with Auth( each processed with final Block) tags received in each
> request.

Each tag is returned as part of the recvmsg data. Thus, AEAD cipher operations 
can commence fully in parallel if inline IV handling is used.
> 
> I personally feels AEAD IV serialization logic is incomplete without partial
> tag support.

Could you please elaborate what you mean with "partial tag" support?

Ciao
Stephan




Re: [PATCH v3 1/4] crypto: AF_ALG AIO - lock context IV

2018-02-15 Thread Harsh Jain


On 15-02-2018 12:47, Stephan Mueller wrote:
> Am Donnerstag, 15. Februar 2018, 08:03:20 CET schrieb Harsh Jain:
>
> Hi Harsh,
>
>> Even after guarantee of serialization, In the end we will get wrong result
>> as mentioned above. which destination side cannot decrypt it. What I feel
>> is scenario of sending 2 of more IOCB in case of AEAD itself is wrong.
> Without the inline IV handling, I would concur.
Even with inline IV, we will have 2 auth tags. Can we authenticate the data 
with 2 auth tags?
>
>> We
>> should not allow this type of requests for AEAD.
> "Not allow" as in "technically block"? As a user would only shoot itself when 
> he does that not knowing the consequences, I am not in favor of such an 
> artificial block.
Agreed, it may not be the right thing to do, but if we allow it, what will he 
do with the auth tags (each computed over its own final block) received in 
each request?

I personally feel the AEAD IV serialization logic is incomplete without 
partial tag support.

>> Can you think of any use
>> case it is going to solve?
> Well, I could fathom a use case for this. In FIPS 140-2 (yes, a term not well 
> received by some here), NIST insists for GCM that the IV is handled by the 
> cryptographic implementation.
>
> So, when using GCM for TLS, for example, the GCM implementation would know a 
> bit about how the IV is updated as a session ID. I.e. after the end of one 
> AEAD operation, the IV is written back but modified such to comply with the 
> rules of some higher-level protocol. Thus, if such a scenario is implemented by 
> a driver here, multiple IOCBs could be used with such "TLSified" GCM, for 
> example.
>
> And such "TLSification" could be as simple as implementing an IV generator 
> that can be used with every (AEAD) cipher implementation.
>
>> Can the receiver decrypt (with 2 IOCBs) the same request successfully
>> without knowing the sender has done the operation in 2 requests of size "x"
>> each?
>>> Ciao
>>> Stephan
>
>
> Ciao
> Stephan
>
>
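The kind of IV generator Stephan describes above could look roughly like the
sketch below: after each AEAD operation the 96-bit nonce is advanced
deterministically by XORing a 64-bit sequence number into its low bytes
(TLS-1.3 style). Everything here — the struct, the function name, the sizes —
is a hypothetical illustration, not an existing kernel interface.

```c
/*
 * Hypothetical per-session IV generator: a static 96-bit nonce XORed with
 * a 64-bit record sequence number, advanced after every AEAD operation.
 */
#include <stdint.h>
#include <string.h>

#define GCM_IV_LEN 12

struct seq_iv_gen {
	uint8_t  static_iv[GCM_IV_LEN];
	uint64_t seq;
};

static void seq_iv_next(struct seq_iv_gen *gen, uint8_t iv[GCM_IV_LEN])
{
	int i;

	memcpy(iv, gen->static_iv, GCM_IV_LEN);
	/* XOR the big-endian sequence number into the low 8 bytes */
	for (i = 0; i < 8; i++)
		iv[GCM_IV_LEN - 1 - i] ^= (uint8_t)(gen->seq >> (8 * i));
	gen->seq++;
}
```

A driver implementing something along these lines could accept multiple IOCBs,
since the IV for each request would be derived internally rather than supplied
(and serialized) by userspace.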



Re: [PATCH 0/5] hwrng: stm32 - Improvement for stm32-rng

2018-02-15 Thread Herbert Xu
On Mon, Jan 29, 2018 at 06:05:16PM +0100, Lionel Debieve wrote:
> This set of patches adds extended functionality to the stm32 rng
> driver.
> Patch #1 includes a reset during probe to clear any error status
> that can occur during the boot process and to preserve rng integrity.
> 
> Patch #3 adds a new property to manage the clock error detection
> feature, which can be disabled on specific targets.
> 
> Patch #5 reworks the read timeout calculation, which was previously
> based on a loop count and is now based on a timer.
> 
> Lionel Debieve (5):
>   hwrng: stm32 - add reset during probe
>   dt-bindings: rng: add reset node for stm32
>   hwrng: stm32 - allow disable clock error detection
>   dt-bindings: rng: add clock detection error for stm32
>   hwrng: stm32 - rework read timeout calculation
> 
>  .../devicetree/bindings/rng/st,stm32-rng.txt   |  4 ++
>  drivers/char/hw_random/stm32-rng.c | 44 ++
>  2 files changed, 32 insertions(+), 16 deletions(-)
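The timeout rework described for patch #5 amounts to polling a data-ready flag
against a deadline instead of counting loop iterations. The sketch below is a
userspace illustration with a stand-in register accessor (the real driver
would use jiffies/ktime-based helpers), not the actual stm32-rng code.

```c
/*
 * Illustrative only: deadline-based polling instead of a fixed loop count.
 * reg_read_sr() fakes the RNG status register becoming ready after a few
 * polls so the sketch is self-contained.
 */
#include <stdbool.h>
#include <time.h>

#define RNG_SR_DRDY 0x1

static unsigned int fake_sr_reads;

static unsigned int reg_read_sr(void)
{
	return (++fake_sr_reads > 3) ? RNG_SR_DRDY : 0;
}

static bool wait_data_ready(double timeout_sec)
{
	clock_t deadline = clock() + (clock_t)(timeout_sec * CLOCKS_PER_SEC);

	while (clock() < deadline) {
		if (reg_read_sr() & RNG_SR_DRDY)
			return true;
	}
	return false;	/* a genuine timeout, independent of CPU speed */
}
```

The advantage over a loop counter is that the timeout no longer scales with
CPU frequency: the bound is expressed in time, not iterations.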

I never received patch 5.

Which trees are patches 2 and 4 meant to go through?

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v3 2/3] MIPS: crypto: Add crc32 and crc32c hw accelerated module

2018-02-15 Thread Herbert Xu
On Fri, Feb 09, 2018 at 10:11:06PM +, James Hogan wrote:
> From: Marcin Nowakowski 
> 
> This module registers crc32 and crc32c algorithms that use the
> optional CRC32[bhwd] and CRC32C[bhwd] instructions in MIPSr6 cores.
> 
> Signed-off-by: Marcin Nowakowski 
> Signed-off-by: James Hogan 
> Cc: Ralf Baechle 
> Cc: Herbert Xu 
> Cc: "David S. Miller" 
> Cc: linux-m...@linux-mips.org
> Cc: linux-crypto@vger.kernel.org
> ---
> Changes in v3:
>  - Convert to using assembler macros to support CRC instructions on
>older toolchains, using the helpers merged for 4.16. This removes the
>need to hardcode either rt or rs (i.e. as $v0 (CRC_REGISTER) and
>$at), and drops the C "register" keywords sprinkled everywhere.
>  - Minor whitespace rearrangement of _CRC32 macro.
>  - Add SPDX-License-Identifier to crc32-mips.c and the crypto Makefile.
>  - Update copyright from ImgTec to MIPS Tech, LLC.
>  - Update imgtec.com email addresses to mips.com.
> 
> Changes in v2:
>  - minor code refactoring as suggested by JamesH which produces
>a better assembly output for 32-bit builds
> ---
>  arch/mips/Kconfig |   4 +-
>  arch/mips/Makefile|   3 +-
>  arch/mips/crypto/Makefile |   6 +-
>  arch/mips/crypto/crc32-mips.c | 346 +++-
>  crypto/Kconfig|   9 +-
>  5 files changed, 368 insertions(+)
>  create mode 100644 arch/mips/crypto/Makefile
>  create mode 100644 arch/mips/crypto/crc32-mips.c
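As background, what the optional CRC32[bhwd]/CRC32C[bhwd] instructions
accelerate is the classic bitwise reflected CRC, sketched in portable C below
with the CRC-32 polynomial 0xEDB88320 (CRC-32C uses 0x82F63B78); the hardware
performs the equivalent of the inner loop for a byte/half/word/double per
instruction. This is a reference sketch, not the code from the patch.

```c
/* Portable-C reference for what the hardware CRC instructions compute. */
#include <stddef.h>
#include <stdint.h>

static uint32_t crc32_soft(uint32_t crc, const uint8_t *p, size_t len)
{
	int i;

	crc = ~crc;			/* pre-invert, as in the standard CRC-32 */
	while (len--) {
		crc ^= *p++;
		for (i = 0; i < 8; i++)
			crc = (crc >> 1) ^ ((crc & 1) ? 0xEDB88320u : 0);
	}
	return ~crc;			/* final inversion */
}
```

The standard check value for the input "123456789" with this polynomial is
0xCBF43926, which is handy for verifying any accelerated implementation.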

Acked-by: Herbert Xu 
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 2/2] crypto: caam: Use common error handling code in four functions

2018-02-15 Thread Horia Geantă
On 2/14/2018 8:32 PM, SF Markus Elfring wrote:
> From: Markus Elfring 
> Date: Wed, 14 Feb 2018 19:14:49 +0100
> 
> Add jump targets so that a bit of exception handling can be better reused
> at the end of these functions.
> 
> Signed-off-by: Markus Elfring 
[snip]
> @@ -1096,6 +1092,7 @@ static int ahash_digest(struct ahash_request *req)
>   if (!ret) {
>   ret = -EINPROGRESS;
>   } else {
> +unmap_hash:
>   ahash_unmap(jrdev, edesc, req, digestsize);
>   kfree(edesc);
>   }
> 
I understand jumps are a necessary evil for dealing with shortcomings of C;
however, please avoid jumping into an if/else branch.

Code could be rewritten as:

	if (!ret)
		return -EINPROGRESS;

unmap_hash:
	ahash_unmap(jrdev, edesc, req, digestsize);
	kfree(edesc);
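For completeness, a self-contained sketch of the resulting shape: early return
for the in-progress case, with the remaining error paths funneled through one
cleanup label. The stub functions and hard-coded errno values below stand in
for the real caam helpers.

```c
/* Illustrative shape only; stubs stand in for the real caam routines. */
#include <stdlib.h>

struct edesc { int dummy; };

static void ahash_unmap_stub(struct edesc *edesc)
{
	(void)edesc;	/* stand-in for ahash_unmap(jrdev, edesc, ...) */
}

static int ahash_digest_sketch(int fail_map, int fail_submit)
{
	struct edesc *edesc;
	int ret;

	edesc = malloc(sizeof(*edesc));
	if (!edesc)
		return -12;		/* -ENOMEM */

	if (fail_map) {
		ret = -5;		/* -EIO */
		goto unmap_hash;	/* earlier failures reuse the cleanup */
	}

	ret = fail_submit ? -5 : 0;	/* stand-in for the job-ring enqueue */
	if (!ret)
		return -115;		/* -EINPROGRESS; freed by completion */

unmap_hash:
	ahash_unmap_stub(edesc);
	free(edesc);
	return ret;
}
```

No jump lands inside an if/else arm, and every error path shares exactly one
unmap/free sequence.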

Thanks,
Horia