Re: XFRM Stats

2017-06-21 Thread Raj Ammanur
oops yes, completely forgot the lifetime stats. Thanks Herbert.

I will check this out, but after a rekey, are the stats still preserved?

thanks
--Raj

On Wed, Jun 21, 2017 at 7:42 PM, Herbert Xu  wrote:
> Raj Ammanur  wrote:
>> Hi Crypto/Xfrm Team,
>>
>> I was wondering if there has been any discussion in the past
>> about adding stats in Xfrm to count the packets going in/out of
>> this sub-system? Right now we only have error stats.
>
> Have you looked at ip -s x s?
>
> Cheers,
> --
> Email: Herbert Xu 
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 3/4] crypto: ccp - Add support for RSA on the CCP

2017-06-21 Thread Stephan Müller
Am Donnerstag, 22. Juni 2017, 00:48:01 CEST schrieb Gary R Hook:

Hi Gary,

> Wire up the v3 CCP as a cipher provider.
> 
> Signed-off-by: Gary R Hook 
> ---
>  drivers/crypto/ccp/Makefile  |1
>  drivers/crypto/ccp/ccp-crypto-main.c |   21 ++
>  drivers/crypto/ccp/ccp-crypto-rsa.c  |  286 ++
>  drivers/crypto/ccp/ccp-crypto.h  |   31 
>  drivers/crypto/ccp/ccp-debugfs.c |1
>  drivers/crypto/ccp/ccp-dev.c |1
>  drivers/crypto/ccp/ccp-ops.c |2
>  include/linux/ccp.h  |1
>  8 files changed, 341 insertions(+), 3 deletions(-)
>  create mode 100644 drivers/crypto/ccp/ccp-crypto-rsa.c
> 
> diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
> index 59493fd3a751..439bc2fcb464 100644
> --- a/drivers/crypto/ccp/Makefile
> +++ b/drivers/crypto/ccp/Makefile
> @@ -15,4 +15,5 @@ ccp-crypto-objs := ccp-crypto-main.o \
>  ccp-crypto-aes-xts.o \
>  ccp-crypto-aes-galois.o \
>  ccp-crypto-des3.o \
> +ccp-crypto-rsa.o \
>  ccp-crypto-sha.o
> diff --git a/drivers/crypto/ccp/ccp-crypto-main.c b/drivers/crypto/ccp/ccp-crypto-main.c
> index 8dccbddabef1..dd7d00c680e7 100644
> --- a/drivers/crypto/ccp/ccp-crypto-main.c
> +++ b/drivers/crypto/ccp/ccp-crypto-main.c
> @@ -17,6 +17,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
> 
>  #include "ccp-crypto.h"
> 
> @@ -37,10 +38,15 @@
>  module_param(des3_disable, uint, 0444);
>  MODULE_PARM_DESC(des3_disable, "Disable use of 3DES - any non-zero value");
> 
> +static unsigned int rsa_disable;
> +module_param(rsa_disable, uint, 0444);
> +MODULE_PARM_DESC(rsa_disable, "Disable use of RSA - any non-zero value");
> +
>  /* List heads for the supported algorithms */
>  static LIST_HEAD(hash_algs);
>  static LIST_HEAD(cipher_algs);
>  static LIST_HEAD(aead_algs);
> +static LIST_HEAD(akcipher_algs);
> 
>  /* For any tfm, requests for that tfm must be returned on the order
>   * received.  With multiple queues available, the CCP can process more
> @@ -358,6 +364,14 @@ static int ccp_register_algs(void)
>   return ret;
>   }
> 
> + if (!rsa_disable) {
> + ret = ccp_register_rsa_algs(&akcipher_algs);
> + if (ret) {
> + rsa_disable = 1;
> + return ret;
> + }
> + }
> +
>   return 0;
>  }
> 
> @@ -366,6 +380,7 @@ static void ccp_unregister_algs(void)
>   struct ccp_crypto_ahash_alg *ahash_alg, *ahash_tmp;
>   struct ccp_crypto_ablkcipher_alg *ablk_alg, *ablk_tmp;
>   struct ccp_crypto_aead *aead_alg, *aead_tmp;
> + struct ccp_crypto_akcipher_alg *akc_alg, *akc_tmp;
> 
>   list_for_each_entry_safe(ahash_alg, ahash_tmp, &hash_algs, entry) {
>   crypto_unregister_ahash(&ahash_alg->alg);
> @@ -384,6 +399,12 @@ static void ccp_unregister_algs(void)
>   list_del(&aead_alg->entry);
>   kfree(aead_alg);
>   }
> +
> + list_for_each_entry_safe(akc_alg, akc_tmp, &akcipher_algs, entry) {
> + crypto_unregister_akcipher(&akc_alg->alg);
> + list_del(&akc_alg->entry);
> + kfree(akc_alg);
> + }
>  }
> 
>  static int ccp_crypto_init(void)
> diff --git a/drivers/crypto/ccp/ccp-crypto-rsa.c b/drivers/crypto/ccp/ccp-crypto-rsa.c
> new file mode 100644
> index ..4a2a71463594
> --- /dev/null
> +++ b/drivers/crypto/ccp/ccp-crypto-rsa.c
> @@ -0,0 +1,286 @@
> +/*
> + * AMD Cryptographic Coprocessor (CCP) RSA crypto API support
> + *
> + * Copyright (C) 2016 Advanced Micro Devices, Inc.
> + *
> + * Author: Gary R Hook 
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include "ccp-crypto.h"
> +
> +static inline struct akcipher_request *akcipher_request_cast(
> + struct crypto_async_request *req)
> +{
> + return container_of(req, struct akcipher_request, base);
> +}
> +
> +static int ccp_rsa_complete(struct crypto_async_request *async_req, int ret)
> +{
> + struct akcipher_request *req = akcipher_request_cast(async_req);
> + struct ccp_rsa_req_ctx *rctx = akcipher_request_ctx(req);
> +
> + if (!ret)
> + req->dst_len = rctx->cmd.u.rsa.key_size >> 3;
> +
> + ret = 0;
> +
> + return ret;
> +}
> +
> +static unsigned int ccp_rsa_maxsize(struct crypto_akcipher *tfm)
> +{
> + return CCP_RSA_MAXMOD;
> +}
> +
> +static int ccp_rsa_crypt(struct akcipher_request *req, bool encrypt)
> +{
> + struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
> + struct ccp_ctx *ctx = akcipher_tfm_ctx(tfm);
> + struct ccp_rsa_req_ctx *rctx = akcipher_request_ctx(req);
> + 

[PATCH v6 0/2] IV Generation algorithms for dm-crypt

2017-06-21 Thread Binoy Jayan
===
dm-crypt optimization for larger block sizes
===

Currently, the IV generation algorithms are implemented in dm-crypt.c. The goal
is to move these algorithms from the dm layer to the kernel crypto layer by
implementing them as template ciphers, so they can be used with algorithms like
AES and with multiple modes like CBC, ECB, etc. As part of this patchset, the
IV-generation code is moved from the dm layer to the crypto layer, and the
dm layer is adapted to send a whole 'bio' (as defined in the block layer)
at a time. Each bio contains the in-memory representation of physically
contiguous disk blocks. Since the bio itself may not be contiguous in main
memory, the dm layer sets up a chained scatterlist of these blocks, split into
physically contiguous segments in memory, so that DMA can be performed.

One challenge in doing so is that the IVs are generated based on a 512-byte
sector number. This in fact limits the block size to 512 bytes, but that should
not be a problem if hardware with IV generation support is used. The geniv code
itself splits the segments into sectors so it can choose the IV based on the
sector number; hardware could model this more efficiently by not splitting up
the segments in the bio.

Another challenge is that dm-crypt has an option to use multiple keys, with
key selection based on the sector number. If the whole bio were encrypted /
decrypted with the same key, the encrypted volumes would not be compatible
with the original dm-crypt [without the changes]. So the key selection code
is moved to the crypto layer so that neighboring sectors are still encrypted
with different keys.

The dm layer allocates space for the IV. The hardware drivers can choose to
use this space to generate their IVs sequentially, or allocate their own.
This could be moved to the crypto layer too; that decision is postponed until
the requirements for integrating Milan's changes are clear.

Interface to the crypto layer - include/crypto/geniv.h

More information on test procedure can be found in v1.
Results of performance tests with software crypto in v5.

The patch 'crypto: Multikey template for essiv' depends on
the following patches by Gilad:
 MAINTAINERS: add Gilad BY as maintainer for ccree
 staging: ccree: add devicetree bindings
 staging: ccree: add TODO list
 staging: add ccree crypto driver

Revisions:
--

v1: https://patchwork.kernel.org/patch/9439175
v2: https://patchwork.kernel.org/patch/9471923
v3: https://lkml.org/lkml/2017/1/18/170
v4: https://patchwork.kernel.org/patch/9559665
v5: https://patchwork.kernel.org/patch/9669237

v5 --> v6:
--

1. Moved allocation of initialization vectors to the IV generator
2. A few cosmetic changes as a consequence of the above
3. Converted a few logical expressions to boolean expressions for faster evaluation
4. Included a multikey template for splitting keys.
   This needs testing with real hardware (Juno with ccree)
   and also modification. It is only for testing and not
   for inclusion upstream.

v4 --> v5
--

1. Fix for the multiple-instance issue in /proc/crypto
2. A few cosmetic changes, including struct alignment
3. Simplified 'struct geniv_req_info'

v3 --> v4
--
Fix for the bug reported by Gilad Ben-Yossef:
the element '__ctx' in 'struct skcipher_request req' overflowed into the
element 'struct scatterlist src', which immediately follows 'req' in
'struct geniv_subreq', and corrupted src.

v2 --> v3
--

1. Moved the IV algorithms in dm-crypt.c for control
2. Key management code moved from the dm layer to the crypto layer
   so that cipher instance selection can be made depending on key_index
3. Revision v2 had scatterlist nodes created for every sector in the bio.
   It is modified to create only one scatterlist node, to reduce the memory
   footprint. Synchronous requests are processed sequentially; asynchronous
   requests are processed in parallel and freed in the async callback.
4. Changed allocation of sub-requests to use a mempool

v1 --> v2
--

1. dm-crypt changes to process larger block sizes (one segment in a bio)
2. Incorporated changes w.r.t. comments from Herbert.


Binoy Jayan (2):
  crypto: Add IV generation algorithms
  crypto: Multikey template for essiv

 drivers/md/dm-crypt.c| 1940 +++---
 drivers/staging/ccree/Makefile   |2 +-
 drivers/staging/ccree/essiv.c|  777 +++
 drivers/staging/ccree/essiv_sw.c | 1040 
 include/crypto/geniv.h   |   46 +
 5 files changed, 3251 insertions(+), 554 deletions(-)
 create mode 100644 drivers/staging/ccree/essiv.c
 create mode 100644 drivers/staging/ccree/essiv_sw.c
 create mode 100644 include/crypto/geniv.h

-- 
Binoy Jayan



[PATCH v6 2/2] crypto: Multikey template for essiv

2017-06-21 Thread Binoy Jayan
Just for reference and to get the performance numbers.
Not for merging.

Depends on the following patches by Gilad:
 MAINTAINERS: add Gilad BY as maintainer for ccree
 staging: ccree: add devicetree bindings
 staging: ccree: add TODO list
 staging: add ccree crypto driver

A multikey template implementation which calls the underlying
IV generator cum crypto algorithm 'essiv-aes-du512-dx'. This
template sits on top of the underlying IV generator and accepts
a key length that is a multiple of the underlying key length.
This has not yet been tested on Juno with the CryptoCell accelerator
for which it was written.

The underlying IV generator 'essiv-aes-du512-dx' generates an IV for
every 512-byte block.

Signed-off-by: Binoy Jayan 
---
 drivers/md/dm-crypt.c|5 +-
 drivers/staging/ccree/Makefile   |2 +-
 drivers/staging/ccree/essiv.c|  777 
 drivers/staging/ccree/essiv_sw.c | 1040 ++
 4 files changed, 1821 insertions(+), 3 deletions(-)
 create mode 100644 drivers/staging/ccree/essiv.c
 create mode 100644 drivers/staging/ccree/essiv_sw.c

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index bef54f5..32f75dd 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1555,7 +1555,8 @@ static int __init geniv_register_algs(void)
if (err)
goto out_undo_plain;
 
-   err = crypto_register_template(_essiv_tmpl);
+   err = 0;
+   // err = crypto_register_template(_essiv_tmpl);
if (err)
goto out_undo_plain64;
 
@@ -1594,7 +1595,7 @@ static void __exit geniv_deregister_algs(void)
 {
crypto_unregister_template(_plain_tmpl);
crypto_unregister_template(_plain64_tmpl);
-   crypto_unregister_template(_essiv_tmpl);
+   // crypto_unregister_template(_essiv_tmpl);
crypto_unregister_template(_benbi_tmpl);
crypto_unregister_template(_null_tmpl);
crypto_unregister_template(_lmk_tmpl);
diff --git a/drivers/staging/ccree/Makefile b/drivers/staging/ccree/Makefile
index 44f3e3e..524e930 100644
--- a/drivers/staging/ccree/Makefile
+++ b/drivers/staging/ccree/Makefile
@@ -1,3 +1,3 @@
 obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o
-ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_aead.o ssi_ivgen.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
+ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_aead.o ssi_ivgen.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o essiv.o
 ccree-$(CCREE_FIPS_SUPPORT) += ssi_fips.o ssi_fips_ll.o ssi_fips_ext.o ssi_fips_local.o
diff --git a/drivers/staging/ccree/essiv.c b/drivers/staging/ccree/essiv.c
new file mode 100644
index 000..719b8bf
--- /dev/null
+++ b/drivers/staging/ccree/essiv.c
@@ -0,0 +1,777 @@
+/*
+ * Copyright (C) 2003 Jana Saout 
+ * Copyright (C) 2004 Clemens Fruhwirth 
+ * Copyright (C) 2006-2015 Red Hat, Inc. All rights reserved.
+ * Copyright (C) 2013 Milan Broz 
+ *
+ * This file is released under the GPL.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#define DM_MSG_PREFIX  "crypt"
+#define MAX_SG_LIST(BIO_MAX_PAGES * 8)
+#define MIN_IOS64
+#define LMK_SEED_SIZE  64 /* hash + 0 */
+#define TCW_WHITENING_SIZE 16
+
+struct geniv_ctx;
+struct geniv_req_ctx;
+
+/* Sub request for each of the skcipher_request's for a segment */
+struct geniv_subreq {
+   struct scatterlist src;
+   struct scatterlist dst;
+   struct geniv_req_ctx *rctx;
+   struct skcipher_request req CRYPTO_MINALIGN_ATTR;
+};
+
+struct geniv_req_ctx {
+   struct geniv_subreq *subreq;
+   int is_write;
+   sector_t iv_sector;
+   unsigned int nents;
+   struct completion restart;
+   atomic_t req_pending;
+   struct skcipher_request *req;
+};
+
+struct crypt_iv_operations {
+   int (*ctr)(struct geniv_ctx *ctx);
+   void (*dtr)(struct geniv_ctx *ctx);
+   int (*init)(struct geniv_ctx *ctx);
+   int (*wipe)(struct geniv_ctx *ctx);
+   int (*generator)(struct geniv_ctx *ctx,
+struct geniv_req_ctx *rctx,
+struct geniv_subreq *subreq, u8 *iv);
+   int (*post)(struct geniv_ctx *ctx,
+   struct geniv_req_ctx *rctx,
+   struct geniv_subreq *subreq, u8 *iv);
+};
+
+struct geniv_ctx {
+   unsigned int tfms_count;
+   struct crypto_skcipher *child;
+   struct crypto_skcipher **tfms;
+   char *ivmode;
+   unsigned int iv_size;
+   unsigned int iv_start;

[PATCH v6 1/2] crypto: Add IV generation algorithms

2017-06-21 Thread Binoy Jayan
Just for reference. Not for merging.

Currently, the IV generation algorithms are implemented in dm-crypt.c.
The goal is to move these algorithms from the dm layer to the kernel
crypto layer by implementing them as template ciphers, so they can be
implemented in hardware for performance. As part of this patchset, the
IV-generation code is moved from the dm layer to the crypto layer, and
the dm layer is adapted to send a whole 'bio' (as defined in the block
layer) at a time. Each bio contains an in-memory representation of
physically contiguous disk blocks. The dm layer sets up a chained
scatterlist of these blocks, split into physically contiguous segments in
memory, so that DMA can be performed. Also, the key management code is
moved from the dm layer to the crypto layer, since the key selection for
encrypting neighboring sectors depends on the keycount.

Synchronous crypto requests to encrypt/decrypt a sector are processed
sequentially. Asynchronous requests, if processed in parallel, are freed
in the async callback. The storage space for the initialization vectors
is allocated in the IV generator implementations.

Interface to the crypto layer - include/crypto/geniv.h

Signed-off-by: Binoy Jayan 
---
 drivers/md/dm-crypt.c  | 1939 ++--
 include/crypto/geniv.h |   46 ++
 2 files changed, 1432 insertions(+), 553 deletions(-)
 create mode 100644 include/crypto/geniv.h

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 389a363..bef54f5 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -32,170 +32,120 @@
 #include 
 #include 
 #include 
-
 #include 
-
-#define DM_MSG_PREFIX "crypt"
-
-/*
- * context holding the current state of a multi-part conversion
- */
-struct convert_context {
-   struct completion restart;
-   struct bio *bio_in;
-   struct bio *bio_out;
-   struct bvec_iter iter_in;
-   struct bvec_iter iter_out;
-   sector_t cc_sector;
-   atomic_t cc_pending;
-   struct skcipher_request *req;
+#include 
+#include 
+#include 
+#include 
+
+#define DM_MSG_PREFIX  "crypt"
+#define MAX_SG_LIST(BIO_MAX_PAGES * 8)
+#define MIN_IOS64
+#define LMK_SEED_SIZE  64 /* hash + 0 */
+#define TCW_WHITENING_SIZE 16
+
+struct geniv_ctx;
+struct geniv_req_ctx;
+
+/* Sub request for each of the skcipher_request's for a segment */
+struct geniv_subreq {
+   struct scatterlist src;
+   struct scatterlist dst;
+   struct geniv_req_ctx *rctx;
+   struct skcipher_request req CRYPTO_MINALIGN_ATTR;
 };
 
-/*
- * per bio private data
- */
-struct dm_crypt_io {
-   struct crypt_config *cc;
-   struct bio *base_bio;
-   struct work_struct work;
-
-   struct convert_context ctx;
-
-   atomic_t io_pending;
-   int error;
-   sector_t sector;
-
-   struct rb_node rb_node;
-} CRYPTO_MINALIGN_ATTR;
-
-struct dm_crypt_request {
-   struct convert_context *ctx;
-   struct scatterlist sg_in;
-   struct scatterlist sg_out;
+struct geniv_req_ctx {
+   struct geniv_subreq *subreq;
+   int is_write;
sector_t iv_sector;
+   unsigned int nents;
+   struct completion restart;
+   atomic_t req_pending;
+   struct skcipher_request *req;
 };
 
-struct crypt_config;
-
 struct crypt_iv_operations {
-   int (*ctr)(struct crypt_config *cc, struct dm_target *ti,
-  const char *opts);
-   void (*dtr)(struct crypt_config *cc);
-   int (*init)(struct crypt_config *cc);
-   int (*wipe)(struct crypt_config *cc);
-   int (*generator)(struct crypt_config *cc, u8 *iv,
-struct dm_crypt_request *dmreq);
-   int (*post)(struct crypt_config *cc, u8 *iv,
-   struct dm_crypt_request *dmreq);
+   int (*ctr)(struct geniv_ctx *ctx);
+   void (*dtr)(struct geniv_ctx *ctx);
+   int (*init)(struct geniv_ctx *ctx);
+   int (*wipe)(struct geniv_ctx *ctx);
+   int (*generator)(struct geniv_ctx *ctx,
+struct geniv_req_ctx *rctx,
+struct geniv_subreq *subreq, u8 *iv);
+   int (*post)(struct geniv_ctx *ctx,
+   struct geniv_req_ctx *rctx,
+   struct geniv_subreq *subreq, u8 *iv);
 };
 
-struct iv_essiv_private {
+struct geniv_essiv_private {
struct crypto_ahash *hash_tfm;
u8 *salt;
 };
 
-struct iv_benbi_private {
+struct geniv_benbi_private {
int shift;
 };
 
-#define LMK_SEED_SIZE 64 /* hash + 0 */
-struct iv_lmk_private {
+struct geniv_lmk_private {
struct crypto_shash *hash_tfm;
u8 *seed;
 };
 
-#define TCW_WHITENING_SIZE 16
-struct iv_tcw_private {
+struct geniv_tcw_private {
struct crypto_shash *crc32_tfm;
u8 *iv_seed;
u8 *whitening;
 };
 
-/*
- * Crypt: maps a linear range of a block device
- * and encrypts / decrypts at the same time.
- */
-enum flags { 

Re: XFRM Stats

2017-06-21 Thread Herbert Xu
Raj Ammanur  wrote:
> Hi Crypto/Xfrm Team,
> 
> I was wondering if there has been any discussion in the past
> about adding stats in Xfrm to count the packets going in/out of
> this sub-system? Right now we only have error stats.

Have you looked at ip -s x s?

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [RFC PATCH] gcm - fix setkey cache coherence issues

2017-06-21 Thread Herbert Xu
On Wed, Jun 21, 2017 at 05:29:21PM +0300, Radu Solea wrote:
> Generic GCM is likely to end up using a hardware accelerator to do
> part of the job. Allocating hash, iv and result in a contiguous memory
> area increases the risk of DMA-mapping multiple ranges on the same
> cacheline. Also, having DMA- and CPU-written data on the same cacheline
> will cause coherence issues.
> 
> Signed-off-by: Radu Solea 
> ---
> Hi!
> 
> I've encountered cache coherence issues when using GCM with CAAM and this was
> one way of fixing them, but it has its drawbacks. Another would be to allocate
> each element separately instead of all at once, but that only decreases the
> likelihood of this happening. Does anyone know of a better way of fixing this?

I don't get it.  You're modifying the software version of GCM, and
the function is marked as static.  How can this patch have any effect
on CAAM?

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v2 6/6] ima: Support module-style appended signatures for appraisal

2017-06-21 Thread Mimi Zohar
On Wed, 2017-06-21 at 14:45 -0300, Thiago Jung Bauermann wrote:
> Hello Mimi,
> 
> Thanks for your review, and for queuing the other patches in this series.
> 
> Mimi Zohar  writes:
> > On Wed, 2017-06-07 at 22:49 -0300, Thiago Jung Bauermann wrote:
> >> This patch introduces the modsig keyword to the IMA policy syntax to
> >> specify that a given hook should expect the file to have the IMA signature
> >> appended to it.
> >
> > Thank you, Thiago. Appended signatures seem to be working properly now
> > with multiple keys on the IMA keyring.
> 
> Great news!
> 
> > The length of this patch description is a good indication that this
> > patch needs to be broken up for easier review. A few
> > comments/suggestions inline below.
> 
> Ok, I will try to break it up, and also patch 5 as you suggested.
> 
> >> diff --git a/security/integrity/digsig.c b/security/integrity/digsig.c
> >> index 06554c448dce..9190c9058f4f 100644
> >> --- a/security/integrity/digsig.c
> >> +++ b/security/integrity/digsig.c
> >> @@ -48,11 +48,10 @@ static bool init_keyring __initdata;
> >>  #define restrict_link_to_ima restrict_link_by_builtin_trusted
> >>  #endif
> >> 
> >> -int integrity_digsig_verify(const unsigned int id, const char *sig, int 
> >> siglen,
> >> -  const char *digest, int digestlen)
> >> +struct key *integrity_keyring_from_id(const unsigned int id)
> >>  {
> >> -  if (id >= INTEGRITY_KEYRING_MAX || siglen < 2)
> >> -  return -EINVAL;
> >> +  if (id >= INTEGRITY_KEYRING_MAX)
> >> +  return ERR_PTR(-EINVAL);
> >> 
> >
> > When splitting up this patch, the addition of this new function could
> > be a separate patch. The patch description would explain the need for
> > a new function.
> 
> Ok, will do for v3.
> 
> >> @@ -229,10 +234,14 @@ int ima_appraise_measurement(enum ima_hooks func,
> >>goto out;
> >>}
> >> 
> >> -  status = evm_verifyxattr(dentry, XATTR_NAME_IMA, xattr_value, rc, iint);
> >> -  if ((status != INTEGRITY_PASS) && (status != INTEGRITY_UNKNOWN)) {
> >> -  if ((status == INTEGRITY_NOLABEL)
> >> -  || (status == INTEGRITY_NOXATTRS))
> >> +  /* Appended signatures aren't protected by EVM. */
> >> +  status = evm_verifyxattr(dentry, XATTR_NAME_IMA,
> >> +   xattr_value->type == IMA_MODSIG ?
> >> +   NULL : xattr_value, rc, iint);
> >> +  if (status != INTEGRITY_PASS && status != INTEGRITY_UNKNOWN &&
> >> +  !(xattr_value->type == IMA_MODSIG &&
> >> +(status == INTEGRITY_NOLABEL || status == INTEGRITY_NOXATTRS))) {
> >
> > This was messy to begin with, and now it is even more messy. For
> > appended signatures, we're only interested in INTEGRITY_FAIL. Maybe
> > leave the existing "if" clause alone and define a new "if" clause.
> 
> Ok, is this what you had in mind?
> 
> @@ -229,8 +237,14 @@ int ima_appraise_measurement(enum ima_hooks func,
>   goto out;
>   }
> 
> - status = evm_verifyxattr(dentry, XATTR_NAME_IMA, xattr_value, rc, iint);
> - if ((status != INTEGRITY_PASS) && (status != INTEGRITY_UNKNOWN)) {
> + /* Appended signatures aren't protected by EVM. */
> + status = evm_verifyxattr(dentry, XATTR_NAME_IMA,
> +  xattr_value->type == IMA_MODSIG ?
> +  NULL : xattr_value, rc, iint);

Yes, maybe add a comment here indicating only verifying other security
xattrs, if they exist.

> + if (xattr_value->type == IMA_MODSIG && status == INTEGRITY_FAIL) {
> + cause = "invalid-HMAC";
> + goto out;
> + } else if (status != INTEGRITY_PASS && status != INTEGRITY_UNKNOWN) {
>   if ((status == INTEGRITY_NOLABEL)
>   || (status == INTEGRITY_NOXATTRS))
>   cause = "missing-HMAC";

> 
> >> @@ -267,11 +276,18 @@ int ima_appraise_measurement(enum ima_hooks func,
> >>status = INTEGRITY_PASS;
> >>break;
> >>case EVM_IMA_XATTR_DIGSIG:
> >> +  case IMA_MODSIG:
> >>iint->flags |= IMA_DIGSIG;
> >> -  rc = integrity_digsig_verify(INTEGRITY_KEYRING_IMA,
> >> -   (const char *)xattr_value, rc,
> >> -   iint->ima_hash->digest,
> >> -   iint->ima_hash->length);
> >> +
> >> +  if (xattr_value->type == EVM_IMA_XATTR_DIGSIG)
> >> +  rc = integrity_digsig_verify(INTEGRITY_KEYRING_IMA,
> >> +   (const char *)xattr_value,
> >> +   rc, iint->ima_hash->digest,
> >> +   iint->ima_hash->length);
> >> +  else
> >> +  rc = ima_modsig_verify(INTEGRITY_KEYRING_IMA,
> >> + xattr_value);
> >> +
> >
> > Perhaps allowing IMA_MODSIG to flow into EVM_IMA_XATTR_DIGSIG 

Re: [kernel-hardening] [PATCH] random: warn when kernel uses unseeded randomness

2017-06-21 Thread Jason A. Donenfeld
Hi Ted,

On Wed, Jun 21, 2017 at 10:38 PM, Theodore Ts'o  wrote:
> I agree completely with all of this.  The following patch replaces the
> current topmost patch on the random.git tree:
> For developers who want to work on improving this situation,
> CONFIG_WARN_UNSEEDED_RANDOM has been renamed to
> CONFIG_WARN_ALL_UNSEEDED_RANDOM.  By default the kernel will always
> print the first use of unseeded randomness.  This way, hopefully the
> security obsessed will be happy that there is _some_ indication when
> the kernel boots there may be a potential issue with that architecture
> or subarchitecture.  To see all uses of unseeded randomness,
> developers can enable CONFIG_WARN_ALL_UNSEEDED_RANDOM.

Seems fine to me.

Acked-by: Jason A. Donenfeld 

Jason


Re: [PATCH] random: silence compiler warnings and fix race

2017-06-21 Thread Jeffrey Walton
On Tue, Jun 20, 2017 at 7:38 PM, Theodore Ts'o  wrote:
> On Tue, Jun 20, 2017 at 11:49:07AM +0200, Jason A. Donenfeld wrote:
>> ...
>>> I more or less agree with you that we should just turn this on for all
>>> users and they'll just have to live with the spam and report odd
>>> entries, and overtime we'll fix all the violations.
>
> There seems to be a fundamental misapprehension that it will be easy
> to "fix all the violations".  For certain hardware types, this is
> not easy, and the "eh, let them get spammed until we get around to
> fixing it" attitude is precisely what I was pushing back against.

I can't speak for others, but for me: I think they will fall into
three categories:

 1. easy to fix
 2. difficult to fix
 3. unable to fix

(1) is low-hanging fruit; those will probably (hopefully?) be
cleared easily.  Like systemd on x86_64 with rdrand and rdseed.
There's no reason for systemd to find itself starved of entropy on
that platform. (cf., http://github.com/systemd/systemd/issues/4167).

Organizations that find themselves in (3) can choose to use a board or
server and accept the risk, or they can choose to remediate it in
another way. The "other way" may include a capital expenditure and a
hardware refresh.

The central point is, they know about the risk and they can make the decision.

Jeff


Re: [PATCH] crypto: sun4i-ss: support the Security System PRNG

2017-06-21 Thread Herbert Xu
On Wed, Jun 21, 2017 at 08:48:55AM +0200, Maxime Ripard wrote:
> On Tue, Jun 20, 2017 at 01:45:36PM +0200, Corentin Labbe wrote:
> > On Tue, Jun 20, 2017 at 11:59:47AM +0200, Maxime Ripard wrote:
> > > Hi,
> > > 
> > > On Tue, Jun 20, 2017 at 10:58:19AM +0200, Corentin Labbe wrote:
> > > > The Security System has a PRNG; this patch adds support for it via
> > > > crypto_rng.
> > > 
> > > This might be a dumb question, but is the CRYPTO_RNG code really
> > > supposed to be used with PRNG?
> > > 
> > 
> > Yes, see recently added drivers/crypto/exynos-rng.c
> 
> It's still not really clear from the commit log (if you're talking
> about c46ea13f55b6) why and if using the RNG code for a PRNG is a good
> idea.

The hwrng interface is meant for true hardware RNGs.  The crypto API
rng interface is primarily intended for PRNGs.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[RESEND,PATCH v4 2/5] crypto : stm32 - Add STM32F4 CRC32 support

2017-06-21 Thread Cosar Dindar
This patch adds CRC (CRC32 crypto) support for the STM32F4 series.

As a hardware limitation, polynomial and key setting are not supported.
They are fixed as 0x4C11DB7 (poly) and 0x (key).
The CRC32C Castagnoli algorithm is not used.

Signed-off-by: Cosar Dindar 
Reviewed-by: Fabien Dessenne 
---

Changes in v4:
- Add Fabien Dessenne's Reviewed-by tag.

 drivers/crypto/stm32/stm32_crc32.c | 68 --
 1 file changed, 58 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/stm32/stm32_crc32.c b/drivers/crypto/stm32/stm32_crc32.c
index ec83b1e..12fbd98 100644
--- a/drivers/crypto/stm32/stm32_crc32.c
+++ b/drivers/crypto/stm32/stm32_crc32.c
@@ -7,6 +7,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 #include 
@@ -39,6 +40,9 @@ struct stm32_crc {
struct clk   *clk;
u8   pending_data[sizeof(u32)];
size_t   nb_pending_bytes;
+   bool key_support;
+   bool poly_support;
+   bool reverse_support;
 };
 
 struct stm32_crc_list {
@@ -106,13 +110,31 @@ static int stm32_crc_init(struct shash_desc *desc)
}
	spin_unlock_bh(&crc_list.lock);
 
-   /* Reset, set key, poly and configure in bit reverse mode */
-   writel(bitrev32(mctx->key), ctx->crc->regs + CRC_INIT);
-   writel(bitrev32(mctx->poly), ctx->crc->regs + CRC_POL);
-   writel(CRC_CR_RESET | CRC_CR_REVERSE, ctx->crc->regs + CRC_CR);
+   /* set key */
+   if (ctx->crc->key_support) {
+   writel(bitrev32(mctx->key), ctx->crc->regs + CRC_INIT);
+   } else if (mctx->key != CRC_INIT_DEFAULT) {
+   dev_err(ctx->crc->dev, "Unsupported key value! Should be: 0x%x\n",
+   CRC_INIT_DEFAULT);
+   return -EINVAL;
+   }
+
+   /* set poly */
+   if (ctx->crc->poly_support)
+   writel(bitrev32(mctx->poly), ctx->crc->regs + CRC_POL);
+
+   /* reset and configure in bit reverse mode if supported */
+   if (ctx->crc->reverse_support)
+   writel(CRC_CR_RESET | CRC_CR_REVERSE, ctx->crc->regs + CRC_CR);
+   else
+   writel(CRC_CR_RESET, ctx->crc->regs + CRC_CR);
+
+   /* store partial result */
+   if (!ctx->crc->reverse_support)
+   ctx->partial = bitrev32(readl(crc->regs + CRC_DR));
+   else
+   ctx->partial = readl(ctx->crc->regs + CRC_DR);
 
-   /* Store partial result */
-   ctx->partial = readl(ctx->crc->regs + CRC_DR);
ctx->crc->nb_pending_bytes = 0;
 
return 0;
@@ -135,7 +157,12 @@ static int stm32_crc_update(struct shash_desc *desc, const u8 *d8,
 
if (crc->nb_pending_bytes == sizeof(u32)) {
/* Process completed pending data */
-   writel(*(u32 *)crc->pending_data, crc->regs + CRC_DR);
+   if (!ctx->crc->reverse_support)
+   writel(bitrev32(*(u32 *)crc->pending_data),
+  crc->regs + CRC_DR);
+   else
+   writel(*(u32 *)crc->pending_data,
+  crc->regs + CRC_DR);
crc->nb_pending_bytes = 0;
}
}
@@ -143,10 +170,16 @@ static int stm32_crc_update(struct shash_desc *desc, const u8 *d8,
d32 = (u32 *)d8;
for (i = 0; i < length >> 2; i++)
/* Process 32 bits data */
-   writel(*(d32++), crc->regs + CRC_DR);
+   if (!ctx->crc->reverse_support)
+   writel(bitrev32(*(d32++)), crc->regs + CRC_DR);
+   else
+   writel(*(d32++), crc->regs + CRC_DR);
 
/* Store partial result */
-   ctx->partial = readl(crc->regs + CRC_DR);
+   if (!ctx->crc->reverse_support)
+   ctx->partial = bitrev32(readl(crc->regs + CRC_DR));
+   else
+   ctx->partial = readl(crc->regs + CRC_DR);
 
/* Check for pending data (non 32 bits) */
length &= 3;
@@ -243,6 +276,7 @@ static int stm32_crc_probe(struct platform_device *pdev)
struct stm32_crc *crc;
struct resource *res;
int ret;
+   int algs_size;
 
crc = devm_kzalloc(dev, sizeof(*crc), GFP_KERNEL);
if (!crc)
@@ -269,13 +303,26 @@ static int stm32_crc_probe(struct platform_device *pdev)
return ret;
}
 
+   /* set key, poly and reverse support if device is of F7 series */
+   if (of_device_is_compatible(crc->dev->of_node, "st,stm32f7-crc")) {
+   crc->key_support = true;
+   crc->poly_support = true;
+   crc->reverse_support = true;
+   }
+
platform_set_drvdata(pdev, crc);
 
	spin_lock(&crc_list.lock);
	list_add(&crc->list, &crc_list.dev_list);
	spin_unlock(&crc_list.lock);
 
-   ret = 

[RESEND,PATCH v4 1/5] dt-bindings : Document the STM32F4 CRC32 binding

2017-06-21 Thread Cosar Dindar
Add device tree binding for STM32F4.

Signed-off-by: Cosar Dindar 
---

 Changes in V4:
- Edited binding explanations.

 Documentation/devicetree/bindings/crypto/st,stm32-crc.txt | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/Documentation/devicetree/bindings/crypto/st,stm32-crc.txt 
b/Documentation/devicetree/bindings/crypto/st,stm32-crc.txt
index 3ba92a5..1e9af69 100644
--- a/Documentation/devicetree/bindings/crypto/st,stm32-crc.txt
+++ b/Documentation/devicetree/bindings/crypto/st,stm32-crc.txt
@@ -1,7 +1,9 @@
 * STMicroelectronics STM32 CRC
 
 Required properties:
-- compatible: Should be "st,stm32f7-crc".
+- compatible: Should be one of the following strings:
+ "st,stm32f7-crc"
+ "st,stm32f4-crc"
 - reg: The address and length of the peripheral registers space
 - clocks: The input clock of the CRC instance
 
-- 
2.7.4



[RESEND,PATCH v4 0/5] Add support for the STM32F4 CRC32

2017-06-21 Thread Cosar Dindar
This patch series adds hardware CRC32 ("Ethernet") calculation support
for the STMicroelectronics STM32F429.

Polynomial and key setting are not supported: the polynomial is fixed
as 0x04C11DB7 and the key (initial CRC value) is fixed as 0xFFFFFFFF.

The module is tested on the STM32F429-disco board with the crypto
testmgr, using test cases with the key 0xFFFFFFFF.

Changes in v4:
  - Edited patch summary.

Cosar Dindar (5):
  dt-bindings : Document the STM32F4 CRC32 binding
  crypto : stm32 - Add STM32F4 CRC32 support
  ARM: dts: stm32: Add CRC support to stm32f429 (Merged-by Alexander TORGUE)
  ARM: dts: stm32: enable CRC32 on stm32429-disco board (Merged-by Alexander 
TORGUE)
  ARM: dts: stm32: enable CRC32 on stm32429i-eval board (Merged-by Alexander 
TORGUE)

 .../devicetree/bindings/crypto/st,stm32-crc.txt|  4 +-
 arch/arm/boot/dts/stm32429i-eval.dts   |  4 ++
 arch/arm/boot/dts/stm32f429-disco.dts  |  4 ++
 arch/arm/boot/dts/stm32f429.dtsi   |  7 +++
 drivers/crypto/stm32/stm32_crc32.c | 68 ++
 5 files changed, 75 insertions(+), 12 deletions(-)

-- 
2.7.4




[PATCH 4/4] crypto: ccp - Expand RSA support for a v5 ccp

2017-06-21 Thread Gary R Hook
A V5 device can accommodate larger keys, as well as read the keys
directly from memory instead of requiring them to be in a local
storage block.


Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/ccp-crypto-rsa.c |5 -
 drivers/crypto/ccp/ccp-crypto.h |1 +
 drivers/crypto/ccp/ccp-dev-v3.c |1 +
 drivers/crypto/ccp/ccp-dev-v5.c |2 ++
 drivers/crypto/ccp/ccp-dev.h|2 ++
 drivers/crypto/ccp/ccp-ops.c|3 ++-
 6 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-rsa.c 
b/drivers/crypto/ccp/ccp-crypto-rsa.c
index 4a2a71463594..93e6b00ce34d 100644
--- a/drivers/crypto/ccp/ccp-crypto-rsa.c
+++ b/drivers/crypto/ccp/ccp-crypto-rsa.c
@@ -43,7 +43,10 @@ static int ccp_rsa_complete(struct crypto_async_request 
*async_req, int ret)
 
 static unsigned int ccp_rsa_maxsize(struct crypto_akcipher *tfm)
 {
-   return CCP_RSA_MAXMOD;
+   if (ccp_version() > CCP_VERSION(3, 0))
+   return CCP5_RSA_MAXMOD;
+   else
+   return CCP_RSA_MAXMOD;
 }
 
 static int ccp_rsa_crypt(struct akcipher_request *req, bool encrypt)
diff --git a/drivers/crypto/ccp/ccp-crypto.h b/drivers/crypto/ccp/ccp-crypto.h
index 5d592ecc9af5..40598894113b 100644
--- a/drivers/crypto/ccp/ccp-crypto.h
+++ b/drivers/crypto/ccp/ccp-crypto.h
@@ -255,6 +255,7 @@ struct ccp_rsa_req_ctx {
 };
 
 #defineCCP_RSA_MAXMOD  (4 * 1024 / 8)
+#defineCCP5_RSA_MAXMOD (16 * 1024 / 8)
 
 /* Common Context Structure */
 struct ccp_ctx {
diff --git a/drivers/crypto/ccp/ccp-dev-v3.c b/drivers/crypto/ccp/ccp-dev-v3.c
index 367c2e30656f..9b159b0a891e 100644
--- a/drivers/crypto/ccp/ccp-dev-v3.c
+++ b/drivers/crypto/ccp/ccp-dev-v3.c
@@ -592,4 +592,5 @@ static void ccp_destroy(struct ccp_device *ccp)
	.perform = &ccp3_actions,
.bar = 2,
.offset = 0x2,
+   .rsamax = CCP_RSA_MAX_WIDTH,
 };
diff --git a/drivers/crypto/ccp/ccp-dev-v5.c b/drivers/crypto/ccp/ccp-dev-v5.c
index 632518efd685..6043552322fd 100644
--- a/drivers/crypto/ccp/ccp-dev-v5.c
+++ b/drivers/crypto/ccp/ccp-dev-v5.c
@@ -1115,6 +1115,7 @@ static void ccp5other_config(struct ccp_device *ccp)
	.perform = &ccp5_actions,
.bar = 2,
.offset = 0x0,
+   .rsamax = CCP5_RSA_MAX_WIDTH,
 };
 
 const struct ccp_vdata ccpv5b = {
@@ -1124,4 +1125,5 @@ static void ccp5other_config(struct ccp_device *ccp)
	.perform = &ccp5_actions,
.bar = 2,
.offset = 0x0,
+   .rsamax = CCP5_RSA_MAX_WIDTH,
 };
diff --git a/drivers/crypto/ccp/ccp-dev.h b/drivers/crypto/ccp/ccp-dev.h
index a70154ac7405..8242cf54d90f 100644
--- a/drivers/crypto/ccp/ccp-dev.h
+++ b/drivers/crypto/ccp/ccp-dev.h
@@ -200,6 +200,7 @@
 #define CCP_SHA_SB_COUNT   1
 
 #define CCP_RSA_MAX_WIDTH  4096
+#define CCP5_RSA_MAX_WIDTH 16384
 
 #define CCP_PASSTHRU_BLOCKSIZE 256
 #define CCP_PASSTHRU_MASKSIZE  32
@@ -677,6 +678,7 @@ struct ccp_vdata {
const struct ccp_actions *perform;
const unsigned int bar;
const unsigned int offset;
+   const unsigned int rsamax;
 };
 
 extern const struct ccp_vdata ccpv3;
diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index 2cdd15a92178..ea5e4ede1eed 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -1737,7 +1737,8 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue *cmd_q, 
struct ccp_cmd *cmd)
unsigned int key_size_bytes;
int ret;
 
-   if (rsa->key_size > CCP_RSA_MAX_WIDTH)
+   /* Check against the maximum allowable size, in bits */
+   if (rsa->key_size > cmd_q->ccp->vdata->rsamax)
return -EINVAL;
 
if (!rsa->exp || !rsa->mod || !rsa->src || !rsa->dst)



[PATCH 3/4] crypto: ccp - Add support for RSA on the CCP

2017-06-21 Thread Gary R Hook
Wire up the v3 CCP as a cipher provider.

Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/Makefile  |1 
 drivers/crypto/ccp/ccp-crypto-main.c |   21 ++
 drivers/crypto/ccp/ccp-crypto-rsa.c  |  286 ++
 drivers/crypto/ccp/ccp-crypto.h  |   31 
 drivers/crypto/ccp/ccp-debugfs.c |1 
 drivers/crypto/ccp/ccp-dev.c |1 
 drivers/crypto/ccp/ccp-ops.c |2 
 include/linux/ccp.h  |1 
 8 files changed, 341 insertions(+), 3 deletions(-)
 create mode 100644 drivers/crypto/ccp/ccp-crypto-rsa.c

diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index 59493fd3a751..439bc2fcb464 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -15,4 +15,5 @@ ccp-crypto-objs := ccp-crypto-main.o \
   ccp-crypto-aes-xts.o \
   ccp-crypto-aes-galois.o \
   ccp-crypto-des3.o \
+  ccp-crypto-rsa.o \
   ccp-crypto-sha.o
diff --git a/drivers/crypto/ccp/ccp-crypto-main.c 
b/drivers/crypto/ccp/ccp-crypto-main.c
index 8dccbddabef1..dd7d00c680e7 100644
--- a/drivers/crypto/ccp/ccp-crypto-main.c
+++ b/drivers/crypto/ccp/ccp-crypto-main.c
@@ -17,6 +17,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "ccp-crypto.h"
 
@@ -37,10 +38,15 @@
 module_param(des3_disable, uint, 0444);
 MODULE_PARM_DESC(des3_disable, "Disable use of 3DES - any non-zero value");
 
+static unsigned int rsa_disable;
+module_param(rsa_disable, uint, 0444);
+MODULE_PARM_DESC(rsa_disable, "Disable use of RSA - any non-zero value");
+
 /* List heads for the supported algorithms */
 static LIST_HEAD(hash_algs);
 static LIST_HEAD(cipher_algs);
 static LIST_HEAD(aead_algs);
+static LIST_HEAD(akcipher_algs);
 
 /* For any tfm, requests for that tfm must be returned on the order
  * received.  With multiple queues available, the CCP can process more
@@ -358,6 +364,14 @@ static int ccp_register_algs(void)
return ret;
}
 
+   if (!rsa_disable) {
+   ret = ccp_register_rsa_algs(&akcipher_algs);
+   if (ret) {
+   rsa_disable = 1;
+   return ret;
+   }
+   }
+
return 0;
 }
 
@@ -366,6 +380,7 @@ static void ccp_unregister_algs(void)
struct ccp_crypto_ahash_alg *ahash_alg, *ahash_tmp;
struct ccp_crypto_ablkcipher_alg *ablk_alg, *ablk_tmp;
struct ccp_crypto_aead *aead_alg, *aead_tmp;
+   struct ccp_crypto_akcipher_alg *akc_alg, *akc_tmp;
 
	list_for_each_entry_safe(ahash_alg, ahash_tmp, &hash_algs, entry) {
		crypto_unregister_ahash(&ahash_alg->alg);
@@ -384,6 +399,12 @@ static void ccp_unregister_algs(void)
		list_del(&aead_alg->entry);
kfree(aead_alg);
}
+
+	list_for_each_entry_safe(akc_alg, akc_tmp, &akcipher_algs, entry) {
+		crypto_unregister_akcipher(&akc_alg->alg);
+		list_del(&akc_alg->entry);
+		kfree(akc_alg);
+   }
 }
 
 static int ccp_crypto_init(void)
diff --git a/drivers/crypto/ccp/ccp-crypto-rsa.c 
b/drivers/crypto/ccp/ccp-crypto-rsa.c
new file mode 100644
index ..4a2a71463594
--- /dev/null
+++ b/drivers/crypto/ccp/ccp-crypto-rsa.c
@@ -0,0 +1,286 @@
+/*
+ * AMD Cryptographic Coprocessor (CCP) RSA crypto API support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Gary R Hook 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ccp-crypto.h"
+
+static inline struct akcipher_request *akcipher_request_cast(
+   struct crypto_async_request *req)
+{
+   return container_of(req, struct akcipher_request, base);
+}
+
+static int ccp_rsa_complete(struct crypto_async_request *async_req, int ret)
+{
+   struct akcipher_request *req = akcipher_request_cast(async_req);
+   struct ccp_rsa_req_ctx *rctx = akcipher_request_ctx(req);
+
+   if (!ret)
+   req->dst_len = rctx->cmd.u.rsa.key_size >> 3;
+
+   ret = 0;
+
+   return ret;
+}
+
+static unsigned int ccp_rsa_maxsize(struct crypto_akcipher *tfm)
+{
+   return CCP_RSA_MAXMOD;
+}
+
+static int ccp_rsa_crypt(struct akcipher_request *req, bool encrypt)
+{
+   struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+   struct ccp_ctx *ctx = akcipher_tfm_ctx(tfm);
+   struct ccp_rsa_req_ctx *rctx = akcipher_request_ctx(req);
+   int ret = 0;
+
+	memset(&rctx->cmd, 0, sizeof(rctx->cmd));
+	INIT_LIST_HEAD(&rctx->cmd.entry);
+   rctx->cmd.engine = CCP_ENGINE_RSA;
+
+   rctx->cmd.u.rsa.key_size = ctx->u.rsa.key_len; /* in bits */
+   if (encrypt) {
+   rctx->cmd.u.rsa.exp = 

[PATCH 1/4] crypto: ccp - Fix base RSA function for version 5 CCPs

2017-06-21 Thread Gary R Hook
Version 5 devices have requirements for buffer lengths, as well as
parameter format (e.g. bits vs. bytes). Fix the base CCP driver
code to meet the requirements of all supported versions.

Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/ccp-dev-v5.c |   10 ++--
 drivers/crypto/ccp/ccp-ops.c|   95 ---
 2 files changed, 64 insertions(+), 41 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-dev-v5.c b/drivers/crypto/ccp/ccp-dev-v5.c
index b10d2d2075cb..632518efd685 100644
--- a/drivers/crypto/ccp/ccp-dev-v5.c
+++ b/drivers/crypto/ccp/ccp-dev-v5.c
@@ -469,7 +469,7 @@ static int ccp5_perform_rsa(struct ccp_op *op)
	CCP5_CMD_PROT(&desc) = 0;

	function.raw = 0;
-	CCP_RSA_SIZE(&function) = op->u.rsa.mod_size >> 3;
+	CCP_RSA_SIZE(&function) = (op->u.rsa.mod_size + 7) >> 3;
	CCP5_CMD_FUNCTION(&desc) = function.raw;

	CCP5_CMD_LEN(&desc) = op->u.rsa.input_len;
@@ -484,10 +484,10 @@ static int ccp5_perform_rsa(struct ccp_op *op)
	CCP5_CMD_DST_HI(&desc) = ccp_addr_hi(&op->dst.u.dma);
	CCP5_CMD_DST_MEM(&desc) = CCP_MEMTYPE_SYSTEM;

-	/* Exponent is in LSB memory */
-	CCP5_CMD_KEY_LO(&desc) = op->sb_key * LSB_ITEM_SIZE;
-	CCP5_CMD_KEY_HI(&desc) = 0;
-	CCP5_CMD_KEY_MEM(&desc) = CCP_MEMTYPE_SB;
+	/* Key (Exponent) is in external memory */
+	CCP5_CMD_KEY_LO(&desc) = ccp_addr_lo(&op->exp.u.dma);
+	CCP5_CMD_KEY_HI(&desc) = ccp_addr_hi(&op->exp.u.dma);
+	CCP5_CMD_KEY_MEM(&desc) = CCP_MEMTYPE_SYSTEM;

	return ccp5_do_cmd(&desc, op->cmd_q);
 }
diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index c0dfdacbdff5..11155e52c52c 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -1731,10 +1731,10 @@ static int ccp_run_sha_cmd(struct ccp_cmd_queue *cmd_q, 
struct ccp_cmd *cmd)
 static int ccp_run_rsa_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
 {
	struct ccp_rsa_engine *rsa = &cmd->u.rsa;
-   struct ccp_dm_workarea exp, src;
-   struct ccp_data dst;
+   struct ccp_dm_workarea exp, src, dst;
struct ccp_op op;
unsigned int sb_count, i_len, o_len;
+   unsigned int key_size_bytes;
int ret;
 
if (rsa->key_size > CCP_RSA_MAX_WIDTH)
@@ -1743,31 +1743,41 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue *cmd_q, 
struct ccp_cmd *cmd)
if (!rsa->exp || !rsa->mod || !rsa->src || !rsa->dst)
return -EINVAL;
 
-   /* The RSA modulus must precede the message being acted upon, so
-* it must be copied to a DMA area where the message and the
-* modulus can be concatenated.  Therefore the input buffer
-* length required is twice the output buffer length (which
-* must be a multiple of 256-bits).
-*/
-   o_len = ((rsa->key_size + 255) / 256) * 32;
-   i_len = o_len * 2;
-
-   sb_count = o_len / CCP_SB_BYTES;
-
	memset(&op, 0, sizeof(op));
op.cmd_q = cmd_q;
-   op.jobid = ccp_gen_jobid(cmd_q->ccp);
-   op.sb_key = cmd_q->ccp->vdata->perform->sballoc(cmd_q, sb_count);
+   op.jobid = CCP_NEW_JOBID(cmd_q->ccp);
 
-   if (!op.sb_key)
-   return -EIO;
+   /* Compute o_len, i_len in bytes. */
+   if (cmd_q->ccp->vdata->version < CCP_VERSION(5, 0)) {
+   /* The RSA modulus must precede the message being acted upon, so
+* it must be copied to a DMA area where the message and the
+* modulus can be concatenated.  Therefore the input buffer
+* length required is twice the output buffer length (which
+* must be a multiple of 256-bits). sb_count is the
+* number of storage block slots required for the modulus
+*/
+   key_size_bytes = (rsa->key_size + 7) >> 3;
+   o_len = ((rsa->key_size + 255) / 256) * CCP_SB_BYTES;
+   i_len = key_size_bytes * 2;
+
+   sb_count = o_len / CCP_SB_BYTES;
+
+   op.sb_key = cmd_q->ccp->vdata->perform->sballoc(cmd_q,
+   sb_count);
+   if (!op.sb_key)
+   return -EIO;
+   } else {
+   /* A version 5 device allows a modulus size that will not fit
+* in the LSB, so the command will transfer it from memory.
+* But more importantly, the buffer sizes must be a multiple
+* of 32 bytes; rounding up may be required.
+*/
+   key_size_bytes = 32 * ((rsa->key_size + 255) / 256);
+   o_len = key_size_bytes;
+   i_len = o_len * 2; /* bytes */
+   op.sb_key = cmd_q->sb_key;
+   }
 
-   /* The RSA exponent may span multiple (32-byte) SB entries and must
-* be in little endian format. Reverse copy each 32-byte chunk
-* of the exponent (En chunk to E0 chunk, E(n-1) chunk to E1 chunk)
-* and each byte within that chunk and do not 

[PATCH 2/4] crypto: Add akcipher_set_reqsize() function

2017-06-21 Thread Gary R Hook
Signed-off-by: Gary R Hook 
---
 include/crypto/internal/akcipher.h |6 ++
 1 file changed, 6 insertions(+)

diff --git a/include/crypto/internal/akcipher.h 
b/include/crypto/internal/akcipher.h
index 479a0078f0f7..805686ba2be4 100644
--- a/include/crypto/internal/akcipher.h
+++ b/include/crypto/internal/akcipher.h
@@ -38,6 +38,12 @@ static inline void *akcipher_request_ctx(struct 
akcipher_request *req)
return req->__ctx;
 }
 
+static inline void akcipher_set_reqsize(struct crypto_akcipher *akcipher,
+   unsigned int reqsize)
+{
+   crypto_akcipher_alg(akcipher)->reqsize = reqsize;
+}
+
 static inline void *akcipher_tfm_ctx(struct crypto_akcipher *tfm)
 {
return tfm->base.__crt_ctx;



[PATCH 0/4] Enable full RSA support on CCPs

2017-06-21 Thread Gary R Hook
The following series enables RSA operations on version 5 devices,
adds a set-reqsize function (to provide uniformity with other cipher
APIs), implements akcipher enablement in the crypto layer, and 
makes a tweak for expanded v5 device capabilities.

---

Gary R Hook (4):
  crypto: ccp - Fix base RSA function for version 5 CCPs
  crypto: Add akcipher_set_reqsize() function
  crypto: ccp - Add support for RSA on the CCP
  crypto: ccp - Expand RSA support for a v5 ccp


 drivers/crypto/ccp/Makefile  |1 
 drivers/crypto/ccp/ccp-crypto-main.c |   21 ++
 drivers/crypto/ccp/ccp-crypto-rsa.c  |  289 ++
 drivers/crypto/ccp/ccp-crypto.h  |   32 
 drivers/crypto/ccp/ccp-debugfs.c |1 
 drivers/crypto/ccp/ccp-dev-v3.c  |1 
 drivers/crypto/ccp/ccp-dev-v5.c  |   12 +
 drivers/crypto/ccp/ccp-dev.c |1 
 drivers/crypto/ccp/ccp-dev.h |2 
 drivers/crypto/ccp/ccp-ops.c |   98 +++-
 include/crypto/internal/akcipher.h   |6 +
 include/linux/ccp.h  |1 
 12 files changed, 421 insertions(+), 44 deletions(-)
 create mode 100644 drivers/crypto/ccp/ccp-crypto-rsa.c

--
Signature


[PATCH] crypto: ccp - Provide a roll-back method for debugfs setup

2017-06-21 Thread Gary R Hook
Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/ccp-debugfs.c |   18 +-
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-debugfs.c b/drivers/crypto/ccp/ccp-debugfs.c
index 3cd6c83754e0..99aba1622613 100644
--- a/drivers/crypto/ccp/ccp-debugfs.c
+++ b/drivers/crypto/ccp/ccp-debugfs.c
@@ -291,6 +291,7 @@ void ccp5_debugfs_setup(struct ccp_device *ccp)
struct dentry *debugfs_q_instance;
struct dentry *debugfs_q_stats;
unsigned long flags;
+   int rc = 0;
int i;
 
if (!debugfs_initialized())
@@ -305,19 +306,19 @@ void ccp5_debugfs_setup(struct ccp_device *ccp)
 
ccp->debugfs_instance = debugfs_create_dir(ccp->name, ccp_debugfs_dir);
if (!ccp->debugfs_instance)
-   return;
+   goto err;
 
debugfs_info = debugfs_create_file("info", 0400,
   ccp->debugfs_instance, ccp,
				   &ccp5_debugfs_info_ops);
if (!debugfs_info)
-   return;
+   goto err;
 
debugfs_stats = debugfs_create_file("stats", 0600,
ccp->debugfs_instance, ccp,
				    &ccp5_debugfs_stats_ops);
if (!debugfs_stats)
-   return;
+   goto err;
 
for (i = 0; i < ccp->cmd_q_count; i++) {
		cmd_q = &ccp->cmd_q[i];
@@ -327,15 +328,22 @@ void ccp5_debugfs_setup(struct ccp_device *ccp)
debugfs_q_instance =
debugfs_create_dir(name, ccp->debugfs_instance);
if (!debugfs_q_instance)
-   return;
+   goto err;
 
debugfs_q_stats =
debugfs_create_file("stats", 0600,
debugfs_q_instance, cmd_q,
					    &ccp5_debugfs_queue_ops);
if (!debugfs_q_stats)
-   return;
+   goto err;
}
+   return;
+
+err:
+   write_lock_irqsave(&ccp5_debugfs_lock, flags);
+   debugfs_remove_recursive(ccp_debugfs_dir);
+   ccp_debugfs_dir = NULL;
+   write_unlock_irqrestore(&ccp5_debugfs_lock, flags);
 }
 
 void ccp5_debugfs_destroy(void)



Re: [kernel-hardening] [PATCH] random: warn when kernel uses unseeded randomness

2017-06-21 Thread Theodore Ts'o
On Wed, Jun 21, 2017 at 04:06:49PM +1000, Michael Ellerman wrote:
> All the distro kernels I'm aware of have DEBUG_KERNEL=y.
> 
> Where all includes at least RHEL, SLES, Fedora, Ubuntu & Debian.
> 
> So it's still essentially default y.
> 
> Emitting *one* warning by default would be reasonable. That gives users
> who are interested something to chase, they can then turn on the option
> to get the full story.
> 
> Filling the dmesg buffer with repeated warnings is really not helpful.

I agree completely with all of this.  The following patch replaces the
current topmost patch on the random.git tree:


>From 25b683ee9bd5536807f813efbd19809333461f89 Mon Sep 17 00:00:00 2001
From: Theodore Ts'o 
Date: Thu, 8 Jun 2017 04:16:59 -0400
Subject: [PATCH] random: suppress spammy warnings about unseeded randomness

Unfortunately, on some models of some architectures getting a fully
seeded CRNG is extremely difficult, and so this can result in dmesg
getting spammed for a surprisingly long time.  This is really bad from
a security perspective, and so architecture maintainers need to do
what they can to get the CRNG seeded sooner after the system is
booted.  However, users can't do anything actionable to address this,
and spamming the kernel message log only annoys people.

For developers who want to work on improving this situation,
CONFIG_WARN_UNSEEDED_RANDOM has been renamed to
CONFIG_WARN_ALL_UNSEEDED_RANDOM.  By default the kernel will always
print the first use of unseeded randomness.  This way, hopefully the
security obsessed will be happy that there is _some_ indication when
the kernel boots there may be a potential issue with that architecture
or subarchitecture.  To see all uses of unseeded randomness,
developers can enable CONFIG_WARN_ALL_UNSEEDED_RANDOM.

Signed-off-by: Theodore Ts'o 
---
 drivers/char/random.c | 45 ++---
 lib/Kconfig.debug | 24 ++--
 2 files changed, 48 insertions(+), 21 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index fa5bbd5a7ca0..7405c914bbcf 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1466,6 +1466,30 @@ static ssize_t extract_entropy_user(struct entropy_store 
*r, void __user *buf,
return ret;
 }
 
+#define warn_unseeded_randomness(previous) \
+   _warn_unseeded_randomness(__func__, (void *) _RET_IP_, (previous))
+
+static void _warn_unseeded_randomness(const char *func_name, void *caller,
+ void **previous)
+{
+#ifdef CONFIG_WARN_ALL_UNSEEDED_RANDOM
+   const bool print_once = false;
+#else
+   static bool print_once __read_mostly;
+#endif
+
+   if (print_once ||
+   crng_ready() ||
+   (previous && (caller == READ_ONCE(*previous))))
+   return;
+   WRITE_ONCE(*previous, caller);
+#ifndef CONFIG_WARN_ALL_UNSEEDED_RANDOM
+   print_once = true;
+#endif
+   pr_notice("random: %s called from %pF with crng_init=%d\n",
+ func_name, caller, crng_init);
+}
+
 /*
  * This function is the exported kernel interface.  It returns some
  * number of good random numbers, suitable for key generation, seeding
@@ -1479,12 +1503,9 @@ static ssize_t extract_entropy_user(struct entropy_store 
*r, void __user *buf,
 void get_random_bytes(void *buf, int nbytes)
 {
__u8 tmp[CHACHA20_BLOCK_SIZE];
+   static void *previous;
 
-#ifdef CONFIG_WARN_UNSEEDED_RANDOM
-   if (!crng_ready())
-   printk(KERN_NOTICE "random: %pF get_random_bytes called "
-  "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
-#endif
+   warn_unseeded_randomness(&previous);
trace_get_random_bytes(nbytes, _RET_IP_);
 
while (nbytes >= CHACHA20_BLOCK_SIZE) {
@@ -2064,6 +2085,7 @@ u64 get_random_u64(void)
bool use_lock = READ_ONCE(crng_init) < 2;
unsigned long flags = 0;
struct batched_entropy *batch;
+   static void *previous;
 
 #if BITS_PER_LONG == 64
	if (arch_get_random_long((unsigned long *)&ret))
@@ -2074,11 +2096,7 @@ u64 get_random_u64(void)
return ret;
 #endif
 
-#ifdef CONFIG_WARN_UNSEEDED_RANDOM
-   if (!crng_ready())
-   printk(KERN_NOTICE "random: %pF get_random_u64 called "
-  "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
-#endif
+   warn_unseeded_randomness(&previous);
 
	batch = &get_cpu_var(batched_entropy_u64);
if (use_lock)
@@ -2102,15 +2120,12 @@ u32 get_random_u32(void)
bool use_lock = READ_ONCE(crng_init) < 2;
unsigned long flags = 0;
struct batched_entropy *batch;
+   static void *previous;
 
	if (arch_get_random_int(&ret))
return ret;
 
-#ifdef CONFIG_WARN_UNSEEDED_RANDOM
-   if (!crng_ready())
-   printk(KERN_NOTICE "random: %pF get_random_u32 called "
-  "with crng_init = %d\n", (void *) _RET_IP_, crng_init);

[PATCH v10 0/2] crypto: AF_ALG memory management fix

2017-06-21 Thread Stephan Müller
Hi Herbert,

Changes v10:
- remove hunk in *_poll
- *recvmsg: only return error in case of -EIOCBQUEUED and -EBADMSG
  -- for any other processing error during recvmsg, the processed number of
  bytes are returned and the processing is terminated

With these changes you will see a lot of code duplication,
as I deliberately tried to use the same struct and variable names,
the same function names, and even the same order of functions.
If you agree to this patch, I volunteer to provide a follow-up
patch that extracts the duplicated code into common
functions.

Please find attached memory management updates to

- simplify the code: the old AIO memory management is very
  complex and seemingly very fragile -- the update now
  eliminates all reported bugs in the skcipher and AEAD
  interfaces which allowed the kernel to be crashed by
  an unprivileged user

- streamline the code: there is one code path for AIO and sync
  operation; the code between algif_skcipher and algif_aead
  is very similar (if that patch set is accepted, I volunteer
  to reduce code duplication by moving service operations
  into af_alg.c and to further unify the TX SGL handling)

- unify the AIO and sync operation which only differ in the
  kernel crypto API callback and whether to wait for the
  crypto operation or not

- fix all reported bugs regarding the handling of multiple
  IOCBs.

The following testing was performed:

- stress testing to verify that no memleaks exist

- testing using Tadeusz Struck AIO test tool (see
  https://github.com/tstruk/afalg_async_test) -- the AEAD test
  is not applicable any more due to the changed user space
  interface; the skcipher test works once the user space
  interface change is honored in the test code

- using the libkcapi test suite, all tests including the
  originally failing ones (AIO with multiple IOCBs) work now --
  the current libkcapi code artificially limits the AEAD
  operation to one IOCB. After altering the libkcapi code
  to allow multiple IOCBs, the testing works flawless.

Stephan Mueller (2):
  crypto: skcipher AF_ALG - overhaul memory management
  crypto: aead AF_ALG - overhaul memory management

 crypto/algif_aead.c | 768 
 crypto/algif_skcipher.c | 565 ++-
 2 files changed, 726 insertions(+), 607 deletions(-)

-- 
2.9.4




[PATCH v10 1/2] crypto: skcipher AF_ALG - overhaul memory management

2017-06-21 Thread Stephan Müller
The updated memory management is described in the top part of the code.
As one benefit of the changed memory management, the AIO and synchronous
operation is now implemented in one common function. The AF_ALG
operation uses the async kernel crypto API interface for each cipher
operation. Thus, the only difference between the AIO and sync operation
types visible from user space is:

1. the callback function to be invoked when the asynchronous operation
   is completed

2. whether to wait for the completion of the kernel crypto API operation
   or not

In addition, the code structure is adjusted to match the structure of
algif_aead for easier code assessment.

The user space interface changed slightly as follows: the old AIO
operation returned zero upon success and < 0 in case of an error to user
space. As all other AF_ALG interfaces (including the sync skcipher
interface) returned the number of processed bytes upon success and < 0
in case of an error, the new skcipher interface (regardless of AIO or
sync) returns the number of processed bytes in case of success.

Signed-off-by: Stephan Mueller 
---
 crypto/algif_skcipher.c | 565 
 1 file changed, 282 insertions(+), 283 deletions(-)

diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 43839b0..015d84b 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -10,6 +10,21 @@
  * Software Foundation; either version 2 of the License, or (at your option)
  * any later version.
  *
+ * The following concept of the memory management is used:
+ *
+ * The kernel maintains two SGLs, the TX SGL and the RX SGL. The TX SGL is
+ * filled by user space with the data submitted via sendpage/sendmsg. Filling
+ * up the TX SGL does not cause a crypto operation -- the data will only be
+ * tracked by the kernel. Upon receipt of one recvmsg call, the caller must
+ * provide a buffer which is tracked with the RX SGL.
+ *
+ * During the processing of the recvmsg operation, the cipher request is
+ * allocated and prepared. As part of the recvmsg operation, the processed
+ * TX buffers are extracted from the TX SGL into a separate SGL.
+ *
+ * After the completion of the crypto operation, the RX SGL and the cipher
+ * request are released. The extracted TX SGL parts are released together
+ * with the RX SGL.
  */
 
 #include 
@@ -24,109 +39,94 @@
 #include 
 #include 
 
-struct skcipher_sg_list {
+struct skcipher_tsgl {
struct list_head list;
-
int cur;
-
struct scatterlist sg[0];
 };
 
+struct skcipher_rsgl {
+   struct af_alg_sgl sgl;
+   struct list_head list;
+   size_t sg_num_bytes;
+};
+
+struct skcipher_async_req {
+   struct kiocb *iocb;
+   struct sock *sk;
+
+   struct skcipher_rsgl first_sgl;
+   struct list_head rsgl_list;
+
+   struct scatterlist *tsgl;
+   unsigned int tsgl_entries;
+
+   unsigned int areqlen;
+   struct skcipher_request req;
+};
+
 struct skcipher_tfm {
struct crypto_skcipher *skcipher;
bool has_key;
 };
 
 struct skcipher_ctx {
-   struct list_head tsgl;
-   struct af_alg_sgl rsgl;
+   struct list_head tsgl_list;
 
void *iv;
 
struct af_alg_completion completion;
 
-   atomic_t inflight;
size_t used;
+   size_t rcvused;
 
-   unsigned int len;
bool more;
bool merge;
bool enc;
 
-   struct skcipher_request req;
-};
-
-struct skcipher_async_rsgl {
-   struct af_alg_sgl sgl;
-   struct list_head list;
-};
-
-struct skcipher_async_req {
-   struct kiocb *iocb;
-   struct skcipher_async_rsgl first_sgl;
-   struct list_head list;
-   struct scatterlist *tsg;
-   atomic_t *inflight;
-   struct skcipher_request req;
+   unsigned int len;
 };
 
-#define MAX_SGL_ENTS ((4096 - sizeof(struct skcipher_sg_list)) / \
+#define MAX_SGL_ENTS ((4096 - sizeof(struct skcipher_tsgl)) / \
  sizeof(struct scatterlist) - 1)
 
-static void skcipher_free_async_sgls(struct skcipher_async_req *sreq)
+static inline int skcipher_sndbuf(struct sock *sk)
 {
-   struct skcipher_async_rsgl *rsgl, *tmp;
-   struct scatterlist *sgl;
-   struct scatterlist *sg;
-   int i, n;
-
-	list_for_each_entry_safe(rsgl, tmp, &sreq->list, list) {
-		af_alg_free_sg(&rsgl->sgl);
-		if (rsgl != &sreq->first_sgl)
-   kfree(rsgl);
-   }
-   sgl = sreq->tsg;
-   n = sg_nents(sgl);
-   for_each_sg(sgl, sg, n, i)
-   put_page(sg_page(sg));
+   struct alg_sock *ask = alg_sk(sk);
+   struct skcipher_ctx *ctx = ask->private;
 
-   kfree(sreq->tsg);
+   return max_t(int, max_t(int, sk->sk_sndbuf & PAGE_MASK, PAGE_SIZE) -
+ ctx->used, 0);
 }
 
-static void skcipher_async_cb(struct crypto_async_request *req, int err)
+static inline bool skcipher_writable(struct sock *sk)
 {
-  

[PATCH v10 2/2] crypto: aead AF_ALG - overhaul memory management

2017-06-21 Thread Stephan Müller
The updated memory management is described in the top part of the code.
As one benefit of the changed memory management, the AIO and synchronous
operation is now implemented in one common function. The AF_ALG
operation uses the async kernel crypto API interface for each cipher
operation. Thus, the only difference between the AIO and sync operation
types visible from user space is:

1. the callback function to be invoked when the asynchronous operation
   is completed

2. whether to wait for the completion of the kernel crypto API operation
   or not

The change includes the overhaul of the TX and RX SGL handling. The TX
SGL holding the data sent from user space to the kernel is now dynamic
similar to algif_skcipher. This dynamic nature allows a continuous
operation of a thread sending data and a second thread receiving the
data. These threads do not need to synchronize, as the kernel consumes
as much data from the TX SGL as is needed to fill the RX SGL.

The caller reading the data from the kernel defines the amount of data
to be processed. As the interface covers AEAD (authenticating)
ciphers, the reader must provide a buffer of the correct size; thus
the reader defines the encryption size.

Signed-off-by: Stephan Mueller 
---
 crypto/algif_aead.c | 768 ++--
 1 file changed, 444 insertions(+), 324 deletions(-)

diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 8af664f..dba1f31 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -5,12 +5,26 @@
  *
  * This file provides the user-space API for AEAD ciphers.
  *
- * This file is derived from algif_skcipher.c.
- *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License as published by the Free
  * Software Foundation; either version 2 of the License, or (at your option)
  * any later version.
+ *
+ * The following concept of the memory management is used:
+ *
+ * The kernel maintains two SGLs, the TX SGL and the RX SGL. The TX SGL is
+ * filled by user space with the data submitted via sendpage/sendmsg. Filling
+ * up the TX SGL does not cause a crypto operation -- the data will only be
+ * tracked by the kernel. Upon receipt of one recvmsg call, the caller must
+ * provide a buffer which is tracked with the RX SGL.
+ *
+ * During the processing of the recvmsg operation, the cipher request is
+ * allocated and prepared. As part of the recvmsg operation, the processed
+ * TX buffers are extracted from the TX SGL into a separate SGL.
+ *
+ * After the completion of the crypto operation, the RX SGL and the cipher
+ * request are released. The extracted TX SGL parts are released together with
+ * the RX SGL release.
  */
 
 #include 
@@ -25,24 +39,32 @@
 #include 
 #include 
 
-struct aead_sg_list {
-   unsigned int cur;
-   struct scatterlist sg[ALG_MAX_PAGES];
+struct aead_tsgl {
+   struct list_head list;
+   unsigned int cur;   /* Last processed SG entry */
+   struct scatterlist sg[0];   /* Array of SGs forming the SGL */
 };
 
-struct aead_async_rsgl {
+struct aead_rsgl {
struct af_alg_sgl sgl;
struct list_head list;
+   size_t sg_num_bytes;/* Bytes of data in that SGL */
 };
 
 struct aead_async_req {
-   struct scatterlist *tsgl;
-   struct aead_async_rsgl first_rsgl;
-   struct list_head list;
struct kiocb *iocb;
struct sock *sk;
-   unsigned int tsgls;
-   char iv[];
+
+   struct aead_rsgl first_rsgl;/* First RX SG */
+   struct list_head rsgl_list; /* Track RX SGs */
+
+   struct scatterlist *tsgl;   /* priv. TX SGL of buffers to process */
+   unsigned int tsgl_entries;  /* number of entries in priv. TX SGL */
+
+   unsigned int outlen;/* Filled output buf length */
+
+   unsigned int areqlen;   /* Length of this data struct */
+   struct aead_request aead_req;   /* req ctx trails this struct */
 };
 
 struct aead_tfm {
@@ -51,25 +73,26 @@ struct aead_tfm {
 };
 
 struct aead_ctx {
-   struct aead_sg_list tsgl;
-   struct aead_async_rsgl first_rsgl;
-   struct list_head list;
+   struct list_head tsgl_list; /* Link to TX SGL */
 
void *iv;
+   size_t aead_assoclen;
 
-   struct af_alg_completion completion;
+   struct af_alg_completion completion;/* sync work queue */
 
-   unsigned long used;
+   size_t used;/* TX bytes sent to kernel */
+   size_t rcvused; /* total RX bytes to be processed by kernel */
 
-   unsigned int len;
-   bool more;
-   bool merge;
-   bool enc;
+   bool more;  /* More data to be expected? */
+   bool merge; /* Merge new data into existing SG */
+   bool enc;   /* Crypto operation: enc, dec */
 
-   size_t aead_assoclen;
-   struct aead_request 

XFRM Stats

2017-06-21 Thread Raj Ammanur
Hi Crypto/Xfrm Team,

I was wondering if there has been any discussion in the past
about adding stats in Xfrm to count the packets going in/out of
this sub-system? Right now we only have error stats.

thanks
--Raj


Re: [PATCH v2 6/6] ima: Support module-style appended signatures for appraisal

2017-06-21 Thread Thiago Jung Bauermann

Hello Mimi,

Thanks for your review, and for queuing the other patches in this series.

Mimi Zohar  writes:
> On Wed, 2017-06-07 at 22:49 -0300, Thiago Jung Bauermann wrote:
>> This patch introduces the modsig keyword to the IMA policy syntax to
>> specify that a given hook should expect the file to have the IMA signature
>> appended to it.
>
> Thank you, Thiago. Appended signatures seem to be working properly now
> with multiple keys on the IMA keyring.

Great news!

> The length of this patch description is a good indication that this
> patch needs to be broken up for easier review. A few
> comments/suggestions inline below.

Ok, I will try to break it up, and also patch 5 as you suggested.

>> diff --git a/security/integrity/digsig.c b/security/integrity/digsig.c
>> index 06554c448dce..9190c9058f4f 100644
>> --- a/security/integrity/digsig.c
>> +++ b/security/integrity/digsig.c
>> @@ -48,11 +48,10 @@ static bool init_keyring __initdata;
>>  #define restrict_link_to_ima restrict_link_by_builtin_trusted
>>  #endif
>> 
>> -int integrity_digsig_verify(const unsigned int id, const char *sig, int 
>> siglen,
>> -const char *digest, int digestlen)
>> +struct key *integrity_keyring_from_id(const unsigned int id)
>>  {
>> -if (id >= INTEGRITY_KEYRING_MAX || siglen < 2)
>> -return -EINVAL;
>> +if (id >= INTEGRITY_KEYRING_MAX)
>> +return ERR_PTR(-EINVAL);
>> 
>
> When splitting up this patch, the addition of this new function could
> be a separate patch. The patch description would explain the need for
> a new function.

Ok, will do for v3.

>> @@ -229,10 +234,14 @@ int ima_appraise_measurement(enum ima_hooks func,
>>  goto out;
>>  }
>> 
>> -status = evm_verifyxattr(dentry, XATTR_NAME_IMA, xattr_value, rc, iint);
>> -if ((status != INTEGRITY_PASS) && (status != INTEGRITY_UNKNOWN)) {
>> -if ((status == INTEGRITY_NOLABEL)
>> -|| (status == INTEGRITY_NOXATTRS))
>> +/* Appended signatures aren't protected by EVM. */
>> +status = evm_verifyxattr(dentry, XATTR_NAME_IMA,
>> + xattr_value->type == IMA_MODSIG ?
>> + NULL : xattr_value, rc, iint);
>> +if (status != INTEGRITY_PASS && status != INTEGRITY_UNKNOWN &&
>> +!(xattr_value->type == IMA_MODSIG &&
>> +  (status == INTEGRITY_NOLABEL || status == INTEGRITY_NOXATTRS))) {
>
> This was messy to begin with, and now it is even more messy. For
> appended signatures, we're only interested in INTEGRITY_FAIL. Maybe
> leave the existing "if" clause alone and define a new "if" clause.

Ok, is this what you had in mind?

@@ -229,8 +237,14 @@ int ima_appraise_measurement(enum ima_hooks func,
goto out;
}
 
-   status = evm_verifyxattr(dentry, XATTR_NAME_IMA, xattr_value, rc, iint);
-   if ((status != INTEGRITY_PASS) && (status != INTEGRITY_UNKNOWN)) {
+   /* Appended signatures aren't protected by EVM. */
+   status = evm_verifyxattr(dentry, XATTR_NAME_IMA,
+xattr_value->type == IMA_MODSIG ?
+NULL : xattr_value, rc, iint);
+   if (xattr_value->type == IMA_MODSIG && status == INTEGRITY_FAIL) {
+   cause = "invalid-HMAC";
+   goto out;
+   } else if (status != INTEGRITY_PASS && status != INTEGRITY_UNKNOWN) {
if ((status == INTEGRITY_NOLABEL)
|| (status == INTEGRITY_NOXATTRS))
cause = "missing-HMAC";

>> @@ -267,11 +276,18 @@ int ima_appraise_measurement(enum ima_hooks func,
>>  status = INTEGRITY_PASS;
>>  break;
>>  case EVM_IMA_XATTR_DIGSIG:
>> +case IMA_MODSIG:
>>  iint->flags |= IMA_DIGSIG;
>> -rc = integrity_digsig_verify(INTEGRITY_KEYRING_IMA,
>> - (const char *)xattr_value, rc,
>> - iint->ima_hash->digest,
>> - iint->ima_hash->length);
>> +
>> +if (xattr_value->type == EVM_IMA_XATTR_DIGSIG)
>> +rc = integrity_digsig_verify(INTEGRITY_KEYRING_IMA,
>> + (const char *)xattr_value,
>> + rc, iint->ima_hash->digest,
>> + iint->ima_hash->length);
>> +else
>> +rc = ima_modsig_verify(INTEGRITY_KEYRING_IMA,
>> +   xattr_value);
>> +
>
> Perhaps allowing IMA_MODSIG to flow into EVM_IMA_XATTR_DIGSIG on
> failure, would help restore process_measurements() to the way it was.
> Further explanation below.

It's not possible to simply flow into EVM_IMA_XATTR_DIGSIG on failure
because after calling ima_read_xattr we need to re-run all the logic
before the switch 

[RFC PATCH] gcm - fix setkey cache coherence issues

2017-06-21 Thread Radu Solea
Generic GCM is likely to end up using a hardware accelerator to do
part of the job. Allocating hash, iv and result in one contiguous memory
area increases the risk of DMA-mapping multiple ranges onto the same
cacheline. Having DMA-written and CPU-written data on the same cacheline
will also cause coherence issues.

Signed-off-by: Radu Solea 
---
Hi!

I've encountered cache coherence issues when using GCM with CAAM, and this was
one way of fixing them, but it has its drawbacks. Another would be to allocate
each element separately instead of all at once, but that only decreases the
likelihood of this happening. Does anyone know of a better way of fixing this?

Thanks,
Radu.

 crypto/gcm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/crypto/gcm.c b/crypto/gcm.c
index b7ad808..657eefe 100644
--- a/crypto/gcm.c
+++ b/crypto/gcm.c
@@ -117,9 +117,9 @@ static int crypto_gcm_setkey(struct crypto_aead *aead, 
const u8 *key,
struct crypto_skcipher *ctr = ctx->ctr;
struct {
be128 hash;
-   u8 iv[16];
+   u8 iv[16] cacheline_aligned;
 
-   struct crypto_gcm_setkey_result result;
+   struct crypto_gcm_setkey_result result cacheline_aligned;
 
struct scatterlist sg[1];
struct skcipher_request req;
-- 
2.7.4



Re: [PATCH v9 1/2] crypto: skcipher AF_ALG - overhaul memory management

2017-06-21 Thread Stephan Müller
Am Dienstag, 20. Juni 2017, 05:10:42 CEST schrieb Herbert Xu:

Hi Herbert,

> > +   int err = _skcipher_recvmsg(sock, msg, ignored, flags);
> > +
> > +   /*
> > +* This error covers -EIOCBQUEUED which implies that we can
> > +* only handle one AIO request. If the caller wants to have
> > +* multiple AIO requests in parallel, he must make multiple
> > +* separate AIO calls.
> > +*/
> > +   if (err < 0) {
> > +   ret = err;
> > +   goto out;
> 
> This looks like a semantic change.  The previous code would return
> the number of bytes already successfully processed in case of a
> subsequent error.  With your new code you will always return the
> error.

In the current code, the synchronous path returns the number of processed
bytes, whereas the async path returns -EIOCBQUEUED when this error occurs,
without returning the processed bytes.

Thus, would you like to change that code into

if (err < 0) {
if (err == -EIOCBQUEUED)
ret = err;

goto out;
}

?


> 
> > @@ -724,10 +737,9 @@ static unsigned int skcipher_poll(struct file *file,
> > struct socket *sock,> 
> > struct sock *sk = sock->sk;
> > struct alg_sock *ask = alg_sk(sk);
> > struct skcipher_ctx *ctx = ask->private;
> > 
> > -   unsigned int mask;
> > +   unsigned int mask = 0;
> > 
> > sock_poll_wait(file, sk_sleep(sk), wait);
> > 
> > -   mask = 0;
> > 
> > if (ctx->used)
> > 
> > mask |= POLLIN | POLLRDNORM;
> 
> Please remove this hunk as it has nothing to do with this patch.

Removed.

Thanks

Ciao
Stephan


Re: [PATCH] crypto: sun4i-ss: support the Security System PRNG

2017-06-21 Thread Maxime Ripard
On Tue, Jun 20, 2017 at 01:45:36PM +0200, Corentin Labbe wrote:
> On Tue, Jun 20, 2017 at 11:59:47AM +0200, Maxime Ripard wrote:
> > Hi,
> > 
> > On Tue, Jun 20, 2017 at 10:58:19AM +0200, Corentin Labbe wrote:
> > > The Security System have a PRNG, this patch add support for it via
> > > crypto_rng.
> > 
> > This might be a dumb question, but is the CRYPTO_RNG code really
> > supposed to be used with PRNG?
> > 
> 
> Yes, see recently added drivers/crypto/exynos-rng.c

It's still not really clear from the commit log (if you're talking
about c46ea13f55b6) whether, and why, using the RNG code for a PRNG is a
good idea.

Maxime

-- 
Maxime Ripard, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com




Re: [kernel-hardening] [PATCH] random: warn when kernel uses unseeded randomness

2017-06-21 Thread Michael Ellerman
"Jason A. Donenfeld"  writes:

> This enables an important dmesg notification about when drivers have
> used the crng without it being seeded first. Prior, these errors would
> occur silently, and so there hasn't been a great way of diagnosing these
> types of bugs for obscure setups. By adding this as a config option, we
> can leave it on by default, so that we learn where these issues happen,
> in the field, will still allowing some people to turn it off, if they
> really know what they're doing and do not want the log entries.
>
> However, we don't leave it _completely_ by default. An earlier version
> of this patch simply had `default y`. I'd really love that, but it turns
> out, this problem with unseeded randomness being used is really quite
> present and is going to take a long time to fix. Thus, as a compromise
> between log-messages-for-all and nobody-knows, this is `default y`,
> except it is also `depends on DEBUG_KERNEL`. This will ensure that the
> curious see the messages while others don't have to.

All the distro kernels I'm aware of have DEBUG_KERNEL=y.

Where all includes at least RHEL, SLES, Fedora, Ubuntu & Debian.

So it's still essentially default y.

Emitting *one* warning by default would be reasonable. That gives users
who are interested something to chase; they can then turn on the option
to get the full story.

Filling the dmesg buffer with repeated warnings is really not helpful.

cheers