[PATCH] crypto: caam - Fix key inlining in AEAD shared descriptors

2014-04-27 Thread Vakul Garg
The variable 'keys_fit_inline' is now initialised correctly to avoid
using its stale value while creating the shared descriptors for
decryption and given-IV encryption.

Signed-off-by: Vakul Garg va...@freescale.com
---
 drivers/crypto/caam/caamalg.c | 15 +--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 5f89125..99fda94 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -209,7 +209,7 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)
	struct aead_tfm *tfm = &aead->base.crt_aead;
	struct caam_ctx *ctx = crypto_aead_ctx(aead);
	struct device *jrdev = ctx->jrdev;
-   bool keys_fit_inline = false;
+   bool keys_fit_inline;
u32 *key_jump_cmd, *jump_cmd, *read_move_cmd, *write_move_cmd;
u32 *desc;
 
@@ -220,6 +220,8 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)
if (DESC_AEAD_NULL_ENC_LEN + DESC_JOB_IO_LEN +
	    ctx->split_key_pad_len <= CAAM_DESC_BYTES_MAX)
keys_fit_inline = true;
+   else
+   keys_fit_inline = false;
 
/* aead_encrypt shared descriptor */
	desc = ctx->sh_desc_enc;
@@ -306,6 +308,8 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)
if (DESC_AEAD_NULL_DEC_LEN + DESC_JOB_IO_LEN +
	    ctx->split_key_pad_len <= CAAM_DESC_BYTES_MAX)
keys_fit_inline = true;
+   else
+   keys_fit_inline = false;
 
	desc = ctx->sh_desc_dec;
 
@@ -399,7 +403,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
	struct aead_tfm *tfm = &aead->base.crt_aead;
	struct caam_ctx *ctx = crypto_aead_ctx(aead);
	struct device *jrdev = ctx->jrdev;
-   bool keys_fit_inline = false;
+   bool keys_fit_inline;
u32 geniv, moveiv;
u32 *desc;
 
@@ -418,6 +422,9 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
	    ctx->split_key_pad_len + ctx->enckeylen <=
	    CAAM_DESC_BYTES_MAX)
keys_fit_inline = true;
+   else
+   keys_fit_inline = false;
+
 
/* aead_encrypt shared descriptor */
	desc = ctx->sh_desc_enc;
@@ -476,6 +483,8 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
	    ctx->split_key_pad_len + ctx->enckeylen <=
	    CAAM_DESC_BYTES_MAX)
keys_fit_inline = true;
+   else
+   keys_fit_inline = false;
 
/* aead_decrypt shared descriptor */
	desc = ctx->sh_desc_dec;
@@ -531,6 +540,8 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
	    ctx->split_key_pad_len + ctx->enckeylen <=
	    CAAM_DESC_BYTES_MAX)
keys_fit_inline = true;
+   else
+   keys_fit_inline = false;
 
/* aead_givencrypt shared descriptor */
	desc = ctx->sh_desc_givenc;
-- 
1.8.1.4
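The bug being fixed follows a general pattern: a flag computed for one
case silently keeps its value for the next. A minimal userspace sketch of
the pattern (hypothetical sizes and function names, not the actual CAAM
code):

```c
#include <assert.h>
#include <stdbool.h>

#define DESC_BYTES_MAX 64  /* hypothetical limit, not the CAAM value */

/* Buggy pattern: 'fit' carries its value from the previous check. */
static void check_both(int enc_len, int dec_len, bool *enc_fit, bool *dec_fit)
{
	bool fit = false;

	if (enc_len <= DESC_BYTES_MAX)
		fit = true;
	*enc_fit = fit;

	/* Bug: if enc fit but dec does not, 'fit' is still true here. */
	if (dec_len <= DESC_BYTES_MAX)
		fit = true;
	*dec_fit = fit;
}

/* Fixed pattern: re-evaluate the flag before each use. */
static void check_both_fixed(int enc_len, int dec_len,
			     bool *enc_fit, bool *dec_fit)
{
	bool fit;

	fit = enc_len <= DESC_BYTES_MAX;
	*enc_fit = fit;

	fit = dec_len <= DESC_BYTES_MAX;
	*dec_fit = fit;
}
```

The patch above applies the same correction by adding an explicit else
branch before each descriptor is built.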

--
To unsubscribe from this list: send the line unsubscribe linux-crypto in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[RFC] /dev/random for in-kernel use

2014-04-27 Thread Stephan Mueller
Hi,

before I start, please allow me to point out that this email is not a
discussion about entropy. There has already been too much such discussion
without any conclusion. This email shall just explore the pros and cons,
as well as an implementation, of making the logic behind /dev/random
available for in-kernel use.

With the heavy update of random.c during the 3.13 development, the
re-seeding of the nonblocking_pool from the input_pool is now prevented
for a duration of random_min_urandom_seed seconds. Furthermore, the
nonblocking_pool can be read from user space such that it acts as a
deterministic RNG due to the non-blocking behavior if more data is pulled
than is delivered by the noise sources. As the nonblocking_pool is used
when an in-kernel user invokes get_random_bytes, the described
deterministic behavior also applies to this function.

Now, get_random_bytes is the only function inside the kernel that can
deliver entropy. For most use cases, the described approach is just fine.
However, when using well-defined deterministic RNGs, like the already
existing ANSI X9.31 or the suggested SP800-90A DRBG, seeding these DRNGs
from the DRNG of the nonblocking_pool is typically not recommended. This
is visible in user space, where /dev/random is preferred when seeding
deterministic RNGs (see libgcrypt as an example).

Therefore, may I propose the implementation of the blocking concept of
/dev/random as a provider of entropy to in-kernel callers like DRNGs? I
would recommend a function

void get_blocking_bytes_nowait(void *buf, int nbytes,
void (*cb)(void *buf, int buflen))

which supplements get_random_bytes and implements the following:

- one work queue that collects data from the blocking_pool (or directly
from the input_pool?) using the same concept as random_read from
drivers/char/random.c

- the cb function is triggered when nbytes of data have been gathered by
the work queue. The cb function is implemented by the caller to obtain
the requested data; the returned data is supplied via buf. The caller
allocates that buffer and supplies a pointer to it in the invocation of
get_blocking_bytes_nowait.

- any invocation of get_blocking_bytes_nowait registers the request in a
list. The work queue processes the list in FIFO order and removes
completed requests until the list is empty again. The list has a set
length. If the list length is exceeded, the cb function is called right
away with buflen set to 0.
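A userspace sketch of the queueing behaviour described above (all names
and the list limit are hypothetical; a synchronous drain loop stands in
for the work queue, and a memset stands in for drawing from the blocking
pool):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAX_REQUESTS 2	/* hypothetical list limit */

struct rng_request {
	void *buf;
	int nbytes;
	void (*cb)(void *buf, int buflen);
};

static struct rng_request pending[MAX_REQUESTS];
static int npending;

/* Sketch of the proposed nowait API: queue the request, or fail
 * immediately with buflen == 0 when the list is full. */
static void get_blocking_bytes_nowait(void *buf, int nbytes,
				      void (*cb)(void *buf, int buflen))
{
	if (npending == MAX_REQUESTS) {
		cb(buf, 0);	/* list full: report failure right away */
		return;
	}
	pending[npending++] = (struct rng_request){ buf, nbytes, cb };
}

/* Stand-in for the work queue: drain the list in FIFO order, "filling"
 * each buffer (a real implementation would block on the input pool). */
static void workqueue_drain(void)
{
	for (int i = 0; i < npending; i++) {
		memset(pending[i].buf, 0xA5, pending[i].nbytes);
		pending[i].cb(pending[i].buf, pending[i].nbytes);
	}
	npending = 0;
}
```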

Ciao
Stephan
-- 
| Cui bono? |


Re: [RFC] /dev/random for in-kernel use

2014-04-27 Thread Theodore Ts'o
On Sun, Apr 27, 2014 at 08:49:48PM +0200, Stephan Mueller wrote:
 With the heavy update of random.c during the 3.13 development, the
 re-seeding of the nonblocking_pool from the input_pool is now prevented
 for a duration of random_min_urandom_seed seconds. Furthermore, the
 nonblocking_pool can be read from user space such that it acts as a
 deterministic RNG due to the non-blocking behavior if more data is
 pulled than is delivered by the noise sources. As the nonblocking_pool
 is used when an in-kernel user invokes get_random_bytes, the described
 deterministic behavior also applies to this function.

This actually wasn't a major change.  If you are drawing sufficiently
heavily from /dev/urandom (and some crypto libraries draw _very_
heavily; the Chrome browser in particular draws from /dev/urandom
quite liberally), then you'll end up drawing from the input pool
faster than it can be filled from noise sources anyway.  So the rate
limiting wasn't making a material difference as far as the quality of
/dev/urandom and get_random_bytes() is concerned.  What it does do is
allow users of /dev/random (such as gpg key generation) to complete in
a reasonable time.

However, given that we're reseeding once a minute (or as needed), it's
actually not a deterministic RNG (per SP 800-90A, section 8.6.5, a
DRBG is forbidden to reseed itself automatically).

 Now, get_random_bytes is the only function inside the kernel that can
 deliver entropy. For most use cases, the described approach is just
 fine. However, when using well-defined deterministic RNGs, like the
 already existing ANSI X9.31 or the suggested SP800-90A DRBG, seeding
 these DRNGs from the DRNG of the nonblocking_pool is typically not
 recommended. This is visible in user space, where /dev/random is
 preferred when seeding deterministic RNGs (see libgcrypt as an
 example).

Well, as far as SP800-90A DRBG is concerned, using either the logical
equivalent of /dev/random or /dev/urandom is not allowed, since
neither is an approved entropy source.  (Then again, given that NIST
approved Dual-EC, I'm not sure how much I'm impressed by a NIST
approval, but whatever.  :-) But if your goal is to get the NIST
certification, which IMHO should only matter if you are selling to the
US government, it can really only use RDRAND on Intel platforms
--- i.e., get_random_bytes_arch().

That being said, if some kernel module really wants to get its hands
on entropy extracted from the blocking pool, I don't see any reason
why we should deny it that functionality.


 Therefore may I propose the implementation of the blocking concept of 
 /dev/random as a provider of entropy to in-kernel callers like DRNGs? I would 
 recommend a function of
 
 void get_blocking_bytes_nowait(void *buf, int nbytes,
   void (*cb)(void *buf, int buflen))

Let me make a counter proposal.  Let's instead provide a blocking
interface:

int get_blocking_random_bytes(void *buf, int nbytes);

and two helper functions:

void get_blocking_random_bytes_work(struct work_struct *work);
void get_blocking_random_bytes_cb(struct workqueue_struct *wq,
				  struct random_work *rw,
				  void *buf, int nbytes,
				  void (*cb)(void *buf, int buflen));

If a caller really needs to use the non-blocking variant, it can do
this:

static struct random_work my_random_work;

...
get_blocking_random_bytes_cb(NULL, &my_random_work, buf, nbytes, cb);
...

There are two benefits of doing it this way.  First of all, we don't
have to have a fixed number of queued random bytes, since it's up to
the caller to allocate the struct random_work.

Secondly, if a module uses this interface and there is an attempt to
unload the module, it can do this to cancel the /dev/random read
request:

cancel_work_sync(&my_random_work.rw_work);


With the original get_blocking_bytes_nowait() proposal, there is no
way to cancel the callback request, so the module would have to do
something much more complicated.  It would have to wire up a struct
completion mechanism, call complete() from the callback function, and
use wait_for_completion() in the module_exit() function.

Furthermore, for a DRBG, if it is going to be seeding itself from its
module_init() function, and then explicitly via an ioctl() request,
then it can use the simpler blocking interface directly.

Cheers,

- Ted

P.S.  The sample implementation of the helper functions:

struct random_work {
	struct work_struct	rw_work;
	void			*rw_buf;
	int			rw_len;
	void			(*rw_cb)(void *buf, int buflen);
};
   
void get_blocking_random_bytes_work(struct work_struct *work)
{
	struct random_work *rw = container_of(work, struct random_work,
					      rw_work);

	get_blocking_random_bytes(rw->rw_buf, rw->rw_len);
	rw->rw_cb(rw->rw_buf, rw->rw_len);
}
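A userspace simulation of the container_of dispatch pattern in the
sample above (the kernel types are replaced by hypothetical stand-ins,
and the memset merely simulates drawing entropy from the blocking pool):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Userspace stand-in for the kernel work item type (hypothetical). */
struct work_struct {
	void (*func)(struct work_struct *work);
};

/* Recover the enclosing structure from a pointer to one of its members. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct random_work {
	struct work_struct	rw_work;
	void			*rw_buf;
	int			rw_len;
	void			(*rw_cb)(void *buf, int buflen);
};

/* Stand-in for get_blocking_random_bytes(); a real implementation
 * would block until the pool has gathered enough entropy. */
static int get_blocking_random_bytes(void *buf, int nbytes)
{
	memset(buf, 0x5A, nbytes);
	return 0;
}

/* The helper from the proposal: recover the request from the embedded
 * work item, fill the buffer, then notify the caller. */
static void get_blocking_random_bytes_work(struct work_struct *work)
{
	struct random_work *rw = container_of(work, struct random_work,
					      rw_work);

	get_blocking_random_bytes(rw->rw_buf, rw->rw_len);
	rw->rw_cb(rw->rw_buf, rw->rw_len);
}
```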

RE: [PATCH] crypto: caam - Fix key inlining in AEAD shared descriptors

2014-04-27 Thread Ruchika Gupta
Reviewed-by: Ruchika Gupta ruchika.gu...@freescale.com

 -Original Message-
 From: Vakul Garg [mailto:va...@freescale.com]
 Sent: Sunday, April 27, 2014 8:56 PM
 To: linux-crypto@vger.kernel.org
 Cc: herb...@gondor.apana.org.au; Geanta Neag Horia Ioan-B05471; Gupta
 Ruchika-R66431; Porosanu Alexandru-B06830
 Subject: [PATCH] crypto: caam - Fix key inlining in AEAD shared descriptors
 
 The variable 'keys_fit_inline' is initialised correctly to avoid using its
 stale value while creating shared descriptor for decryption and given-iv-
 encryption.
 
 Signed-off-by: Vakul Garg va...@freescale.com
 ---
  drivers/crypto/caam/caamalg.c | 15 +--
  1 file changed, 13 insertions(+), 2 deletions(-)
 
 diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
 index 5f89125..99fda94 100644
 --- a/drivers/crypto/caam/caamalg.c
 +++ b/drivers/crypto/caam/caamalg.c
 @@ -209,7 +209,7 @@ static int aead_null_set_sh_desc(struct crypto_aead
 *aead)
   struct aead_tfm *tfm = aead-base.crt_aead;
   struct caam_ctx *ctx = crypto_aead_ctx(aead);
   struct device *jrdev = ctx-jrdev;
 - bool keys_fit_inline = false;
 + bool keys_fit_inline;
   u32 *key_jump_cmd, *jump_cmd, *read_move_cmd, *write_move_cmd;
   u32 *desc;
 
 @@ -220,6 +220,8 @@ static int aead_null_set_sh_desc(struct crypto_aead
 *aead)
   if (DESC_AEAD_NULL_ENC_LEN + DESC_JOB_IO_LEN +
   ctx-split_key_pad_len = CAAM_DESC_BYTES_MAX)
   keys_fit_inline = true;
 + else
 + keys_fit_inline = false;
 
   /* aead_encrypt shared descriptor */
   desc = ctx-sh_desc_enc;
 @@ -306,6 +308,8 @@ static int aead_null_set_sh_desc(struct crypto_aead
 *aead)
   if (DESC_AEAD_NULL_DEC_LEN + DESC_JOB_IO_LEN +
   ctx-split_key_pad_len = CAAM_DESC_BYTES_MAX)
   keys_fit_inline = true;
 + else
 + keys_fit_inline = false;
 
   desc = ctx-sh_desc_dec;
 
 @@ -399,7 +403,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
   struct aead_tfm *tfm = aead-base.crt_aead;
   struct caam_ctx *ctx = crypto_aead_ctx(aead);
   struct device *jrdev = ctx-jrdev;
 - bool keys_fit_inline = false;
 + bool keys_fit_inline;
   u32 geniv, moveiv;
   u32 *desc;
 
 @@ -418,6 +422,9 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
   ctx-split_key_pad_len + ctx-enckeylen =
   CAAM_DESC_BYTES_MAX)
   keys_fit_inline = true;
 + else
 + keys_fit_inline = false;
 +
 
   /* aead_encrypt shared descriptor */
   desc = ctx-sh_desc_enc;
 @@ -476,6 +483,8 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
   ctx-split_key_pad_len + ctx-enckeylen =
   CAAM_DESC_BYTES_MAX)
   keys_fit_inline = true;
 + else
 + keys_fit_inline = false;
 
   /* aead_decrypt shared descriptor */
   desc = ctx-sh_desc_dec;
 @@ -531,6 +540,8 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
   ctx-split_key_pad_len + ctx-enckeylen =
   CAAM_DESC_BYTES_MAX)
   keys_fit_inline = true;
 + else
 + keys_fit_inline = false;
 
   /* aead_givencrypt shared descriptor */
   desc = ctx-sh_desc_givenc;
 --
 1.8.1.4
