Re: [PATCH v3 02/13] random: add get_random_{bytes,u32,u64,int,long,once}_wait family

2017-06-05 Thread Jeffrey Walton
On Mon, Jun 5, 2017 at 8:50 PM, Jason A. Donenfeld  wrote:
> These functions are simple convenience wrappers that call
> wait_for_random_bytes before calling the respective get_random_*
> function.

It may be advantageous to add a timeout, too.

There have been a number of times I did not want to wait an INFINITE
amount of time for a completion. (In another context).

Jeff

> Signed-off-by: Jason A. Donenfeld 
> ---
>  include/linux/net.h|  2 ++
>  include/linux/once.h   |  2 ++
>  include/linux/random.h | 25 +
>  3 files changed, 29 insertions(+)
>
> diff --git a/include/linux/net.h b/include/linux/net.h
> index abcfa46a2bd9..dda2cc939a53 100644
> --- a/include/linux/net.h
> +++ b/include/linux/net.h
> @@ -274,6 +274,8 @@ do {   \
>
>  #define net_get_random_once(buf, nbytes)   \
> get_random_once((buf), (nbytes))
> +#define net_get_random_once_wait(buf, nbytes)  \
> +   get_random_once_wait((buf), (nbytes))
>
>  int kernel_sendmsg(struct socket *sock, struct msghdr *msg, struct kvec *vec,
>size_t num, size_t len);
> diff --git a/include/linux/once.h b/include/linux/once.h
> index 285f12cb40e6..9c98aaa87cbc 100644
> --- a/include/linux/once.h
> +++ b/include/linux/once.h
> @@ -53,5 +53,7 @@ void __do_once_done(bool *done, struct static_key *once_key,
>
>  #define get_random_once(buf, nbytes)\
> DO_ONCE(get_random_bytes, (buf), (nbytes))
> +#define get_random_once_wait(buf, nbytes)  \
> +   DO_ONCE(get_random_bytes_wait, (buf), (nbytes))
>
>  #endif /* _LINUX_ONCE_H */
> diff --git a/include/linux/random.h b/include/linux/random.h
> index e29929347c95..4aecc339558d 100644
> --- a/include/linux/random.h
> +++ b/include/linux/random.h
> @@ -58,6 +58,31 @@ static inline unsigned long get_random_long(void)
>  #endif
>  }
>
> +/* Calls wait_for_random_bytes() and then calls get_random_bytes(buf, nbytes).
> + * Returns the result of the call to wait_for_random_bytes. */
> +static inline int get_random_bytes_wait(void *buf, int nbytes)
> +{
> +   int ret = wait_for_random_bytes();
> +   if (unlikely(ret))
> +   return ret;
> +   get_random_bytes(buf, nbytes);
> +   return 0;
> +}
> +
> +#define declare_get_random_var_wait(var) \
> +   static inline int get_random_ ## var ## _wait(var *out) { \
> +   int ret = wait_for_random_bytes(); \
> +   if (unlikely(ret)) \
> +   return ret; \
> +   *out = get_random_ ## var(); \
> +   return 0; \
> +   }
> +declare_get_random_var_wait(u32)
> +declare_get_random_var_wait(u64)
> +declare_get_random_var_wait(int)
> +declare_get_random_var_wait(long)
> +#undef declare_get_random_var_wait
> +
>  unsigned long randomize_page(unsigned long start, unsigned long range);
>
>  u32 prandom_u32(void);


Re: [kernel-hardening] Re: [PATCH v3 04/13] crypto/rng: ensure that the RNG is ready before using

2017-06-05 Thread Eric Biggers
On Tue, Jun 06, 2017 at 05:56:20AM +0200, Jason A. Donenfeld wrote:
> Hey Ted,
> 
> On Tue, Jun 6, 2017 at 5:00 AM, Theodore Ts'o  wrote:
> > Note that crypto_rng_reset() is called by big_key_init() in
> > security/keys/big_key.c as a late_initcall().  So if we are on a
> > system where the crng doesn't get initialized until during the system
> > boot scripts, and big_key is compiled directly into the kernel, the
> > boot could end up deadlocking.
> >
> There may be other instances where crypto_rng_reset() is called by
> an initcall, so big_key_init() may not be an exhaustive enumeration of
> potential problems.  But this is an example of why the synchronous
> API, although definitely much more convenient, can end up being a trap
> for the unwary.
> 
> Thanks for pointing this out. I'll look more closely into it and see
> if I can figure out a good way of approaching this.

I don't think big_key even needs randomness at init time.  The 'big_key_rng'
could just be removed and big_key_gen_enckey() changed to call
get_random_bytes().  (Or get_random_bytes_wait(), I guess; it's only reachable
via the keyring syscalls.)

It's going to take a while to go through all 217 users of get_random_bytes()
like this, though...  It's really a shame there's no way to guarantee good
randomness at boot time.

Eric


Re: [PATCH v3 04/13] crypto/rng: ensure that the RNG is ready before using

2017-06-05 Thread Jason A. Donenfeld
Hey Ted,

On Tue, Jun 6, 2017 at 5:00 AM, Theodore Ts'o  wrote:
> Note that crypto_rng_reset() is called by big_key_init() in
> security/keys/big_key.c as a late_initcall().  So if we are on a
> system where the crng doesn't get initialized until during the system
> boot scripts, and big_key is compiled directly into the kernel, the
> boot could end up deadlocking.
>
> There may be other instances where crypto_rng_reset() is called by
> an initcall, so big_key_init() may not be an exhaustive enumeration of
> potential problems.  But this is an example of why the synchronous
> API, although definitely much more convenient, can end up being a trap
> for the unwary.

Thanks for pointing this out. I'll look more closely into it and see
if I can figure out a good way of approaching this.

Indeed you're right that we have to be quite careful every time we
use the synchronous API. For this reason, I separated things out into
wait_for_random_bytes and then get_random_bytes_wait, the wrapper
around wait_for_random_bytes+get_random_bytes. The idea here is that
drivers could place a single wait_for_random_bytes at some userspace
entry point -- a configuration ioctl, for example -- and then ensure
that all calls to get_random_bytes are ordered _after_ that
wait_for_random_bytes call. While this pattern doesn't fix all cases
of unseeded get_random_bytes calls -- we'll need some module loading
order cleverness for that, as we discussed in the other thread -- I
think it fixes enough call sites, as seen in this patchset, to make
it worthwhile. Having it would also, I think, encourage other new
drivers to consider when their calls to get_random_bytes happen, and
whether it's possible to defer them until after a userspace-blocking
call to wait_for_random_bytes.
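The entry-point pattern described above can be modeled in userspace roughly as follows (all *_model names and the toy_driver structure are invented for this sketch; they only mimic the shape of the kernel API, where the real wait can block or be interrupted):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* One blocking wait at the driver's userspace-facing entry point,
 * after which plain get_random_bytes() calls are ordered safely. */
static bool crng_seeded_model = false;

static int wait_for_random_bytes_model(void)
{
	crng_seeded_model = true;	/* stand-in for blocking until seeded */
	return 0;
}

/* Model get_random_bytes(): reports whether the bytes were drawn
 * after seeding, so the test can check the ordering property. */
static bool get_random_bytes_model(void *buf, size_t n)
{
	memset(buf, 0x5a, n);		/* pretend randomness */
	return crng_seeded_model;
}

struct toy_driver {
	unsigned char key[16];
	bool key_ok;
};

/* A configuration-ioctl-like entry point: the single place that waits. */
static int toy_driver_configure(struct toy_driver *d)
{
	int ret = wait_for_random_bytes_model();

	if (ret)
		return ret;
	/* Everything below is ordered after the wait, so plain
	 * get_random_bytes() calls are fine from here on. */
	d->key_ok = get_random_bytes_model(d->key, sizeof(d->key));
	return 0;
}
```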

Anyway, I'll look into and fix up the problem you mentioned. Looking
forward to your feedback on the other patches here.

Regards,
Jason


Re: [PATCH v3 04/13] crypto/rng: ensure that the RNG is ready before using

2017-06-05 Thread Theodore Ts'o
On Tue, Jun 06, 2017 at 02:50:59AM +0200, Jason A. Donenfeld wrote:
> Otherwise, we might be seeding the RNG using bad randomness, which is
> dangerous.
> 
> Cc: Herbert Xu 
> Signed-off-by: Jason A. Donenfeld 
> ---
>  crypto/rng.c | 6 --
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/crypto/rng.c b/crypto/rng.c
> index f46dac5288b9..e042437e64b4 100644
> --- a/crypto/rng.c
> +++ b/crypto/rng.c
> @@ -48,12 +48,14 @@ int crypto_rng_reset(struct crypto_rng *tfm, const u8 *seed, unsigned int slen)
>   if (!buf)
>   return -ENOMEM;
>  
> - get_random_bytes(buf, slen);
> + err = get_random_bytes_wait(buf, slen);

Note that crypto_rng_reset() is called by big_key_init() in
security/keys/big_key.c as a late_initcall().  So if we are on a
system where the crng doesn't get initialized until during the system
boot scripts, and big_key is compiled directly into the kernel, the
boot could end up deadlocking.

There may be other instances where crypto_rng_reset() is called by
an initcall, so big_key_init() may not be an exhaustive enumeration of
potential problems.  But this is an example of why the synchronous
API, although definitely much more convenient, can end up being a trap
for the unwary.

- Ted


[PATCH v3 01/13] random: add synchronous API for the urandom pool

2017-06-05 Thread Jason A. Donenfeld
This enables users of get_random_{bytes,u32,u64,int,long} to wait until
the pool is ready before using this function, in case they actually want
to have reliable randomness.

Signed-off-by: Jason A. Donenfeld 
---
 drivers/char/random.c  | 41 +++--
 include/linux/random.h |  1 +
 2 files changed, 32 insertions(+), 10 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 0ab024918907..035a5d7c06bd 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -844,11 +844,6 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
spin_unlock_irqrestore(&primary_crng.lock, flags);
 }
 
-static inline void crng_wait_ready(void)
-{
-   wait_event_interruptible(crng_init_wait, crng_ready());
-}
-
 static void _extract_crng(struct crng_state *crng,
  __u8 out[CHACHA20_BLOCK_SIZE])
 {
@@ -1466,7 +1461,10 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf,
  * number of good random numbers, suitable for key generation, seeding
  * TCP sequence numbers, etc.  It does not rely on the hardware random
  * number generator.  For random bytes direct from the hardware RNG
- * (when available), use get_random_bytes_arch().
+ * (when available), use get_random_bytes_arch(). In order to ensure
+ * that the randomness provided by this function is okay, the function
+ * wait_for_random_bytes() should be called and return 0 at least once
+ * at any point prior.
  */
 void get_random_bytes(void *buf, int nbytes)
 {
@@ -1496,6 +1494,24 @@ void get_random_bytes(void *buf, int nbytes)
 EXPORT_SYMBOL(get_random_bytes);
 
 /*
+ * Wait for the urandom pool to be seeded and thus guaranteed to supply
+ * cryptographically secure random numbers. This applies to: the /dev/urandom
+ * device, the get_random_bytes function, and the get_random_{u32,u64,int,long}
+ * family of functions. Using any of these functions without first calling
+ * this function forfeits the guarantee of security.
+ *
+ * Returns: 0 if the urandom pool has been seeded.
+ *  -ERESTARTSYS if the function was interrupted by a signal.
+ */
+int wait_for_random_bytes(void)
+{
+   if (likely(crng_ready()))
+   return 0;
+   return wait_event_interruptible(crng_init_wait, crng_ready());
+}
+EXPORT_SYMBOL(wait_for_random_bytes);
+
+/*
  * Add a callback function that will be invoked when the nonblocking
  * pool is initialised.
  *
@@ -1849,6 +1865,8 @@ const struct file_operations urandom_fops = {
 SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count,
unsigned int, flags)
 {
+   int ret;
+
if (flags & ~(GRND_NONBLOCK|GRND_RANDOM))
return -EINVAL;
 
@@ -1861,9 +1879,9 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count,
if (!crng_ready()) {
if (flags & GRND_NONBLOCK)
return -EAGAIN;
-   crng_wait_ready();
-   if (signal_pending(current))
-   return -ERESTARTSYS;
+   ret = wait_for_random_bytes();
+   if (unlikely(ret))
+   return ret;
}
return urandom_read(NULL, buf, count, NULL);
 }
@@ -2023,7 +2041,10 @@ struct batched_entropy {
 /*
  * Get a random word for internal kernel use only. The quality of the random
  * number is either as good as RDRAND or as good as /dev/urandom, with the
- * goal of being quite fast and not depleting entropy.
+ * goal of being quite fast and not depleting entropy. In order to ensure
+ * that the randomness provided by this function is okay, the function
+ * wait_for_random_bytes() should be called and return 0 at least once
+ * at any point prior.
  */
 static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
 u64 get_random_u64(void)
diff --git a/include/linux/random.h b/include/linux/random.h
index ed5c3838780d..e29929347c95 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -34,6 +34,7 @@ extern void add_input_randomness(unsigned int type, unsigned int code,
 extern void add_interrupt_randomness(int irq, int irq_flags) __latent_entropy;
 
 extern void get_random_bytes(void *buf, int nbytes);
+extern int wait_for_random_bytes(void);
 extern int add_random_ready_callback(struct random_ready_callback *rdy);
 extern void del_random_ready_callback(struct random_ready_callback *rdy);
 extern void get_random_bytes_arch(void *buf, int nbytes);
-- 
2.13.0



[PATCH v3 02/13] random: add get_random_{bytes,u32,u64,int,long,once}_wait family

2017-06-05 Thread Jason A. Donenfeld
These functions are simple convenience wrappers that call
wait_for_random_bytes before calling the respective get_random_*
function.

Signed-off-by: Jason A. Donenfeld 
---
 include/linux/net.h|  2 ++
 include/linux/once.h   |  2 ++
 include/linux/random.h | 25 +
 3 files changed, 29 insertions(+)

diff --git a/include/linux/net.h b/include/linux/net.h
index abcfa46a2bd9..dda2cc939a53 100644
--- a/include/linux/net.h
+++ b/include/linux/net.h
@@ -274,6 +274,8 @@ do {   \
 
 #define net_get_random_once(buf, nbytes)   \
get_random_once((buf), (nbytes))
+#define net_get_random_once_wait(buf, nbytes)  \
+   get_random_once_wait((buf), (nbytes))
 
 int kernel_sendmsg(struct socket *sock, struct msghdr *msg, struct kvec *vec,
   size_t num, size_t len);
diff --git a/include/linux/once.h b/include/linux/once.h
index 285f12cb40e6..9c98aaa87cbc 100644
--- a/include/linux/once.h
+++ b/include/linux/once.h
@@ -53,5 +53,7 @@ void __do_once_done(bool *done, struct static_key *once_key,
 
 #define get_random_once(buf, nbytes)\
DO_ONCE(get_random_bytes, (buf), (nbytes))
+#define get_random_once_wait(buf, nbytes)  \
+   DO_ONCE(get_random_bytes_wait, (buf), (nbytes))
 
 #endif /* _LINUX_ONCE_H */
diff --git a/include/linux/random.h b/include/linux/random.h
index e29929347c95..4aecc339558d 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -58,6 +58,31 @@ static inline unsigned long get_random_long(void)
 #endif
 }
 
+/* Calls wait_for_random_bytes() and then calls get_random_bytes(buf, nbytes).
+ * Returns the result of the call to wait_for_random_bytes. */
+static inline int get_random_bytes_wait(void *buf, int nbytes)
+{
+   int ret = wait_for_random_bytes();
+   if (unlikely(ret))
+   return ret;
+   get_random_bytes(buf, nbytes);
+   return 0;
+}
+
+#define declare_get_random_var_wait(var) \
+   static inline int get_random_ ## var ## _wait(var *out) { \
+   int ret = wait_for_random_bytes(); \
+   if (unlikely(ret)) \
+   return ret; \
+   *out = get_random_ ## var(); \
+   return 0; \
+   }
+declare_get_random_var_wait(u32)
+declare_get_random_var_wait(u64)
+declare_get_random_var_wait(int)
+declare_get_random_var_wait(long)
+#undef declare_get_random_var_wait
+
 unsigned long randomize_page(unsigned long start, unsigned long range);
 
 u32 prandom_u32(void);
-- 
2.13.0



[PATCH v3 03/13] random: invalidate batched entropy after crng init

2017-06-05 Thread Jason A. Donenfeld
It's possible that get_random_{u32,u64} is used before the crng has
initialized, in which case its output might not be cryptographically
secure. This patch set addresses that problem directly by introducing
the *_wait variety of functions, but even with those there's a subtle
issue: what happens to batched entropy that was generated before
initialization? Prior to this commit, it would stick around, supplying
bad numbers. After this commit, we force the entropy to be
re-extracted after each phase of the crng initializes.

In order to avoid a race condition with the position counter, we
introduce a simple rwlock for this invalidation. Since it's only
needed during this awkward transition period, we stop using it once
things are all set up, so that it doesn't impact performance.
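The lazy invalidation can be illustrated with a minimal single-CPU userspace model (the real code keeps one batch per CPU and guards the transition with the rwlock; both are omitted here for brevity, and all *_model names are invented):

```c
#include <stddef.h>
#include <stdint.h>

#define BATCH_WORDS 8

struct batched_entropy_model {
	uint64_t entropy_u64[BATCH_WORDS];
	unsigned int position;
};

static unsigned int extract_calls;	/* counts batch refills */

/* Stand-in for extract_crng(): refill the batch with fresh words. */
static void extract_crng_model(uint64_t *out)
{
	for (size_t i = 0; i < BATCH_WORDS; i++)
		out[i] = 0x9e3779b97f4a7c15ULL * (extract_calls + i + 1);
	extract_calls++;
}

static uint64_t get_random_u64_model(struct batched_entropy_model *b)
{
	if (b->position % BATCH_WORDS == 0) {
		extract_crng_model(b->entropy_u64);
		b->position = 0;
	}
	return b->entropy_u64[b->position++];
}

/* Resetting the counter forces a re-extract on the next call; that is
 * how stale pre-init words get discarded without eagerly rewriting
 * every per-CPU batch at init time. */
static void invalidate_batched_entropy_model(struct batched_entropy_model *b)
{
	b->position = 0;
}
```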

Signed-off-by: Jason A. Donenfeld 
---
 drivers/char/random.c | 34 ++
 1 file changed, 34 insertions(+)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 035a5d7c06bd..c328e9b11f1f 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -762,6 +762,8 @@ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
 static struct crng_state **crng_node_pool __read_mostly;
 #endif
 
+static void invalidate_batched_entropy(void);
+
 static void crng_initialize(struct crng_state *crng)
 {
int i;
@@ -800,6 +802,7 @@ static int crng_fast_load(const char *cp, size_t len)
}
if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
crng_init = 1;
+   invalidate_batched_entropy();
wake_up_interruptible(&crng_init_wait);
pr_notice("random: fast init done\n");
}
@@ -837,6 +840,7 @@ static void crng_reseed(struct crng_state *crng, struct 
entropy_store *r)
crng->init_time = jiffies;
if (crng == &primary_crng && crng_init < 2) {
crng_init = 2;
+   invalidate_batched_entropy();
process_random_ready_list();
wake_up_interruptible(&crng_init_wait);
pr_notice("random: crng init done\n");
@@ -2037,6 +2041,7 @@ struct batched_entropy {
};
unsigned int position;
 };
+static rwlock_t batched_entropy_reset_lock = __RW_LOCK_UNLOCKED(batched_entropy_reset_lock);
 
 /*
  * Get a random word for internal kernel use only. The quality of the random
@@ -2050,6 +2055,8 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
 u64 get_random_u64(void)
 {
u64 ret;
+   bool use_lock = crng_init < 2;
+   unsigned long flags;
struct batched_entropy *batch;
 
 #if BITS_PER_LONG == 64
@@ -2062,11 +2069,15 @@ u64 get_random_u64(void)
 #endif
 
batch = &get_cpu_var(batched_entropy_u64);
+   if (use_lock)
+   read_lock_irqsave(&batched_entropy_reset_lock, flags);
if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
extract_crng((u8 *)batch->entropy_u64);
batch->position = 0;
}
ret = batch->entropy_u64[batch->position++];
+   if (use_lock)
+   read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
put_cpu_var(batched_entropy_u64);
return ret;
 }
@@ -2076,22 +2087,45 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
 u32 get_random_u32(void)
 {
u32 ret;
+   bool use_lock = crng_init < 2;
+   unsigned long flags;
struct batched_entropy *batch;
 
if (arch_get_random_int(&ret))
return ret;
 
batch = &get_cpu_var(batched_entropy_u32);
+   if (use_lock)
+   read_lock_irqsave(&batched_entropy_reset_lock, flags);
if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
extract_crng((u8 *)batch->entropy_u32);
batch->position = 0;
}
ret = batch->entropy_u32[batch->position++];
+   if (use_lock)
+   read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
put_cpu_var(batched_entropy_u32);
return ret;
 }
 EXPORT_SYMBOL(get_random_u32);
 
+/* It's important to invalidate all potential batched entropy that might
+ * be stored before the crng is initialized, which we can do lazily by
+ * simply resetting the counter to zero so that it's re-extracted on the
+ * next usage. */
+static void invalidate_batched_entropy(void)
+{
+   int cpu;
+   unsigned long flags;
+
+   write_lock_irqsave(&batched_entropy_reset_lock, flags);
+   for_each_possible_cpu(cpu) {
+   per_cpu_ptr(&batched_entropy_u32, cpu)->position = 0;
+   per_cpu_ptr(&batched_entropy_u64, cpu)->position = 0;
+   }
+   write_unlock_irqrestore(&batched_entropy_reset_lock, flags);
+}
+
 /**
  * randomize_page - Generate a random, page aligned address
  * @start: The smallest acceptable address the caller will take.
-- 
2.13.0



[PATCH v3 04/13] crypto/rng: ensure that the RNG is ready before using

2017-06-05 Thread Jason A. Donenfeld
Otherwise, we might be seeding the RNG using bad randomness, which is
dangerous.

Cc: Herbert Xu 
Signed-off-by: Jason A. Donenfeld 
---
 crypto/rng.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/crypto/rng.c b/crypto/rng.c
index f46dac5288b9..e042437e64b4 100644
--- a/crypto/rng.c
+++ b/crypto/rng.c
@@ -48,12 +48,14 @@ int crypto_rng_reset(struct crypto_rng *tfm, const u8 *seed, unsigned int slen)
if (!buf)
return -ENOMEM;
 
-   get_random_bytes(buf, slen);
+   err = get_random_bytes_wait(buf, slen);
+   if (err)
+   goto out;
seed = buf;
}
 
err = crypto_rng_alg(tfm)->seed(tfm, seed, slen);
-
+out:
kzfree(buf);
return err;
 }
-- 
2.13.0



[PATCH v3 05/13] security/keys: ensure RNG is seeded before use

2017-06-05 Thread Jason A. Donenfeld
Otherwise, we might use bad random numbers which, particularly in the
case of IV generation, could be quite bad. It makes sense to use the
synchronous API here, because we're always in process context (as the
code is littered with GFP_KERNEL and the like). However, we can't change
to using a blocking function in key serial allocation, because this will
block booting in some configurations, so here we use the more
appropriate get_random_u32, which will use RDRAND if available.
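The non-blocking serial proposal after this patch can be sketched as follows (propose_key_serial() and get_random_u32_model() are invented names for this illustration; rand() stands in for the kernel's get_random_u32()):

```c
#include <stdint.h>
#include <stdlib.h>

/* Invented stand-in for the kernel's get_random_u32(). */
static uint32_t get_random_u32_model(void)
{
	return ((uint32_t)rand() << 16) ^ (uint32_t)rand();
}

/* Propose serials the way key_alloc_serial() does after this patch:
 * shift out the sign bit so the serial is non-negative, and reject
 * the reserved values 0-2. */
static int32_t propose_key_serial(void)
{
	int32_t serial;

	do {
		serial = (int32_t)(get_random_u32_model() >> 1);
	} while (serial < 3);
	return serial;
}
```

Because the shift happens on the 32-bit value before assignment, the serial can never go negative, which replaces the old get_random_bytes-then-shift sequence without any blocking.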

Signed-off-by: Jason A. Donenfeld 
Cc: David Howells 
Cc: Mimi Zohar 
Cc: David Safford 
---
 security/keys/encrypted-keys/encrypted.c |  8 +---
 security/keys/key.c  | 16 
 2 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/security/keys/encrypted-keys/encrypted.c b/security/keys/encrypted-keys/encrypted.c
index 0010955d7876..d51a28fc5cd5 100644
--- a/security/keys/encrypted-keys/encrypted.c
+++ b/security/keys/encrypted-keys/encrypted.c
@@ -777,10 +777,12 @@ static int encrypted_init(struct encrypted_key_payload *epayload,
 
__ekey_init(epayload, format, master_desc, datalen);
if (!hex_encoded_iv) {
-   get_random_bytes(epayload->iv, ivsize);
+   ret = get_random_bytes_wait(epayload->iv, ivsize);
+   if (unlikely(ret))
+   return ret;
 
-   get_random_bytes(epayload->decrypted_data,
-epayload->decrypted_datalen);
+   ret = get_random_bytes_wait(epayload->decrypted_data,
+   epayload->decrypted_datalen);
} else
ret = encrypted_key_decrypt(epayload, format, hex_encoded_iv);
return ret;
diff --git a/security/keys/key.c b/security/keys/key.c
index 455c04d80bbb..b72078e532f2 100644
--- a/security/keys/key.c
+++ b/security/keys/key.c
@@ -134,17 +134,15 @@ void key_user_put(struct key_user *user)
  * Allocate a serial number for a key.  These are assigned randomly to avoid
  * security issues through covert channel problems.
  */
-static inline void key_alloc_serial(struct key *key)
+static inline int key_alloc_serial(struct key *key)
 {
struct rb_node *parent, **p;
struct key *xkey;
 
-   /* propose a random serial number and look for a hole for it in the
-* serial number tree */
+   /* propose a non-negative random serial number and look for a hole for
+* it in the serial number tree */
do {
-   get_random_bytes(&key->serial, sizeof(key->serial));
-
-   key->serial >>= 1; /* negative numbers are not permitted */
+   key->serial = get_random_u32() >> 1;
} while (key->serial < 3);
 
spin_lock(&key_serial_lock);
@@ -170,7 +168,7 @@ static inline void key_alloc_serial(struct key *key)
rb_insert_color(&key->serial_node, &key_serial_tree);
 
spin_unlock(&key_serial_lock);
-   return;
+   return 0;
 
/* we found a key with the proposed serial number - walk the tree from
 * that point looking for the next unused serial number */
@@ -314,7 +312,9 @@ struct key *key_alloc(struct key_type *type, const char 
*desc,
 
/* publish the key by giving it a serial number */
atomic_inc(&user->nkeys);
-   key_alloc_serial(key);
+   ret = key_alloc_serial(key);
+   if (ret < 0)
+   goto security_error;
 
 error:
return key;
-- 
2.13.0



[PATCH v3 06/13] iscsi: ensure RNG is seeded before use

2017-06-05 Thread Jason A. Donenfeld
It's not safe to use weak random data here, especially for the challenge
response randomness. Since we're always in process context, it's safe to
simply wait until we have enough randomness to carry out the
authentication correctly.

While we're at it, we clean up a small memleak during an error
condition.

Signed-off-by: Jason A. Donenfeld 
Cc: "Nicholas A. Bellinger" 
Cc: Lee Duncan 
Cc: Chris Leech 
---
 drivers/target/iscsi/iscsi_target_auth.c  | 14 +++---
 drivers/target/iscsi/iscsi_target_login.c | 22 ++
 2 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/drivers/target/iscsi/iscsi_target_auth.c b/drivers/target/iscsi/iscsi_target_auth.c
index 903b667f8e01..f9bc8ec6fb6b 100644
--- a/drivers/target/iscsi/iscsi_target_auth.c
+++ b/drivers/target/iscsi/iscsi_target_auth.c
@@ -47,18 +47,21 @@ static void chap_binaryhex_to_asciihex(char *dst, char *src, int src_len)
}
 }
 
-static void chap_gen_challenge(
+static int chap_gen_challenge(
struct iscsi_conn *conn,
int caller,
char *c_str,
unsigned int *c_len)
 {
+   int ret;
unsigned char challenge_asciihex[CHAP_CHALLENGE_LENGTH * 2 + 1];
struct iscsi_chap *chap = conn->auth_protocol;
 
memset(challenge_asciihex, 0, CHAP_CHALLENGE_LENGTH * 2 + 1);
 
-   get_random_bytes(chap->challenge, CHAP_CHALLENGE_LENGTH);
+   ret = get_random_bytes_wait(chap->challenge, CHAP_CHALLENGE_LENGTH);
+   if (unlikely(ret))
+   return ret;
chap_binaryhex_to_asciihex(challenge_asciihex, chap->challenge,
CHAP_CHALLENGE_LENGTH);
/*
@@ -69,6 +72,7 @@ static void chap_gen_challenge(
 
pr_debug("[%s] Sending CHAP_C=0x%s\n\n", (caller) ? "server" : "client",
challenge_asciihex);
+   return 0;
 }
 
 static int chap_check_algorithm(const char *a_str)
@@ -143,6 +147,7 @@ static struct iscsi_chap *chap_server_open(
case CHAP_DIGEST_UNKNOWN:
default:
pr_err("Unsupported CHAP_A value\n");
+   kfree(conn->auth_protocol);
return NULL;
}
 
@@ -156,7 +161,10 @@ static struct iscsi_chap *chap_server_open(
/*
 * Generate Challenge.
 */
-   chap_gen_challenge(conn, 1, aic_str, aic_len);
+   if (chap_gen_challenge(conn, 1, aic_str, aic_len) < 0) {
+   kfree(conn->auth_protocol);
+   return NULL;
+   }
 
return chap;
 }
diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
index 66238477137b..5ef028c11738 100644
--- a/drivers/target/iscsi/iscsi_target_login.c
+++ b/drivers/target/iscsi/iscsi_target_login.c
@@ -245,22 +245,26 @@ int iscsi_check_for_session_reinstatement(struct iscsi_conn *conn)
return 0;
 }
 
-static void iscsi_login_set_conn_values(
+static int iscsi_login_set_conn_values(
struct iscsi_session *sess,
struct iscsi_conn *conn,
__be16 cid)
 {
+   int ret;
conn->sess  = sess;
conn->cid   = be16_to_cpu(cid);
/*
 * Generate a random Status sequence number (statsn) for the new
 * iSCSI connection.
 */
-   get_random_bytes(&conn->stat_sn, sizeof(u32));
+   ret = get_random_bytes_wait(&conn->stat_sn, sizeof(u32));
+   if (unlikely(ret))
+   return ret;
 
mutex_lock(&auth_id_lock);
conn->auth_id   = iscsit_global->auth_id++;
mutex_unlock(&auth_id_lock);
+   return 0;
 }
 
 __printf(2, 3) int iscsi_change_param_sprintf(
@@ -306,7 +310,11 @@ static int iscsi_login_zero_tsih_s1(
return -ENOMEM;
}
 
-   iscsi_login_set_conn_values(sess, conn, pdu->cid);
+   ret = iscsi_login_set_conn_values(sess, conn, pdu->cid);
+   if (unlikely(ret)) {
+   kfree(sess);
+   return ret;
+   }
sess->init_task_tag = pdu->itt;
memcpy(&sess->isid, pdu->isid, 6);
sess->exp_cmd_sn= be32_to_cpu(pdu->cmdsn);
@@ -497,8 +505,7 @@ static int iscsi_login_non_zero_tsih_s1(
 {
struct iscsi_login_req *pdu = (struct iscsi_login_req *)buf;
 
-   iscsi_login_set_conn_values(NULL, conn, pdu->cid);
-   return 0;
+   return iscsi_login_set_conn_values(NULL, conn, pdu->cid);
 }
 
 /*
@@ -554,9 +561,8 @@ static int iscsi_login_non_zero_tsih_s2(
atomic_set(&sess->session_continuation, 1);
spin_unlock_bh(&sess->conn_lock);
 
-   iscsi_login_set_conn_values(sess, conn, pdu->cid);
-
-   if (iscsi_copy_param_list(&conn->param_list,
+   if (iscsi_login_set_conn_values(sess, conn, pdu->cid) < 0 ||
+   iscsi_copy_param_list(&conn->param_list,
conn->tpg->param_list, 0) < 0) {
iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
ISCSI_LOGIN_STATUS_NO_R

[PATCH v3 07/13] ceph: ensure RNG is seeded before using

2017-06-05 Thread Jason A. Donenfeld
Ceph uses the RNG for various nonce generations, and it shouldn't accept
using bad randomness. So, we wait for the RNG to be properly seeded. We
do this by calling wait_for_random_bytes() in a function that is
certainly called in process context, early on, so that all subsequent
calls to get_random_bytes are necessarily acceptable.

Signed-off-by: Jason A. Donenfeld 
Cc: Ilya Dryomov 
Cc: "Yan, Zheng" 
Cc: Sage Weil 
---
 net/ceph/ceph_common.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/net/ceph/ceph_common.c b/net/ceph/ceph_common.c
index 4fd02831beed..26ab58665f77 100644
--- a/net/ceph/ceph_common.c
+++ b/net/ceph/ceph_common.c
@@ -611,7 +611,11 @@ struct ceph_client *ceph_create_client(struct ceph_options *opt, void *private)
 {
struct ceph_client *client;
struct ceph_entity_addr *myaddr = NULL;
-   int err = -ENOMEM;
+   int err;
+
+   err = wait_for_random_bytes();
+   if (err < 0)
+   return ERR_PTR(err);
 
client = kzalloc(sizeof(*client), GFP_KERNEL);
if (client == NULL)
-- 
2.13.0



[PATCH v3 10/13] net/neighbor: use get_random_u32 for 32-bit hash random

2017-06-05 Thread Jason A. Donenfeld
Using get_random_u32 here is faster, more fitting of the use case, and
just as cryptographically secure. It also has the benefit of providing
better randomness at early boot, which is when many of these structures
are assigned.
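The patched helper also keeps a subtle property worth noting: OR-ing in the low bit guarantees the hash multiplier is odd (and therefore nonzero), and an odd 32-bit multiplier is invertible mod 2^32, so the mixing step never collapses the key space. A userspace sketch (neigh_get_hash_rnd_model() and get_random_u32_model() are invented stand-ins, with rand() in place of the kernel RNG):

```c
#include <stdint.h>
#include <stdlib.h>

/* Invented stand-in for the kernel's get_random_u32(). */
static uint32_t get_random_u32_model(void)
{
	return ((uint32_t)rand() << 16) ^ (uint32_t)rand();
}

/* Model of neigh_get_hash_rnd() after this patch: always odd, never 0. */
static void neigh_get_hash_rnd_model(uint32_t *x)
{
	*x = get_random_u32_model() | 1;
}
```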

Signed-off-by: Jason A. Donenfeld 
Cc: David Miller 
---
 net/core/neighbour.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index d274f81fcc2c..9784133b0cdb 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -312,8 +312,7 @@ static struct neighbour *neigh_alloc(struct neigh_table *tbl, struct net_device
 
 static void neigh_get_hash_rnd(u32 *x)
 {
-   get_random_bytes(x, sizeof(*x));
-   *x |= 1;
+   *x = get_random_u32() | 1;
 }
 
 static struct neigh_hash_table *neigh_hash_alloc(unsigned int shift)
-- 
2.13.0



[PATCH v3 08/13] cifs: use get_random_u32 for 32-bit lock random

2017-06-05 Thread Jason A. Donenfeld
Using get_random_u32 here is faster, more fitting of the use case, and
just as cryptographically secure. It also has the benefit of providing
better randomness at early boot, which is sometimes when this is used.

Signed-off-by: Jason A. Donenfeld 
Cc: Steve French 
---
 fs/cifs/cifsfs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
index 9a1667e0e8d6..fe0c8dcc7dc7 100644
--- a/fs/cifs/cifsfs.c
+++ b/fs/cifs/cifsfs.c
@@ -1359,7 +1359,7 @@ init_cifs(void)
spin_lock_init(&cifs_tcp_ses_lock);
spin_lock_init(&GlobalMid_Lock);
 
-   get_random_bytes(&cifs_lock_secret, sizeof(cifs_lock_secret));
+   cifs_lock_secret = get_random_u32();
 
if (cifs_max_pending < 2) {
cifs_max_pending = 2;
-- 
2.13.0



[PATCH v3 12/13] bluetooth/smp: ensure RNG is properly seeded before ECDH use

2017-06-05 Thread Jason A. Donenfeld
This protocol uses lots of complex cryptography that relies on securely
generated random numbers. Thus, it's important that the RNG is actually
seeded before use. Fortunately, it appears we're always operating in
process context (there are many GFP_KERNEL allocations and other
sleeping operations), and so we can simply demand that the RNG is seeded
before we use it.

We take two strategies in this commit. The first is for the library code
that's called from other modules like hci or mgmt: here we just change
the call to get_random_bytes_wait, and return the result of the wait to
the caller, along with the other error codes of those functions like
usual. Then there's the SMP protocol handler itself, which makes
many, many calls to get_random_bytes during different phases. For
this, rather than changing all the calls to get_random_bytes_wait and
propagating the error result, it's actually enough to put a single
call to wait_for_random_bytes() at the beginning of the handler, to
ensure that all the subsequent invocations are safe, without having to
actually change them. Likewise, for the random address changing
function, we'd rather know early in the function whether the RNG
initialization has been interrupted than find out later, so we call
wait_for_random_bytes() at the top, so that the later call to
get_random_bytes() is acceptable.

Signed-off-by: Jason A. Donenfeld 
Cc: Marcel Holtmann 
Cc: Gustavo Padovan 
Cc: Johan Hedberg 
---
 net/bluetooth/hci_request.c |  6 ++
 net/bluetooth/smp.c | 18 ++
 2 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
index b5faff458d8b..4078057c4fd7 100644
--- a/net/bluetooth/hci_request.c
+++ b/net/bluetooth/hci_request.c
@@ -1406,6 +1406,12 @@ int hci_update_random_address(struct hci_request *req, bool require_privacy,
struct hci_dev *hdev = req->hdev;
int err;
 
+   if (require_privacy) {
+   err = wait_for_random_bytes();
+   if (unlikely(err))
+   return err;
+   }
+
/* If privacy is enabled use a resolvable private address. If
 * current RPA has expired or there is something else than
 * the current RPA in use, then generate a new one.
diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
index 14585edc9439..5fef1bc96f42 100644
--- a/net/bluetooth/smp.c
+++ b/net/bluetooth/smp.c
@@ -537,7 +537,9 @@ int smp_generate_rpa(struct hci_dev *hdev, const u8 irk[16], bdaddr_t *rpa)
 
smp = chan->data;
 
-   get_random_bytes(&rpa->b[3], 3);
+   err = get_random_bytes_wait(&rpa->b[3], 3);
+   if (unlikely(err))
+   return err;
 
rpa->b[5] &= 0x3f;  /* Clear two most significant bits */
rpa->b[5] |= 0x40;  /* Set second most significant bit */
@@ -570,7 +572,9 @@ int smp_generate_oob(struct hci_dev *hdev, u8 hash[16], u8 rand[16])
} else {
while (true) {
/* Seed private key with random number */
-   get_random_bytes(smp->local_sk, 32);
+   err = get_random_bytes_wait(smp->local_sk, 32);
+   if (unlikely(err))
+   return err;
 
/* Generate local key pair for Secure Connections */
if (!generate_ecdh_keys(smp->local_pk, smp->local_sk))
@@ -589,7 +593,9 @@ int smp_generate_oob(struct hci_dev *hdev, u8 hash[16], u8 rand[16])
SMP_DBG("OOB Public Key Y: %32phN", smp->local_pk + 32);
SMP_DBG("OOB Private Key:  %32phN", smp->local_sk);
 
-   get_random_bytes(smp->local_rand, 16);
+   err = get_random_bytes_wait(smp->local_rand, 16);
+   if (unlikely(err))
+   return err;
 
err = smp_f4(smp->tfm_cmac, smp->local_pk, smp->local_pk,
 smp->local_rand, 0, hash);
@@ -2831,7 +2837,11 @@ static int smp_sig_channel(struct l2cap_chan *chan, struct sk_buff *skb)
struct hci_conn *hcon = conn->hcon;
struct smp_chan *smp;
__u8 code, reason;
-   int err = 0;
+   int err;
+
+   err = wait_for_random_bytes();
+   if (unlikely(err))
+   return err;
 
if (skb->len < 1)
return -EILSEQ;
-- 
2.13.0



[PATCH v3 13/13] random: warn when kernel uses unseeded randomness

2017-06-05 Thread Jason A. Donenfeld
This enables an important dmesg notification about when drivers have
used the crng without it being seeded first. Previously, these errors
would occur silently, and so there hasn't been a great way of diagnosing
these types of bugs for obscure setups. By adding this as a config
option, we can leave it on by default, so that we learn where these
issues happen in the field, while still allowing some people to turn it
off, if they really know what they're doing and do not want the log
entries.

However, we don't leave it on _completely_ by default. An earlier version
of this patch simply had `default y`. I'd really love that, but it turns
out, this problem with unseeded randomness being used is really quite
present and is going to take a long time to fix. Thus, as a compromise
between log-messages-for-all and nobody-knows, this is `default y`,
except it is also `depends on DEBUG_KERNEL`. This will ensure that the
curious see the messages while others don't have to.

Signed-off-by: Jason A. Donenfeld 
---
 drivers/char/random.c | 15 +--
 lib/Kconfig.debug | 16 
 2 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index c328e9b11f1f..d4698c8bc35f 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -285,7 +285,6 @@
 #define SEC_XFER_SIZE  512
 #define EXTRACT_SIZE   10
 
-#define DEBUG_RANDOM_BOOT 0
 
 #define LONGS(x) (((x) + sizeof(unsigned long) - 1)/sizeof(unsigned long))
 
@@ -1474,7 +1473,7 @@ void get_random_bytes(void *buf, int nbytes)
 {
__u8 tmp[CHACHA20_BLOCK_SIZE];
 
-#if DEBUG_RANDOM_BOOT > 0
+#ifdef CONFIG_WARN_UNSEEDED_RANDOM
if (!crng_ready())
printk(KERN_NOTICE "random: %pF get_random_bytes called "
   "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
@@ -2068,6 +2067,12 @@ u64 get_random_u64(void)
return ret;
 #endif
 
+#ifdef CONFIG_WARN_UNSEEDED_RANDOM
+   if (!crng_ready())
+   printk(KERN_NOTICE "random: %pF get_random_u64 called "
+  "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
+#endif
+
batch = &get_cpu_var(batched_entropy_u64);
if (use_lock)
read_lock_irqsave(&batched_entropy_reset_lock, flags);
@@ -2094,6 +2099,12 @@ u32 get_random_u32(void)
if (arch_get_random_int(&ret))
return ret;
 
+#ifdef CONFIG_WARN_UNSEEDED_RANDOM
+   if (!crng_ready())
+   printk(KERN_NOTICE "random: %pF get_random_u32 called "
+  "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
+#endif
+
batch = &get_cpu_var(batched_entropy_u32);
if (use_lock)
read_lock_irqsave(&batched_entropy_reset_lock, flags);
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index e4587ebe52c7..c4159605bfbf 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1209,6 +1209,22 @@ config STACKTRACE
  It is also used by various kernel debugging features that require
  stack trace generation.
 
+config WARN_UNSEEDED_RANDOM
+   bool "Warn when kernel uses unseeded randomness"
+   default y
+   depends on DEBUG_KERNEL
+   help
+ Some parts of the kernel contain bugs relating to their use of
+ cryptographically secure random numbers before it's actually possible
+ to generate those numbers securely. This setting ensures that these
+ flaws don't go unnoticed, by enabling a message, should this ever
+ occur. This will allow people with obscure setups to know when things
+ are going wrong, so that they might contact developers about fixing
+ it.
+
+ Say Y here, unless you simply do not care about using unseeded
+ randomness and do not want a potential warning message in your logs.
+
 config DEBUG_KOBJECT
bool "kobject debugging"
depends on DEBUG_KERNEL
-- 
2.13.0



[PATCH v3 11/13] net/route: use get_random_int for random counter

2017-06-05 Thread Jason A. Donenfeld
Using get_random_int here is faster, more fitting of the use case, and
just as cryptographically secure. It also has the benefit of providing
better randomness at early boot, which is when many of these structures
are assigned.

Also, semantically, it's not really proper to have been assigning an
atomic_t in this way before, even if in practice it works fine.

Signed-off-by: Jason A. Donenfeld 
Cc: David Miller 
---
 net/ipv4/route.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index 655d9eebe43e..11e001a42094 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -2936,8 +2936,7 @@ static __net_init int rt_genid_init(struct net *net)
 {
atomic_set(&net->ipv4.rt_genid, 0);
atomic_set(&net->fnhe_genid, 0);
-   get_random_bytes(&net->ipv4.dev_addr_genid,
-sizeof(net->ipv4.dev_addr_genid));
+   atomic_set(&net->ipv4.dev_addr_genid, get_random_int());
return 0;
 }
 
-- 
2.13.0



[PATCH v3 09/13] rhashtable: use get_random_u32 for hash_rnd

2017-06-05 Thread Jason A. Donenfeld
This is much faster and just as secure. It also has the added benefit of
probably returning better randomness at early-boot on systems with
architectural RNGs.

Signed-off-by: Jason A. Donenfeld 
Cc: Thomas Graf 
Cc: Herbert Xu 
---
 lib/rhashtable.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index d9e7274a04cd..a1eb7c947f46 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -235,7 +235,7 @@ static struct bucket_table *bucket_table_alloc(struct rhashtable *ht,
 
INIT_LIST_HEAD(&tbl->walkers);
 
-   get_random_bytes(&tbl->hash_rnd, sizeof(tbl->hash_rnd));
+   tbl->hash_rnd = get_random_u32();
 
for (i = 0; i < nbuckets; i++)
INIT_RHT_NULLS_HEAD(tbl->buckets[i], ht, i);
-- 
2.13.0



[PATCH v3 00/13] Unseeded In-Kernel Randomness Fixes

2017-06-05 Thread Jason A. Donenfeld
As discussed in [1], there is a problem with get_random_bytes being
used before the RNG has actually been seeded. The solution for fixing
this appears to be multi-pronged. One of those prongs involves adding
a simple blocking API so that modules that use the RNG in process
context can just sleep (in an interruptible manner) until the RNG is
ready to be used. This winds up being a very useful API that covers
a few use cases, 5 of which are included in this patch set.

[1] http://www.openwall.com/lists/kernel-hardening/2017/06/02/2

Changes v2->v3:
  - Since this issue, in general, is going to take a long time to fully
fix, the patch turning on the warning is now dependent on DEBUG_KERNEL
so that the right people see the messages but the others aren't annoyed.
  - Fixed some inappropriate blocking for functions that load during module
insertion. As discussed in [1], module insertion deferral is a topic for
another patch set.
  - An interesting and essential patch has been added for invalidating the
batched entropy pool after the crng initializes.
  - Some places that need randomness at bootup for just small integers would
be better served by get_random_{u32,u64}, so this series makes those
changes in a few places. It's useful here, since on some architectures
that delivers better early randomness.

Jason A. Donenfeld (13):
  random: add synchronous API for the urandom pool
  random: add get_random_{bytes,u32,u64,int,long,once}_wait family
  random: invalidate batched entropy after crng init
  crypto/rng: ensure that the RNG is ready before using
  security/keys: ensure RNG is seeded before use
  iscsi: ensure RNG is seeded before use
  ceph: ensure RNG is seeded before using
  cifs: use get_random_u32 for 32-bit lock random
  rhashtable: use get_random_u32 for hash_rnd
  net/neighbor: use get_random_u32 for 32-bit hash random
  net/route: use get_random_int for random counter
  bluetooth/smp: ensure RNG is properly seeded before ECDH use
  random: warn when kernel uses unseeded randomness

 crypto/rng.c  |  6 ++-
 drivers/char/random.c | 90 ++-
 drivers/target/iscsi/iscsi_target_auth.c  | 14 +++--
 drivers/target/iscsi/iscsi_target_login.c | 22 +---
 fs/cifs/cifsfs.c  |  2 +-
 include/linux/net.h   |  2 +
 include/linux/once.h  |  2 +
 include/linux/random.h| 26 +
 lib/Kconfig.debug | 16 ++
 lib/rhashtable.c  |  2 +-
 net/bluetooth/hci_request.c   |  6 +++
 net/bluetooth/smp.c   | 18 +--
 net/ceph/ceph_common.c|  6 ++-
 net/core/neighbour.c  |  3 +-
 net/ipv4/route.c  |  3 +-
 security/keys/encrypted-keys/encrypted.c  |  8 +--
 security/keys/key.c   | 16 +++---
 17 files changed, 195 insertions(+), 47 deletions(-)

-- 
2.13.0



Re: [PATCH RFC v2 0/8] get_random_bytes_wait family of APIs

2017-06-05 Thread Jason A. Donenfeld
As this RFC series matures, all the changes are in this branch here, to look at:

https://git.zx2c4.com/linux-dev/log/?h=jd/rng-blocker

Ted -- there's one, in particular, that should probably be picked up
regardless of the rest, and that's "random: invalidate batched entropy
after crng init". Hopefully though, we can develop that in relation
with the rest and make this a proper series.

Anyway, awaiting your feedback on these RFC series, if you'd like to
help me help the Linux RNG.

Jason


Re: [PATCH RFC v2 5/8] security/keys: ensure RNG is seeded before use

2017-06-05 Thread Jason A. Donenfeld
On Mon, Jun 5, 2017 at 5:47 AM, Jason A. Donenfeld  wrote:
> -   get_random_bytes(&key->serial, sizeof(key->serial));
> +   ret = get_random_bytes_wait(&key->serial, sizeof(key->serial));

This actually isn't okay at bootup, but I've got a different change
for this section that will be in the v3.


Re: workqueue list corruption

2017-06-05 Thread Tejun Heo
Hello,

On Sun, Jun 04, 2017 at 12:30:03PM -0700, Cong Wang wrote:
> On Tue, Apr 18, 2017 at 8:08 PM, Samuel Holland  wrote:
> > Representative backtraces follow (the warnings come in sets). I have
> > kernel .configs and extended netconsole output from several occurrences
> > available upon request.
> >
> > WARNING: CPU: 1 PID: 0 at lib/list_debug.c:33 __list_add+0x89/0xb0
> > list_add corruption. prev->next should be next (99f135016a90), but
> > was d34affc03b10. (prev=d34affc03b10).

So, while trying to move a work item from delayed list to the pending
list, the pending list's last item's next pointer is no longer
pointing to the head and looks re-initialized.  Could be a premature
free and reuse.

If this is reproducible, it'd help a lot to update move_linked_works()
to check for list validity directly and print out the work function of
the corrupt work item.  There's no guarantee that the re-user is the
one which did the premature free, but the fact that we're likely seeing
INIT_LIST_HEAD() instead of random corruption is encouraging, so
there's some chance that doing that would point us to the culprit, or
at least pretty close to it.

Thanks.

-- 
tejun


Re: [PATCH] crypto: brcm: fix spelling mistake: "fallbck" -> "fallback"

2017-06-05 Thread Steve Lin
On Sun, Jun 4, 2017 at 2:29 PM, Colin King  wrote:
> From: Colin Ian King 
>
> Trivial fix to spelling mistake in flow_log message
>
> Signed-off-by: Colin Ian King 

Good catch, thanks!
Reviewed-by: Steve Lin 

> ---
>  drivers/crypto/bcm/cipher.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/crypto/bcm/cipher.c b/drivers/crypto/bcm/cipher.c
> index 61393dc70b0b..9cfd36c1bcb6 100644
> --- a/drivers/crypto/bcm/cipher.c
> +++ b/drivers/crypto/bcm/cipher.c
> @@ -2639,7 +2639,7 @@ static int aead_need_fallback(struct aead_request *req)
> (spu->spu_type == SPU_TYPE_SPUM) &&
> (ctx->digestsize != 8) && (ctx->digestsize != 12) &&
> (ctx->digestsize != 16)) {
> -   flow_log("%s() AES CCM needs fallbck for digest size %d\n",
> +   flow_log("%s() AES CCM needs fallback for digest size %d\n",
>  __func__, ctx->digestsize);
> return 1;
> }
> --
> 2.11.0
>