[PATCH -v2] random: mix rdrand with entropy sent in from userspace

2018-07-17 Thread Theodore Ts'o
Fedora has integrated the jitter entropy daemon to work around slow
boot problems, especially on VM's that don't support virtio-rng:

https://bugzilla.redhat.com/show_bug.cgi?id=1572944

It's understandable why they did this, but the Jitter entropy daemon
works fundamentally on the principle: "the CPU microarchitecture is
**so** complicated and we can't figure it out, so it *must* be
random".  Yes, it uses statistical tests to "prove" it is secure, but
AES_ENCRYPT(NSA_KEY, COUNTER++) will also pass statistical tests with
flying colors.

So if RDRAND is available, mix it into entropy submitted from
userspace.  It can't hurt, and if you believe the NSA has backdoored
RDRAND, then they probably have enough details about the Intel
microarchitecture that they can reverse engineer how the Jitter
entropy daemon affects the microarchitecture, and attack its output
stream.  And if RDRAND is in fact an honest DRNG, it will immeasurably
improve on what the Jitter entropy daemon might produce.

This also provides some protection against someone who is able to read
or set the entropy seed file.

Signed-off-by: Theodore Ts'o 
Cc: sta...@vger.kernel.org
Cc: Arnd Bergmann 
---

Changes in v2:
 - Fix silly typo, pointed out by Arnd, in checking the return value of
   arch_get_random_int()
 - Break out of the loop after the first failure reported by
   arch_get_random_int()

 drivers/char/random.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index cd888d4ee605..bd449ad52442 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1895,14 +1895,22 @@ static int
 write_pool(struct entropy_store *r, const char __user *buffer, size_t count)
 {
size_t bytes;
-   __u32 buf[16];
+   __u32 t, buf[16];
const char __user *p = buffer;
 
while (count > 0) {
+   int b, i = 0;
+
bytes = min(count, sizeof(buf));
if (copy_from_user(buf, p, bytes))
return -EFAULT;
 
+   for (b = bytes ; b > 0 ; b -= sizeof(__u32), i++) {
+   if (!arch_get_random_int(&t))
+   break;
+   buf[i] ^= t;
+   }
+
count -= bytes;
p += bytes;
 
-- 
2.18.0.rc0



[PATCH] random: mix rdrand with entropy sent in from userspace

2018-07-14 Thread Theodore Ts'o
Fedora has integrated the jitter entropy daemon to work around slow
boot problems, especially on VM's that don't support virtio-rng:

https://bugzilla.redhat.com/show_bug.cgi?id=1572944

It's understandable why they did this, but the Jitter entropy daemon
works fundamentally on the principle: "the CPU microarchitecture is
**so** complicated and we can't figure it out, so it *must* be
random".  Yes, it uses statistical tests to "prove" it is secure, but
AES_ENCRYPT(NSA_KEY, COUNTER++) will also pass statistical tests with
flying colors.

So if RDRAND is available, mix it into entropy submitted from
userspace.  It can't hurt, and if you believe the NSA has backdoored
RDRAND, then they probably have enough details about the Intel
microarchitecture that they can reverse engineer how the Jitter
entropy daemon affects the microarchitecture, and attack its output
stream.  And if RDRAND is in fact an honest DRNG, it will immeasurably
improve on what the Jitter entropy daemon might produce.

This also provides some protection against someone who is able to read
or set the entropy seed file.

Signed-off-by: Theodore Ts'o 
Cc: sta...@vger.kernel.org
---
 drivers/char/random.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 0706646b018d..283fe390e878 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1896,14 +1896,22 @@ static int
 write_pool(struct entropy_store *r, const char __user *buffer, size_t count)
 {
size_t bytes;
-   __u32 buf[16];
+   __u32 t, buf[16];
const char __user *p = buffer;
 
while (count > 0) {
+   int b, i = 0;
+
bytes = min(count, sizeof(buf));
if (copy_from_user(buf, p, bytes))
return -EFAULT;
 
+   for (b = bytes ; b > 0 ; b -= sizeof(__u32), i++) {
+   if (arch_get_random_int(&t))
+   continue;
+   buf[i] ^= t;
+   }
+
count -= bytes;
p += bytes;
 
-- 
2.18.0.rc0



[PATCH 1/5] random: fix crng_ready() test

2018-04-12 Thread Theodore Ts'o
The crng_init variable has three states:

0: The CRNG is not initialized at all
1: The CRNG has a small amount of entropy, hopefully good enough for
   early-boot, non-cryptographical use cases
2: The CRNG is fully initialized and we are sure it is safe for
   cryptographic use cases.

The crng_ready() function should only return true once we are in the
last state.

Reported-by: Jann Horn <ja...@google.com>
Fixes: e192be9d9a30 ("random: replace non-blocking pool...")
Cc: sta...@kernel.org # 4.8+
Signed-off-by: Theodore Ts'o <ty...@mit.edu>
Reviewed-by: Jann Horn <ja...@google.com>
---
 drivers/char/random.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index e027e7fa1472..c8ec1e70abde 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -427,7 +427,7 @@ struct crng_state primary_crng = {
  * its value (from 0->1->2).
  */
 static int crng_init = 0;
-#define crng_ready() (likely(crng_init > 0))
+#define crng_ready() (likely(crng_init > 1))
 static int crng_init_cnt = 0;
 #define CRNG_INIT_CNT_THRESH (2*CHACHA20_KEY_SIZE)
 static void _extract_crng(struct crng_state *crng,
@@ -794,7 +794,7 @@ static int crng_fast_load(const char *cp, size_t len)
 
if (!spin_trylock_irqsave(&primary_crng.lock, flags))
return 0;
-   if (crng_ready()) {
+   if (crng_init != 0) {
spin_unlock_irqrestore(&primary_crng.lock, flags);
return 0;
}
@@ -856,7 +856,7 @@ static void _extract_crng(struct crng_state *crng,
 {
unsigned long v, flags;
 
-   if (crng_init > 1 &&
+   if (crng_ready() &&
time_after(jiffies, crng->init_time + CRNG_RESEED_INTERVAL))
crng_reseed(crng, crng == _crng ? _pool : NULL);
spin_lock_irqsave(>lock, flags);
@@ -1139,7 +1139,7 @@ void add_interrupt_randomness(int irq, int irq_flags)
fast_mix(fast_pool);
add_interrupt_bench(cycles);
 
-   if (!crng_ready()) {
+   if (unlikely(crng_init == 0)) {
if ((fast_pool->count >= 64) &&
crng_fast_load((char *) fast_pool->pool,
   sizeof(fast_pool->pool))) {
@@ -2212,7 +2212,7 @@ void add_hwgenerator_randomness(const char *buffer, size_t count,
 {
struct entropy_store *poolp = &input_pool;
 
-   if (!crng_ready()) {
+   if (unlikely(crng_init == 0)) {
crng_fast_load(buffer, count);
return;
}
-- 
2.16.1.72.g5be1f00a9a



[PATCH 2/5] random: use a different mixing algorithm for add_device_randomness()

2018-04-12 Thread Theodore Ts'o
add_device_randomness() use of crng_fast_load() was highly
problematic.  Some callers of add_device_randomness() can pass in a
large amount of static information.  This would immediately promote
the crng_init state from 0 to 1, without really doing much to
initialize the primary_crng's internal state with something even
vaguely unpredictable.

Since we don't have the speed constraints of add_interrupt_randomness(),
we can do a better job mixing in what unpredictability a device
driver or architecture maintainer might see fit to give us, and do it
in a way which does not bump the crng_init_cnt variable.

Also, since add_device_randomness() doesn't bump any entropy
accounting in crng_init state 0, mix the device randomness into the
input_pool entropy pool as well.

Reported-by: Jann Horn <ja...@google.com>
Fixes: ee7998c50c26 ("random: do not ignore early device randomness")
Cc: sta...@kernel.org # 4.13+
Signed-off-by: Theodore Ts'o <ty...@mit.edu>
---
 drivers/char/random.c | 55 +++++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 51 insertions(+), 4 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index c8ec1e70abde..2154a5fe4c81 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -787,6 +787,10 @@ static void crng_initialize(struct crng_state *crng)
crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
 }
 
+/*
+ * crng_fast_load() can be called by code in the interrupt service
+ * path.  So we can't afford to dilly-dally.
+ */
 static int crng_fast_load(const char *cp, size_t len)
 {
unsigned long flags;
@@ -813,6 +817,51 @@ static int crng_fast_load(const char *cp, size_t len)
return 1;
 }
 
+/*
+ * crng_slow_load() is called by add_device_randomness, which has two
+ * attributes.  (1) We can't trust the buffer passed to it is
+ * guaranteed to be unpredictable (so it might not have any entropy at
+ * all), and (2) it doesn't have the performance constraints of
+ * crng_fast_load().
+ *
+ * So we do something more comprehensive which is guaranteed to touch
+ * all of the primary_crng's state, and which uses a LFSR with a
+ * period of 255 as part of the mixing algorithm.  Finally, we do
+ * *not* advance crng_init_cnt since the buffer we may get may be something
+ * like a fixed DMI table (for example), which might very well be
+ * unique to the machine, but is otherwise unvarying.
+ */
+static int crng_slow_load(const char *cp, size_t len)
+{
+   unsigned long   flags;
+   static unsigned char    lfsr = 1;
+   unsigned char   tmp;
+   unsigned        i, max = CHACHA20_KEY_SIZE;
+   const char *    src_buf = cp;
+   char *          dest_buf = (char *) &primary_crng.state[4];
+
+   if (!spin_trylock_irqsave(&primary_crng.lock, flags))
+   return 0;
+   if (crng_init != 0) {
+   spin_unlock_irqrestore(&primary_crng.lock, flags);
+   return 0;
+   }
+   if (len > max)
+   max = len;
+
+   for (i = 0; i < max ; i++) {
+   tmp = lfsr;
+   lfsr >>= 1;
+   if (tmp & 1)
+   lfsr ^= 0xE1;
+   tmp = dest_buf[i % CHACHA20_KEY_SIZE];
+   dest_buf[i % CHACHA20_KEY_SIZE] ^= src_buf[i % len] ^ lfsr;
+   lfsr += (tmp << 3) | (tmp >> 5);
+   }
+   spin_unlock_irqrestore(&primary_crng.lock, flags);
+   return 1;
+}
+
 static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
 {
unsigned long   flags;
@@ -981,10 +1030,8 @@ void add_device_randomness(const void *buf, unsigned int size)
unsigned long time = random_get_entropy() ^ jiffies;
unsigned long flags;
 
-   if (!crng_ready()) {
-   crng_fast_load(buf, size);
-   return;
-   }
+   if (!crng_ready())
+   crng_slow_load(buf, size);
 
trace_add_device_randomness(size, _RET_IP_);
spin_lock_irqsave(&input_pool.lock, flags);
-- 
2.16.1.72.g5be1f00a9a



[PATCH 4/5] random: crng_reseed() should lock the crng instance that it is modifying

2018-04-12 Thread Theodore Ts'o
Reported-by: Jann Horn <ja...@google.com>
Fixes: 1e7f583af67b ("random: make /dev/urandom scalable for silly...")
Cc: sta...@kernel.org # 4.8+
Signed-off-by: Theodore Ts'o <ty...@mit.edu>
Reviewed-by: Jann Horn <ja...@google.com>
---
 drivers/char/random.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 681ee0c0de24..6e7fa13b1a89 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -906,7 +906,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
_crng_backtrack_protect(&primary_crng, buf.block,
CHACHA20_KEY_SIZE);
}
-   spin_lock_irqsave(&primary_crng.lock, flags);
+   spin_lock_irqsave(&crng->lock, flags);
for (i = 0; i < 8; i++) {
unsigned long   rv;
if (!arch_get_random_seed_long(&rv) &&
@@ -916,7 +916,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
}
memzero_explicit(&buf, sizeof(buf));
crng->init_time = jiffies;
-   spin_unlock_irqrestore(&primary_crng.lock, flags);
+   spin_unlock_irqrestore(&crng->lock, flags);
if (crng == &primary_crng && crng_init < 2) {
invalidate_batched_entropy();
numa_crng_init();
-- 
2.16.1.72.g5be1f00a9a



[PATCH 3/5] random: set up the NUMA crng instances after the CRNG is fully initialized

2018-04-12 Thread Theodore Ts'o
Until the primary_crng is fully initialized, don't initialize the NUMA
crng nodes.  Otherwise users of /dev/urandom on NUMA systems before
the CRNG is fully initialized can get very bad quality randomness.  Of
course everyone should move to getrandom(2) where this won't be an
issue, but there's a lot of legacy code out there.

Reported-by: Jann Horn <ja...@google.com>
Fixes: 1e7f583af67b ("random: make /dev/urandom scalable for silly...")
Cc: sta...@kernel.org # 4.8+
Signed-off-by: Theodore Ts'o <ty...@mit.edu>
---
 drivers/char/random.c | 46 +++++++++++++++++++++++++++-------------------
 1 file changed, 27 insertions(+), 19 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 2154a5fe4c81..681ee0c0de24 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -787,6 +787,32 @@ static void crng_initialize(struct crng_state *crng)
crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
 }
 
+#ifdef CONFIG_NUMA
+static void numa_crng_init(void)
+{
+   int i;
+   struct crng_state *crng;
+   struct crng_state **pool;
+
+   pool = kcalloc(nr_node_ids, sizeof(*pool), GFP_KERNEL|__GFP_NOFAIL);
+   for_each_online_node(i) {
+   crng = kmalloc_node(sizeof(struct crng_state),
+   GFP_KERNEL | __GFP_NOFAIL, i);
+   spin_lock_init(&crng->lock);
+   crng_initialize(crng);
+   pool[i] = crng;
+   }
+   mb();
+   if (cmpxchg(&crng_node_pool, NULL, pool)) {
+   for_each_node(i)
+   kfree(pool[i]);
+   kfree(pool);
+   }
+}
+#else
+static void numa_crng_init(void) {}
+#endif
+
 /*
  * crng_fast_load() can be called by code in the interrupt service
  * path.  So we can't afford to dilly-dally.
@@ -893,6 +919,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
spin_unlock_irqrestore(&primary_crng.lock, flags);
if (crng == &primary_crng && crng_init < 2) {
invalidate_batched_entropy();
+   numa_crng_init();
crng_init = 2;
process_random_ready_list();
wake_up_interruptible(&crng_init_wait);
@@ -1727,28 +1754,9 @@ static void init_std_data(struct entropy_store *r)
  */
 static int rand_initialize(void)
 {
-#ifdef CONFIG_NUMA
-   int i;
-   struct crng_state *crng;
-   struct crng_state **pool;
-#endif
-
init_std_data(&input_pool);
init_std_data(&blocking_pool);
crng_initialize(&primary_crng);
-
-#ifdef CONFIG_NUMA
-   pool = kcalloc(nr_node_ids, sizeof(*pool), GFP_KERNEL|__GFP_NOFAIL);
-   for_each_online_node(i) {
-   crng = kmalloc_node(sizeof(struct crng_state),
-   GFP_KERNEL | __GFP_NOFAIL, i);
-   spin_lock_init(&crng->lock);
-   crng_initialize(crng);
-   pool[i] = crng;
-   }
-   mb();
-   crng_node_pool = pool;
-#endif
return 0;
 }
 early_initcall(rand_initialize);
-- 
2.16.1.72.g5be1f00a9a



[PATCH 5/5] random: add new ioctl RNDRESEEDCRNG

2018-04-12 Thread Theodore Ts'o
Add a new ioctl which forces the crng to be reseeded.

Signed-off-by: Theodore Ts'o <ty...@mit.edu>
Cc: sta...@kernel.org
---
 drivers/char/random.c   | 13 ++++++++++++-
 include/uapi/linux/random.h |  3 +++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 6e7fa13b1a89..c552431587a7 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -429,6 +429,7 @@ struct crng_state primary_crng = {
 static int crng_init = 0;
 #define crng_ready() (likely(crng_init > 1))
 static int crng_init_cnt = 0;
+static unsigned long crng_global_init_time = 0;
 #define CRNG_INIT_CNT_THRESH (2*CHACHA20_KEY_SIZE)
 static void _extract_crng(struct crng_state *crng,
  __u32 out[CHACHA20_BLOCK_WORDS]);
@@ -933,7 +934,8 @@ static void _extract_crng(struct crng_state *crng,
unsigned long v, flags;
 
if (crng_ready() &&
-   time_after(jiffies, crng->init_time + CRNG_RESEED_INTERVAL))
+   (time_after(crng_global_init_time, crng->init_time) ||
+time_after(jiffies, crng->init_time + CRNG_RESEED_INTERVAL)))
crng_reseed(crng, crng == &primary_crng ? &input_pool : NULL);
spin_lock_irqsave(&crng->lock, flags);
if (arch_get_random_long(&v))
@@ -1757,6 +1759,7 @@ static int rand_initialize(void)
init_std_data(&input_pool);
init_std_data(&blocking_pool);
crng_initialize(&primary_crng);
+   crng_global_init_time = jiffies;
return 0;
 }
 early_initcall(rand_initialize);
@@ -1930,6 +1933,14 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
input_pool.entropy_count = 0;
blocking_pool.entropy_count = 0;
return 0;
+   case RNDRESEEDCRNG:
+   if (!capable(CAP_SYS_ADMIN))
+   return -EPERM;
+   if (crng_init < 2)
+   return -ENODATA;
+   crng_reseed(&primary_crng, NULL);
+   crng_global_init_time = jiffies - 1;
+   return 0;
default:
return -EINVAL;
}
diff --git a/include/uapi/linux/random.h b/include/uapi/linux/random.h
index c34f4490d025..26ee91300e3e 100644
--- a/include/uapi/linux/random.h
+++ b/include/uapi/linux/random.h
@@ -35,6 +35,9 @@
 /* Clear the entropy pool and associated counters.  (Superuser only.) */
 #define RNDCLEARPOOL   _IO( 'R', 0x06 )
 
+/* Reseed CRNG.  (Superuser only.) */
+#define RNDRESEEDCRNG  _IO( 'R', 0x07 )
+
 struct rand_pool_info {
int entropy_count;
int buf_size;
-- 
2.16.1.72.g5be1f00a9a



Re: random.c: LFSR polynomials are not irreducible/primitive

2017-08-15 Thread Theodore Ts'o
On Tue, Aug 15, 2017 at 10:45:17AM +0200, Stephan Mueller wrote:
> Am Dienstag, 15. August 2017, 00:21:05 CEST schrieb Theodore Ts'o:
> 
> Hi Theodore,
> 
> > Have you looked at section 3.1.1 of the above cited paper?
> > 
> > http://eprint.iacr.org/2012/251.pdf
> 
> Thanks for the hint, but that does not seem to solve the mystery either.
> 
> When I use magma with GF(2^32), I see that all polynomials are neither 
> primitive nor irreducible:

I believe the assertion being made in that section is not that the
modified P(X) is primitive, but that Q(X) is primitive:

Q(X) = α**3 (P(X) − 1) + 1

Where multiplication by α**3 is done by a twist-table lookup.

Also of interest might be this paper, which I believe was totally missed
when the authors made their proposal on the linux-crypto list in
September 2016 (I've added them to the cc list):

https://eprint.iacr.org/2017/726.pdf

The date on the paper is from just 3 weeks ago or so, and it was just
luck that I found it when Googling to find some other references in
response to your question.  (Thanks for raising the question, BTW).

I don't have a huge amount invested in any of the mixing schemes,
because in practice we are *not* feeding large numbers of zero inputs
into the mixing function.  So while it is good to make the mixing function
to have as large a cyclic length as possible, it seems unlikely that
the weaknesses of the current polynomials can be leveraged into a
practical attack.

Stephan, if you have any comments on the proposal made by David
Fontaine and Olivier Vivolo, I'd appreciate hearing them!

- Ted



Re: random.c: LFSR polynomials are not irreducible/primitive

2017-08-14 Thread Theodore Ts'o
On Mon, Aug 14, 2017 at 10:20:18AM +0200, Stephan Mueller wrote:
> Hi Ted,
> 
> drivers/char/random.c contains the following comment:
> 
> """
>  * Our mixing functions were analyzed by Lacharme, Roeck, Strubel, and
>  * Videau in their paper, "The Linux Pseudorandom Number Generator
>  * Revisited" (see: http://eprint.iacr.org/2012/251.pdf).  In their
>  * paper, they point out that we are not using a true Twisted GFSR,
>  * since Matsumoto & Kurita used a trinomial feedback polynomial (that
>  * is, with only three taps, instead of the six that we are using).
>  * As a result, the resulting polynomial is neither primitive nor
>  * irreducible, and hence does not have a maximal period over
>  * GF(2**32).  They suggest a slight change to the generator
>  * polynomial which improves the resulting TGFSR polynomial to be
>  * irreducible, which we have made here.
> """
> 
> This comment leads me to belief that the current polynomial is primitive (and 
> irreducible).
> 
> Strangely, this is not the case as seen with the following code that can be 
> used with the mathematical tool called magma. There is a free online version 
> of magma available to recheck it: http://magma.maths.usyd.edu.au/calc/
> 
> Note, the polynomials used up till 3.12 were primitive and irreducible.
> 
> Could you please help me understanding why the current polynomials are better 
> than the old ones?

Have you looked at section 3.1.1 of the above cited paper?

http://eprint.iacr.org/2012/251.pdf

- Ted


Re: [RFC PATCH v12 3/4] Linux Random Number Generator

2017-07-23 Thread Theodore Ts'o
On Sun, Jul 23, 2017 at 02:05:38PM -0400, Sandy Harris wrote:
> Sandy Harris  wrote:
> 
> > The biggest problem with random(4) is that you cannot generate good
> > output without a good seed & just after boot, ...
> >
> > The only really good solution I know of is to find a way to provide a
> > chunk of randomness early in the boot process. John Denker has a good
> > discussion of doing this by modifying the kernel image & Ted talks of
> > doing it via the boot loader. ...
> 
> Would it be enough to have a kernel module that does more-or-less what
> the current shell scripts do, but earlier in the boot process? Throw
> the stored data into the random(4) driver at module init time & update
> it periodically later. This would not help much for first boot on a
> new system, unless its store could be updated during install; Denker's
> point that you need each system provisioned differently is important.
> However it looks like it would be enough on other boots.

There are two things that make this hard.  The first is that the
programming environment for, say, the GRUB bootloader is very different
from that
of, say, a System V init script.  The second is that it's not obvious
where you can safely store the information across boots.

For a specific architecture --- say, a PC desktop/server using a MBR
or GPT partition table, it's not that hard.  At which point, if you
want to assume that you are using a distribution kernel which is using
an initial ramdisk, you might not even need a kernel module; you could
just do it in the initrd before systemd gets a chance to run, and then
you can do it as a straight-forward shell script.  So that's actually
not hard, and not that different from what we have today.

It won't solve the problem for, say a Tails system, where you could be
booting off of a DVD (in which case you might not have any place to
store state), or a USB or SD card.  That's *also* not hard, but you
end up having to have a different solution for that case.

> It also looks like it might be easier to implement & test. In
> particular it is an isolated do-one-thing-well tool; the programmer
> only needs to worry about his or her module, not several different
> boot loaders or the procedures that distros have for CD images or
> manufacturers for device setup.

Unfortunately, you *do* still have to worry about multiple partition
tables, and potential places where the entropy could be stored.  The
solution for MBR, GPT, and Tails will all have to be slightly
different.

Again, none of this is terribly tricky work.  There are lots of fiddly
details, and if you get it right for one configuration, you will still
have to do something else for the next configuration.  So it's a lot
of grunt work, and since you can't come up for one solution that will
solve it for all systems, it won't be terribly glorious.

Note that you can also solve this problem by diddling the systemd unit
files, if you can make sure that you can fix the dependencies such
that the entropy setup is the very first thing that happens after the
root file system is mounted read/write, and before anything that might try to
generate keys is allowed to run.  That's going to be roughly the same
level of coverage as a kernel module and doesn't require trying to
store it in some partition-table unique place, but instead you have to
learn a lot of about systemd unit file configuration.

Cheers,

- Ted



Re: Poor RNG performance on Ryzen

2017-07-22 Thread Theodore Ts'o
On Fri, Jul 21, 2017 at 04:55:12PM +0200, Oliver Mangold wrote:
> On 21.07.2017 16:47, Theodore Ts'o wrote:
> > On Fri, Jul 21, 2017 at 01:39:13PM +0200, Oliver Mangold wrote:
> > > Better, but obviously there is still much room for improvement by reducing
> > > the number of calls to RDRAND.
> > Hmm, is there some way we can easily tell we are running on Ryzen?  Or
> > do we believe this is going to be true for all AMD devices?
> I would like to note that my first measurement on Broadwell suggest that the
> current frequency of RDRAND calls seems to slow things down on Intel also
> (but not as much as on Ryzen).

On my T470 laptop (with an Intel mobile core i7 processor), using your
benchmark, I am getting 136 MB/s, versus your 75 MB/s.  But so what?

More realistically, if we are generating 256 bit keys (so we're
reading from /dev/urandom 32 bytes at a time), it takes 2.24
microseconds per key generation.  What do you get when you run:

dd if=/dev/urandom of=/dev/zero bs=256 count=100

Even if on Ryzen it's slower by a factor of two, 5 microseconds per
key generation is pretty fast!  The time to do the Diffie-Hellman
exchange and the RSA operations will probably completely swamp the
time to generate the session key.

And if you think 2.24 or 5 microseconds is too slow for the IV
generation --- then use a userspace ChaCha20 CRNG for that purpose.

I'm not really sure I see a real-life operational problem here.

- Ted


Re: Poor RNG performance on Ryzen

2017-07-21 Thread Theodore Ts'o
On Fri, Jul 21, 2017 at 01:39:13PM +0200, Oliver Mangold wrote:
> Better, but obviously there is still much room for improvement by reducing
> the number of calls to RDRAND.

Hmm, is there some way we can easily tell we are running on Ryzen?  Or
do we believe this is going to be true for all AMD devices?

I guess we could add some kind of "has_crappy_arch_get_random()" call
which could be defined by arch/x86, and change how aggressively we use
arch_get_random_*() depending on whether has_crappy_arch_get_random()
returns true or not

- Ted


Re: [RFC PATCH v12 3/4] Linux Random Number Generator

2017-07-20 Thread Theodore Ts'o
On Thu, Jul 20, 2017 at 09:00:02PM +0200, Stephan Müller wrote:
> I concur with your rationale where de-facto the correlation effect is 
> diminished and eliminated with the fast_pool and the minimal entropy 
> estimation of interrupts.
> 
> But it does not address my concern. Maybe I was not clear, please allow me to 
> explain it again.
> 
> We have lots of entropy in the system which is discarded by the 
> aforementioned 
> approach (if a high-res timer is present -- without it all bets are off 
> anyway 
> and this should be covered in a separate discussion). At boot time, this 
> issue 
> is fixed by injecting 256 interrupts in the CRNG and consider it seeded.
> 
> But at runtime, where we still need entropy to reseed the CRNG and to supply /
> dev/random. The accounting of entropy at runtime is much too conservative...

Practically no one uses /dev/random.  It's essentially a deprecated
interface; the primary interfaces that have been recommended for well
over a decade is /dev/urandom, and now, getrandom(2).  We only need
384 bits of randomness every 5 minutes to reseed the CRNG, and that's
plenty even given the very conservative entropy estimation currently
being used.

This was deliberate.  I care a lot more that we get the initial
boot-time CRNG initialization right on ARM32 and MIPS embedded
devices, far, far, more than I care about making plenty of
information-theoretic entropy available at /dev/random on an x86
system.  Further, I haven't seen an argument for the use case where
this would be valuable.

If you don't think they count because ARM32 and MIPS don't have a
high-res timer, then you have very different priorities than I do.  I
will point out that numerically there are huge number of these devices
--- and very, very few users of /dev/random.

> You mentioned that you are super conservative for interrupts due to timer 
> interrupts. In all measurements on the different systems I conducted, I have 
> not seen that the timer triggers an interrupt picked up by 
> add_interrupt_randomness.

Um, the timer is the largest number of interrupts on my system.  Compare:

            CPU0       CPU1       CPU2       CPU3
 LOC:    6396552    6038865    6558646    6057102   Local timer interrupts

with the number of disk related interrupts:

 120:      21492     139284      40513    1705886   PCI-MSI 376832-edge  
ahci[0000:00:17.0]

... and add_interrupt_randomness() gets called for **every**
interrupt.  On an mostly idle machine (I was in meetings most of
today) it's not surprising that time interrupts dominate.  That
doesn't matter for me as much because I don't really care about
/dev/random performance.  What's is **far** more important is that the
entropy estimations behave correctly, across all of Linux's
architectures, while the kernel is going through startup, before CRNG
is declared initialized.

> As we have no formal model about entropy to begin with, we can only assume 
> and 
> hope we underestimate entropy with the entropy heuristic.

Yes, and that's why I use an ultra-conservative estimate.  If we start
using a more aggressive hueristic, we open ourselves up to potentially
very severe security bugs --- and for what?  What's the cost benefit
ratio here which makes this a worthwhile thing to risk?

> Finally, I still think it is helpful to allow (not mandate) to involve the 
> kernel crypto API for the DRNG maintenance (i.e. the supplier for /dev/random 
> and /dev/urandom). The reason is that now more and more DRNG implementations 
> in hardware pop up. Why not allowing them to be used. I.e. random.c would 
> only 
> contain the logic to manage entropy but uses the DRNG requested by a user.

We *do* allow them to be used.  And we support a large number of
hardware random number generators already.  See drivers/char/hw_random.

BTW, I theorize that this is why the companies that could do the
bootloader random seed work haven't bothered.  Most of their products
have a TPM or equivalent, and with modern kernel the hw_random
interface now has a kernel thread that will automatically fill the
/dev/random entropy pool from the hw_random device.  So this all works
already, today, without needing a userspace rngd (which used to be
required).

> In addition allowing a replacement of the DRNG component (at compile time at 
> least) may get us away from having a separate DRNG solution in the kernel 
> crypto API. Some users want their chosen or a standardized DRNG to deliver 
> random numbers. Thus, we have several DRNGs in the kernel crypto API which 
> are 
> seeded by get_random_bytes. Or in user space, many folks need their own DRNG 
> in user space in addition to the kernel. IMHO this is all a waste. If we 
> could 
> use the user-requested DRNG when producing random numbers for 
> get_random_bytes 
> or /dev/urandom or getrandom.

To be honest, I've never understood why that's there in the crypto API
at all.  But adding more ways to switch out the DRNG for /dev/random
doesn't solve that 

Re: [RFC PATCH v12 3/4] Linux Random Number Generator

2017-07-19 Thread Theodore Ts'o
On Wed, Jul 19, 2017 at 08:22:18AM +0200, Stephan Müller wrote:
> In the email [1] I have expressed the core concerns I see -- none of them 
> address the need to keep the Jitter RNG as one noise source. To address 
> those, 
> a very deep dive into random.c needs to be made.

That's simply not true.  The other issues besides the Jitter RNG are
really nits.

One of your complaints is the fact that we collect both interrupt
timing and HID/block timings.  First of all, we could eliminate the
HID/block timings since in practice we get the vast majority of our
entropy from the interrupt timing.  I keep it because we have a real
theoretical basis for there being unpredictability from the HID/block
timings.  For example, [1].

[1] http://world.std.com/~dtd/random/forward.pdf

The fact that there might be double count from the perspective of
entropy is not really an issue because we do a "fast mix" of 64
interrupts before we mix into the primary interrupt pool.  And when we
do mix into the primary pool we count that only as a single bit of
entropy.  The reason why I'm being super cautious here is because some
of these interrupts might be timer interrupts, or come from other
sources that might be correlated to the clock interrupt.  The
conservative assumption here is that at least one of the interrupts
out of 64, on average, will come from something that the adversary can
not anticipate, such as coming from a NIC or wireless device, and that
we will get at least one bit's worth of unpredictability.

The fact that we also mix in the jiffies plus the keyboard/mouse scan
code is something that happens immediately.  So even if you think we
should not count the fast-mix interrupt batching, the fact that we mix
the timing values from 64 interrupts before we credit the entropy
counter by a single bit is sufficiently conservative; we're talking
about 1/64th of a bit here.
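The accounting described above --- mix 64 interrupt timings, credit a
single bit --- can be sketched as a toy model.  (Python, with a hash
standing in for the kernel's actual fast_mix and pool code; all names
and constants here are illustrative, not the real implementation.)

```python
import hashlib

class ToyPool:
    """Toy model of entropy crediting: 64 fast-mixed interrupts -> 1 bit."""
    FAST_MIX_BATCH = 64  # interrupts folded together before any credit

    def __init__(self):
        self.state = b"\x00" * 32
        self.entropy_bits = 0
        self._batch = []

    def add_interrupt_timing(self, cycles: int) -> None:
        self._batch.append(cycles)
        if len(self._batch) == self.FAST_MIX_BATCH:
            # Mix the whole batch of timings into the pool state...
            data = b"".join(c.to_bytes(8, "little") for c in self._batch)
            self.state = hashlib.sha256(self.state + data).digest()
            # ...but credit only a single bit of entropy for all 64 events:
            # the conservative assumption that at least one interrupt out
            # of 64 came from something the adversary cannot anticipate.
            self.entropy_bits += 1
            self._batch = []

pool = ToyPool()
for t in range(640):            # 640 simulated interrupt timestamps
    pool.add_interrupt_timing(t * 12345)
print(pool.entropy_bits)        # 640 / 64 = 10 bits credited
```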

But if you **really** think mixing in the timing of the HID event
(gathered via a different mechanism --- jiffies vs cycle counter, and
including the keyboard scan code) is a problem, a patch to disable
add_keyboard_randomness() is pretty trivial.  It doesn't justify a
complete rewrite of the random core.

(BTW, granted this is anecdata, but on my laptop, the CRNG is fully
initialized before systemd has even started and before the root file
system is mounted.  And after that point the entropy initialization
only matters for the legacy apps that use /dev/random, which doesn't
even exist in your proposed RNG, since everything just uses a
ChaCha20-based CRNG.)


Another one of your complaints is a straw-man argument ("I understand
that this pathological case is not present for the legacy
/dev/random...").  First of all, how we do entropy estimation after
the CRNG boot is far less important, because the primary recommended
interface is /dev/urandom or better yet getrandom(2).  Secondly, we
*don't* allow the transfer of small quanta of entropy.  There is a
minimum transfer limit of 64 bits, and that can easily be increased to
128 bits if one really cared.  I've never really considered recovery
from state compromise to be that important, but if one did care,
increasing that limit is a two line patch.


I could go on, but the bottom line is that, quite frankly, I don't
consider your criticisms to be particularly compelling or convincing.

Regards,

- Ted


Re: [RFC PATCH v12 3/4] Linux Random Number Generator

2017-07-18 Thread Theodore Ts'o
On Tue, Jul 18, 2017 at 09:00:10PM -0400, Sandy Harris wrote:
> The only really good solution I know of is to find a way to provide a
> chunk of randomness early in the boot process. John Denker has a good
> discussion of doing this by modifying the kernel image & Ted talks of
> doing it via the boot loader. Neither looks remarkably easy. Other
> approaches like making the kernel read a seed file or passing a
> parameter on the kernel command line have been suggested but, if I
> recall right, rejected.

It's actually not that _hard_ to modify the boot loader.  It's not
finicky work like, say, adding support for metadata checksums or xattr
deduplication to ext4.  It's actually mostly plumbing.  It's just that
we haven't found a lot of people willing to do it as paid work, and
the hobbyists haven't been interested.

> As I see it, the questions about Jitter, or any other in-kernel
> generator based on timing, are whether it is good enough to be useful
> until we have one of the above solutions or useful as a
> defense-in-depth trick after we have one. I'd say yes to both.
> 
> There's been a lot of analysis. Stephan has a detailed rationale & a
> lot of test data in his papers & the Havege papers also discuss
> getting entropy from timer operations. I'd say the best paper is
> McGuire et al:
> https://static.lwn.net/images/conf/rtlws11/random-hardware.pdf

So here's the problem that I have with most of these analyses.  Most
of them are done using the x86 as the CPU.  This is true of the
McGuire, Okech, and Schiesser paper you've cited above.  But things
are largely irrelevant on the x86, because we have RDRAND.  And while
I like to mix in environmental noise before generating personal
long-term public keys, I'm actually mostly OK with relying on RDRAND
for initializing the seeds for hash tables to protect against network
denial of service attacks.  (Which is currently the first user of the
not-yet-initialized CRNG on my laptop during kernel boot.)

The real problem is with the non-x86 systems that don't have a
hardware RNG, and there, depending on timing events which don't rely
on external devices is much more dodgy.  Remember that on most embedded
devices there is only a single oscillator driving the entire system.
It's not like you even have multiple crystal oscillators beating
against one another.

So if you are only depending on CPU timing loops, you basically have a
very complex state machine, driven by a single oscillator, and you're
trying to kid yourself that you're getting entropy out the other end.
How is that any different from using AES in counter mode and claiming
because you don't know the seed, that it's "true randomness"?  It
certainly passes all of the statistical tests!
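The point is easy to demonstrate: a fully deterministic counter-mode
generator sails through a simple monobit frequency test despite having
zero entropy once the key is known.  (A sketch in Python, using SHA-256
in counter mode as a stdlib-only stand-in for the AES-in-counter-mode
example above; the key name and test threshold are illustrative.)

```python
import hashlib

def ctr_drbg(key: bytes, nbytes: int) -> bytes:
    """Deterministic 'random' stream: HASH(key || counter) in counter mode."""
    out = bytearray()
    ctr = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + ctr.to_bytes(8, "little")).digest()
        ctr += 1
    return bytes(out[:nbytes])

# Zero real entropy: the key is a fixed constant, so anyone who knows
# it can reproduce the entire stream.
stream = ctr_drbg(b"NSA_KEY", 10_000)

# Monobit frequency test: the fraction of 1-bits should be close to 0.5.
ones = sum(bin(b).count("1") for b in stream)
frac = ones / (8 * len(stream))
print("fraction of 1-bits:", round(frac, 4))
```

The stream "passes" the test, yet an adversary holding the key can
predict every bit --- which is exactly the objection to treating
statistical test results as evidence of true randomness.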

Hence, we have to rely on external events outside of the CPU and so we
need to depend on interrupt timing --- and that's what we do in
drivers/char/random.c already!  You can debate whether we are being
too conservative in when we judge that we've collected enough
unpredictability to count it as a "bit" of randomness.  So it's
trivially easy to turn the knob and make sure the CRNG gets
initialized more quickly using fewer interrupt timings, and boom!
Problem solved.

Simply turning the knob to make our entropy estimator more lax makes
people uncomfortable, and since they don't have access to the internal
microarchitecture of the CPU, they take comfort in the fact that it's
really, really complicated, and so something like the Jitter RNG
*must* be a more secure way to do things.  But that's really an illusion.

If the real unpredictability is really coming from the interrupts
changing the state of the CPU microarchitecture, the real question is
how many interrupts do you need before you consider things
"unpredictable" to an adequate level of security?  Arguing that we
should turn down the "interrupts per bit of entropy" in
drivers/char/random.c is a much more honest way of having that
discussion.

- Ted

P.S.  In the McGuire paper you cited, it assumes that the system is
fully booted and that there are multiple processes running which are
influencing the kernel scheduler.  This makes the paper **not**
applicable at all.  So if you think that is the most compelling
analysis, I'm definitely not impressed.


Re: [RFC PATCH v12 3/4] Linux Random Number Generator

2017-07-18 Thread Theodore Ts'o
On Tue, Jul 18, 2017 at 04:37:11PM +0200, Stephan Müller wrote:
> > 
> > > I have stated the core concerns I have with random.c in [1]. To remedy
> > > these core concerns, major changes to random.c are needed. With the past
> > > experience, I would doubt that I get the changes into random.c.
> > > 
> > > [1] https://www.spinics.net/lists/linux-crypto/msg26316.html
> > 
> > Evolution is the correct way to do this, kernel development relies on
> > that.  We don't do the "use this totally different and untested file
> > instead!" method.
> 
> I am not sure I understand your reply. The offered patch set does not rip out 
> existing code. It adds a replacement implementation which can be enabled 
> during compile time. Yet it is even disabled per default (and thus the legacy 
> code is compiled).

I've been trying to take the best features and suggestions from your
proposal and integrating them into /dev/random already.  Things that
I've chosen not to take are basically the ones where I disbelieve that
the Jitter RNG is valid.  And that's mostly because I trust Peter
Anvin (who has access to Intel chip architects, and who has expressed
unease) more than you.  (No hard feelings.)

So I have been trying to do the evolution thing already.  

> I see such a development approach in numerous different kernel core areas: 
> memory allocators (SLAB, SLOB, SLUB), process schedulers, IRQ schedulers.

But we don't have two VFS layers or two MM layers.  We also don't have
two implementations of printk.

I'm obviously biased, but I don't see the raison d'être for merging
the LRNG into the kernel.

- Ted


Re: Antw: Re: [kernel-hardening] Re: [PATCH v4 06/13] iscsi: ensure RNG is seeded before use

2017-07-05 Thread Theodore Ts'o
On Wed, Jul 05, 2017 at 09:16:09AM -0400, Paul Koning wrote:
> 
> In the implementations I know, /dev/random and /dev/urandom are the
> same driver, the only difference is that when you read from
> /dev/random there's a check for the current entropy level.

It's in the same driver but /dev/random and /dev/urandom are different
beasts.  Pre-4.9 we use the same SHA-1 based generator, but we use
different state pools, and we periodically "catastrophically reseed"
(ala Yarrow) from the random pool to the urandom pool.  Getrandom(2)
uses the urandom pool.

In 4.9 and later kernels, /dev/urandom and getrandom(2) use a ChaCha20
based generator which provides for more speed.  We still use the SHA-1
pool for the random pool because it allows for a much larger "state
pool" to store entropy.
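The Yarrow-style "catastrophic reseed" mentioned above --- transfer a
large block of entropy from the input pool to the output generator all
at once, rather than trickling it out --- can be sketched like this.
(Python; the pool sizes, threshold, and class/method names are
illustrative, not the kernel's actual values or code.)

```python
import hashlib

class TwoPoolRng:
    """Toy two-pool design: a large input pool accumulates noise, and the
    output generator is reseeded only in large 'catastrophic' chunks."""
    RESEED_THRESHOLD_BITS = 128   # never move entropy in dribs and drabs

    def __init__(self):
        self.input_pool = b"\x00" * 128   # large state to store noise
        self.output_state = b"\x00" * 32
        self.input_entropy_bits = 0

    def mix_input(self, noise: bytes, est_bits: int) -> None:
        # Fold noise into the front of the input pool, keep the rest.
        mixed = hashlib.sha256(self.input_pool + noise).digest()
        self.input_pool = mixed + self.input_pool[32:]
        self.input_entropy_bits += est_bits

    def maybe_reseed(self) -> bool:
        # Catastrophic reseed: refuse small transfers, so an attacker who
        # has compromised the output state can't guess the reseed input
        # a few bits at a time.
        if self.input_entropy_bits < self.RESEED_THRESHOLD_BITS:
            return False
        self.output_state = hashlib.sha256(
            self.output_state + self.input_pool).digest()
        self.input_entropy_bits -= self.RESEED_THRESHOLD_BITS
        return True

rng = TwoPoolRng()
rng.mix_input(b"timing-noise", est_bits=64)
print(rng.maybe_reseed())   # False: 64 < 128, refuse a small transfer
rng.mix_input(b"more-noise", est_bits=64)
print(rng.maybe_reseed())   # True: 128 bits moved in one atomic chunk
```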

> If you have a properly constructed RNG, as soon as it's been fed
> enough entropy it is secure (at least for the next 2^64 bits or so).
> The notion of "using up entropy" is not meaningful for a good
> generator.  See Bruce Schneier's "Yarrow" paper for the details.

A lot of this depends on whether or not you trust your crypto
primitives or not.  The /dev/random driver was the very first OS-based
random generator, and back then, export restrictions were still a
thing (which is why we only used MD5 and SHA-1 as crypto primitives),
and our cryptoanalytic understanding of what makes for a good crypto
hash or encryption algorithm was quite limited.  So trying to
accumulate a large pool of entropy is a good thing,
because even if there was a minor weakness in the crypto hash (and for
a while we were using MD5), relying on the adversary not being able to
guess all of the environmental noise harvested by the kernel would
cover for a lot of sins.

Remember, in the kernel we have access to a large amount of
environmental noise, so it makes a lot of sense to take advantage of
that as much as possible.  So by having a huge state pool for the
/dev/random entropy pool, we can harvest and store as much of that
environmental noise as possible.  This buys us a large amount of
safety margin, which is a good thing because somewhere there might be
some Linux 2.0 or Linux 2.2 based router sitting in someone's home
where /dev/random is using MD5.  Those ancient kernels are probably
riddled with zero-day security holes, but the safety margin of using a
large entropy pool is such that even though there are many known
attacks against MD5, I'm actually pretty confident that the large
state pool mitigates the weakness of MD5 as used by /dev/random and
/dev/urandom.  At the very least, it will be much easier for the NSA
to use some other zero-day to attack the said router with the ancient
kernel, well before they try to reverse engineer its /dev/urandom
output.  :-)

- Ted


Re: Antw: Re: [kernel-hardening] Re: [PATCH v4 06/13] iscsi: ensure RNG is seeded before use

2017-07-05 Thread Theodore Ts'o
On Wed, Jul 05, 2017 at 09:03:43AM +0200, Ulrich Windl wrote:
> > Note, during the development of my /dev/random implementation, I added the 
> > getrandom-like blocking behavior to /dev/urandom (which is the equivalent to
> > Jason's patch except that it applies to user space). The boot process locked
> 
> I thought reads from urandom never block by definition. An older manual page
> (man urandom) also says: "A  read  from  the  /dev/urandom device will not
> block waiting for more entropy."

As I said in my original message, I *tried* this as an experiment.
Because lots of security-obsessed people were disputing my
intelligence, my judgement, and in some cases my paternity, because I
wouldn't change /dev/urandom not to block.

So I did the experiment so I could give them hard data about why we
couldn't go down that path.

> > up since systemd wanted data from /dev/urandom while it processed the 
> > initramfs. As it did not get any, the boot process did not commence that 
> > could 
> > deliver new events to be picked up by the RNG.

And indeed, making this change brick'ed at least one version of Ubuntu
and one version of CeroWRT, as reported by the kernel's 0-day testing
system.  As a result, this patch (which was always a proof of concept,
not anything I thought had any chance of going upstream), was dropped.

Since in the kernel, We Do Not Break Backwards Compatibility, this is
why we have a new interface --- getrandom(2) --- instead of changing
an existing interface.  (Well, there were multiple good reasons for
getrandom, but this was definitely one of them.)

- Ted


Re: [kernel-hardening] [PATCH] random: warn when kernel uses unseeded randomness

2017-06-21 Thread Theodore Ts'o
On Wed, Jun 21, 2017 at 04:06:49PM +1000, Michael Ellerman wrote:
> All the distro kernels I'm aware of have DEBUG_KERNEL=y.
> 
> Where all includes at least RHEL, SLES, Fedora, Ubuntu & Debian.
> 
> So it's still essentially default y.
> 
> Emitting *one* warning by default would be reasonable. That gives users
> who are interested something to chase, they can then turn on the option
> to get the full story.
> 
> Filling the dmesg buffer with repeated warnings is really not helpful.

I agree completely with all of this.  The following patch replaces the
current topmost patch on the random.git tree:


>From 25b683ee9bd5536807f813efbd19809333461f89 Mon Sep 17 00:00:00 2001
From: Theodore Ts'o <ty...@mit.edu>
Date: Thu, 8 Jun 2017 04:16:59 -0400
Subject: [PATCH] random: suppress spammy warnings about unseeded randomness

Unfortunately, on some models of some architectures getting a fully
seeded CRNG is extremely difficult, and so this can result in dmesg
getting spammed for a surprisingly long time.  This is really bad from
a security perspective, and so architecture maintainers need to do
what they can to get the CRNG seeded sooner after the system is
booted.  However, users can't do anything actionable to address this,
and spamming the kernel log will just annoy people.

For developers who want to work on improving this situation,
CONFIG_WARN_UNSEEDED_RANDOM has been renamed to
CONFIG_WARN_ALL_UNSEEDED_RANDOM.  By default the kernel will always
print the first use of unseeded randomness.  This way, hopefully the
security obsessed will be happy that there is _some_ indication when
the kernel boots there may be a potential issue with that architecture
or subarchitecture.  To see all uses of unseeded randomness,
developers can enable CONFIG_WARN_ALL_UNSEEDED_RANDOM.

Signed-off-by: Theodore Ts'o <ty...@mit.edu>
---
 drivers/char/random.c | 45 ++---
 lib/Kconfig.debug | 24 ++--
 2 files changed, 48 insertions(+), 21 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index fa5bbd5a7ca0..7405c914bbcf 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1466,6 +1466,30 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf,
return ret;
 }
 
+#define warn_unseeded_randomness(previous) \
+   _warn_unseeded_randomness(__func__, (void *) _RET_IP_, (previous))
+
+static void _warn_unseeded_randomness(const char *func_name, void *caller,
+ void **previous)
+{
+#ifdef CONFIG_WARN_ALL_UNSEEDED_RANDOM
+   const bool print_once = false;
+#else
+   static bool print_once __read_mostly;
+#endif
+
+   if (print_once ||
+   crng_ready() ||
+   (previous && (caller == READ_ONCE(*previous))))
+   return;
+   WRITE_ONCE(*previous, caller);
+#ifndef CONFIG_WARN_ALL_UNSEEDED_RANDOM
+   print_once = true;
+#endif
+   pr_notice("random: %s called from %pF with crng_init=%d\n",
+ func_name, caller, crng_init);
+}
+
 /*
  * This function is the exported kernel interface.  It returns some
  * number of good random numbers, suitable for key generation, seeding
@@ -1479,12 +1503,9 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf,
 void get_random_bytes(void *buf, int nbytes)
 {
__u8 tmp[CHACHA20_BLOCK_SIZE];
+   static void *previous;
 
-#ifdef CONFIG_WARN_UNSEEDED_RANDOM
-   if (!crng_ready())
-   printk(KERN_NOTICE "random: %pF get_random_bytes called "
-  "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
-#endif
+   warn_unseeded_randomness(&previous);
trace_get_random_bytes(nbytes, _RET_IP_);
 
while (nbytes >= CHACHA20_BLOCK_SIZE) {
@@ -2064,6 +2085,7 @@ u64 get_random_u64(void)
bool use_lock = READ_ONCE(crng_init) < 2;
unsigned long flags = 0;
struct batched_entropy *batch;
+   static void *previous;
 
 #if BITS_PER_LONG == 64
if (arch_get_random_long((unsigned long *)&ret))
@@ -2074,11 +2096,7 @@ u64 get_random_u64(void)
return ret;
 #endif
 
-#ifdef CONFIG_WARN_UNSEEDED_RANDOM
-   if (!crng_ready())
-   printk(KERN_NOTICE "random: %pF get_random_u64 called "
-  "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
-#endif
+   warn_unseeded_randomness(&previous);
 
batch = &get_cpu_var(batched_entropy_u64);
if (use_lock)
@@ -2102,15 +2120,12 @@ u32 get_random_u32(void)
bool use_lock = READ_ONCE(crng_init) < 2;
unsigned long flags = 0;
struct batched_entropy *batch;
+   static void *previous;
 
if (arch_get_random_int(&ret))
return ret;
 
-#ifdef CONFIG_WARN_UNSEEDED_RANDOM
-   if (!crng_ready())
-   printk(KERN_N

Re: [PATCH] random: silence compiler warnings and fix race

2017-06-20 Thread Theodore Ts'o
On Tue, Jun 20, 2017 at 11:49:07AM +0200, Jason A. Donenfeld wrote:
> Uh, talk about a totally unnecessary punch... In case my last email
> wasn't clear, I fully recognize that `default y` is a tad too extreme,
> which is why from one of the earliest revisions in this series, I
> moved directly to the compromise solution (`depends DEBUG_KERNEL`)
> without even waiting for people to complain first.

The punch was in response to this statement, which I personally found
fairly infuriating:

>> I more or less agree with you that we should just turn this on for all
>> users and they'll just have to live with the spam and report odd
>> entries, and overtime we'll fix all the violations.

There seems to be a fundamental misapprehension that it will be easy
to "fix all the violations".  For certain hardware types, this is
not easy, and the "eh, let them get spammed until we get around to
fixing it" attitude is precisely what I was pushing back against.

There's a certain amount of privilege for those of us who are using
x86 systems with built-in hardware random number generators, and cycle
counters, where the problem is much easier to solve.  But for other
platforms, it really, REALLY isn't that easy to fix.

One solution that might be acceptable is to simply print a *single*
warning, the first time some piece of kernel code tries to get
randomness before the CRNG is initialized.  And that's it.  If it's
only a single dmesg line, then we probably don't need to hide it
behind a #ifdef.  That might satisfy the security-obsessed who want to
rub users' noses in the face that their kernel is doing something
potentially insecure and there is nothing they can do about it.  But
since it's also a single line in the syslog, it's not actively
annoying.

The #ifdef will allow the current behaviour where we suppress
duplicates, but we warn for every attempt to get randomness.  That one
we can default to 'n', since it will only be used by people who are
trying to audit call sites to see if the real fix is to switch the
call to prandom_u32, because the use case really wasn't
security/crypto sensitive.

As I have said before, ultimately I think the only real solution to
this whole mess is to allow the bootloader to read entropy from the
boot device (or maybe some NVRAM or battery-backed memory), which is
then overwritten as early as possible in the boot process with new
random data.  This is what OpenBSD does, but OpenBSD has a much easier
job because they only have to support a small set of architectures.
We will need to do this for each bootloader that is used by Linux,
which is a pretty large set.  But ultimately, it solves *all* the
problems, including getting entropy for KASLR, which needs the
randomness super-early in the boot process, long before we have any
hope of initializing the entropy pool from environmental noise.

So personally, I think this focus on trying to warn/spam users is not
particularly useful.  If we can mute the warnings to those people who
want to play whack-a-mole, it's not harmful, but for those who think
that futzing with get_random_* calls is the right approach, personally
I'm really not convinced.  Of course, the kernel is a volunteer
project, so ultimately all a maintainer can do is to say no to
patches, and not command people to work on things that he or she
wishes.  I *can* try to persuade people about what the right way to go
is, because doing something that involves boot loaders is going to
require a huge amount of effort from many people.  It's certainly not
anything I or anyone else can do by him or herself.

- Ted




Re: [PATCH] random: silence compiler warnings and fix race

2017-06-20 Thread Theodore Ts'o
On Tue, Jun 20, 2017 at 10:53:35AM +0200, Jason A. Donenfeld wrote:
> > Suppressing all messages for all configurations cast a wider net than
> > necessary. Configurations that could potentially be detected and fixed
> > likely will go unnoticed. If the problem is not brought to light, then
> > it won't be fixed.
> 
> I more or less agree with you that we should just turn this on for all
> users and they'll just have to live with the spam and report odd
> entries, and overtime we'll fix all the violations.

Fix all the problems *how*?  If you are on an old system which doesn't
have a hardware random number generator, which doesn't have a high
resolution cycle counter, and may not have a lot of entropy easily
harvestable from the environment, there may not be a lot you can do.
Sure, you can pretend that the cache (which by the way is usually
deterministic) is ***so*** complicated that no one can figure it out,
and essentially pretend that you have entropy when you probably don't;
that just simply becomes a different way of handwaving and suppressing
the warning messages.

> But I think there's another camp that would mutiny in the face of this
> kind of hubris.

Blocking the boot for hours and hours until we have enough entropy to
initialize the CRNG is ***not*** an acceptable way of making the
warning messages go away.  Do that and the users **will** mutiny.

It's this sort of attitude which is why Linus has in the past said
that security people are sometimes insane.

- Ted


Re: [PATCH] random: silence compiler warnings and fix race

2017-06-20 Thread Theodore Ts'o
On Mon, Jun 19, 2017 at 10:57:18PM +0200, Jason A. Donenfeld wrote:
> 
> With rc6 already released and rc7 coming up, I'd really appreciate you
> stepping in here and either ACKing the above commit, or giving your
> two cents about it in case I need to roll something different.

I actually had set up an earlier version of your patch on Saturday
while I was in Beijing.  (Like Linus, I'm attending the LinuxCon China
conference Monday and Tuesday.)  I had even created the signed tag,
but I didn't send the pull request to Linus because I was waiting to
see about how discussions over the locking strategy and the spammy log
messages on PowerPC was going to get resolved.

I've since respun the commit to reflect your newer patch (see the
random_for_linus_stable tag on random.git) and rebased the dev branch
on top of that.  Please take a look and comment.

The other open issue I want to resolve before sending a pull request
this week is whether we want to change the default for
CONFIG_WARN_UNSEEDED_RANDOM so that the answer is 'n'.  It *is* spammy
for PowerPC, because they aren't getting their CRNG initialized
quickly enough, so several userspace processes are getting
fork/exec'ed with an uninitialized CRNG.  That being said, it is a
valid warning because it means that the initial stack canary for the
first couple of PowerPC processes are being created without a fully
initialized CRNG, which may mean that an attacker might be able to
circumvent the stack canary on the first couple of processes.  So that
could potentially be a real security issue on Power.  OTOH, most Power
users aren't going to be able to do anything about the fact the stack
canaries of the system daemons started during early boot don't have
strong randomness, so perhaps we should disable the warning by
default.

Opinions?

- Ted


Re: [kernel-hardening] Re: [PATCH v4 13/13] random: warn when kernel uses unseeded randomness

2017-06-18 Thread Theodore Ts'o
On Thu, Jun 15, 2017 at 01:59:43PM +0200, Stephan Müller wrote:
> I would think that the issue regarding the logging is relevant for 
> cryptographic use cases or use cases requiring strong random numbers only. 
> Only those use cases should be fixed eventually to wait for a fully seeded 
> DRNG.
> 
> The logged messages you present here indicate use cases where no strong 
> security is required. It looks like that the logs show ASLR related use of 
> random numbers. Those do not require a fully seeded ChaCha20 DRNG.

I suspect there is a range of opinions about whether or not ASLR
requires strongly secure random numbers.  It seems pretty clear
that if we proposed using prandom_u32 for ASLR, people would object
very strongly indeed, since that would make it trivially easy for
attackers to circumvent ASLR protections.

> IMHO, users using the get_random_u64 or get_random_u32 are use cases that do 
> not require a fully seeded DRNG thus do not need a cryptographically strong 
> random number. Hence, I would think that the logging should be removed from 
> get_random_u32/u64.

You are effectively proposing that there ought to be a middle range of
security between prandom_u32, get_random_u32/get_random_u64 and
get_random_bytes().  I think that's going to lead to all sorts of
complexity and bugs from people not understanding when they should use
get_random_u32 vs get_random_bytes versus prandom_u32.  And then we'll
end up needing to audit all of the callsites for get_random_u32() so
they don't violate this new usage rule that you are proposing.

   - Ted


Re: Crypto Fixes for 4.12

2017-06-16 Thread Theodore Ts'o
On Thu, Jun 15, 2017 at 11:01:18AM -0400, David Miller wrote:
> As a side note, ext4 does something similar with a private
> implementation, but it doesn't use something the evaluates to an
> alloca.  Instead it uses a fixed 4-byte size for the shash context
> value in the on-stack declaration.

In ext4's case, we're doing it inside an inline function, and then
using the "return" value from inside the calling function.  Assuming
that gcc actually inlines the function, are we in danger of tripping
over the bug?

- Ted


Re: [PATCH v4 13/13] random: warn when kernel uses unseeded randomness

2017-06-08 Thread Theodore Ts'o
On Tue, Jun 06, 2017 at 07:48:04PM +0200, Jason A. Donenfeld wrote:
> This enables an important dmesg notification about when drivers have
> used the crng without it being seeded first. Prior, these errors would
> occur silently, and so there hasn't been a great way of diagnosing these
> types of bugs for obscure setups. By adding this as a config option, we
> can leave it on by default, so that we learn where these issues happen,
> in the field, will still allowing some people to turn it off, if they
> really know what they're doing and do not want the log entries.
> 
> However, we don't leave it _completely_ by default. An earlier version
> of this patch simply had `default y`. I'd really love that, but it turns
> out, this problem with unseeded randomness being used is really quite
> present and is going to take a long time to fix. Thus, as a compromise
> between log-messages-for-all and nobody-knows, this is `default y`,
> except it is also `depends on DEBUG_KERNEL`. This will ensure that the
> curious see the messages while others don't have to.
> 
> Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>

This patch is pretty spammy.  On my KVM test kernel:

random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0
random: neigh_hash_alloc+0x77/0x8f get_random_u32 called with crng_init = 0
random: neigh_hash_alloc+0x77/0x8f get_random_u32 called with crng_init = 0
random: neigh_hash_alloc+0x77/0x8f get_random_u32 called with crng_init = 0
random: neigh_hash_alloc+0x77/0x8f get_random_u32 called with crng_init = 0
random: rt_genid_init+0x24/0x2f get_random_u32 called with crng_init = 0
random: bucket_table_alloc+0x15f/0x190 get_random_u32 called with crng_init = 0

At the very least we probably should do a logical "uniq" on the output
(e.g., if we have complained about the previous callsite, don't whinge
about it again).
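The "logical uniq" suggested here --- and implemented in the later
patch above via a per-call-site "previous" pointer --- amounts to
suppressing a warning when the caller is the same as last time.  A
minimal Python model of that suppression logic (all names are
illustrative; the kernel version compares return addresses, not
strings):

```python
def make_warner():
    """Suppress repeated warnings from the same call site, mirroring the
    'previous' pointer check in _warn_unseeded_randomness()."""
    previous = None
    emitted = []

    def warn_unseeded(caller: str, crng_ready: bool) -> None:
        nonlocal previous
        if crng_ready or caller == previous:
            return                      # seeded, or duplicate call site
        previous = caller               # remember who we warned about
        emitted.append("random: %s called with crng_init=0" % caller)

    return warn_unseeded, emitted

warn, log = make_warner()
for _ in range(30):
    warn("bucket_table_alloc+0x15f", crng_ready=False)  # 30 calls, 1 line
warn("neigh_hash_alloc+0x77", crng_ready=False)         # new site, 1 line
print(len(log))   # 2
```

Note that, like the kernel's single-pointer scheme, this re-warns if
two call sites alternate; that trade-off keeps the state to one word
per batch of callers instead of an unbounded set.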

- Ted

commit 9d9035bc6d7871a73d7f9aada4e63cb190874a68
Author: Theodore Ts'o <ty...@mit.edu>
Date:   Thu Jun 8 04:16:59 2017 -0400

random: suppress duplicate crng_init=0 warnings

Suppress duplicate CONFIG_WARN_UNSEEDED_RANDOM warnings to avoid
spamming dmesg.

Signed-off-by: Theodore Ts'o <ty...@mit.edu>

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 798f353f0d3c..3bdeef13afda 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1481,9 +1481,14 @@ void get_random_bytes(void *buf, int nbytes)
__u8 tmp[CHAC

Re: [PATCH v4 12/13] bluetooth/smp: ensure RNG is properly seeded before ECDH use

2017-06-07 Thread Theodore Ts'o
On Tue, Jun 06, 2017 at 07:48:03PM +0200, Jason A. Donenfeld wrote:
> This protocol uses lots of complex cryptography that relies on securely
> generated random numbers. Thus, it's important that the RNG is actually
> seeded before use. Fortunately, it appears we're always operating in
> process context (there are many GFP_KERNEL allocations and other
> sleeping operations), and so we can simply demand that the RNG is seeded
> before we use it.
> 
> We take two strategies in this commit. The first is for the library code
> that's called from other modules like hci or mgmt: here we just change
> the call to get_random_bytes_wait, and return the result of the wait to
> the caller, along with the other error codes of those functions like
> usual. Then there's the SMP protocol handler itself, which makes many
> many many calls to get_random_bytes during different phases. For this,
> rather than have to change all the calls to get_random_bytes_wait and
> propagate the error result, it's actually enough to just put a single
> call to wait_for_random_bytes() at the beginning of the handler, to
> ensure that all the subsequent invocations are safe, without having to
> actually change them. Likewise, for the random address changing
> function, we'd rather know early on in the function whether the RNG
> initialization has been interrupted, rather than later, so we call
> wait_for_random_bytes() at the top, so that later on the call to
> get_random_bytes() is acceptable.

Do we need to do all of this?  Bluetooth folks, is it fair to assume
that hci_power_on() has to be called before any bluetooth functions
can be done?  If so, adding a wait_for_random_bytes() in
hci_power_on() might be all that is necessary.

- Ted


Re: [PATCH v4 11/13] net/route: use get_random_int for random counter

2017-06-07 Thread Theodore Ts'o
On Tue, Jun 06, 2017 at 07:48:02PM +0200, Jason A. Donenfeld wrote:
> Using get_random_int here is faster, more fitting of the use case, and
> just as cryptographically secure. It also has the benefit of providing
> better randomness at early boot, which is when many of these structures
> are assigned.
> 
> Also, semantically, it's not really proper to have been assigning an
> atomic_t in this way before, even if in practice it works fine.
> 
> Signed-off-by: Jason A. Donenfeld 
> Cc: David Miller 

Applied to the dev branch of the random.git branch, thanks.

- Ted


Re: [PATCH v4 10/13] net/neighbor: use get_random_u32 for 32-bit hash random

2017-06-07 Thread Theodore Ts'o
On Tue, Jun 06, 2017 at 07:48:01PM +0200, Jason A. Donenfeld wrote:
> Using get_random_u32 here is faster, more fitting of the use case, and
> just as cryptographically secure. It also has the benefit of providing
> better randomness at early boot, which is when many of these structures
> are assigned.
> 
> Signed-off-by: Jason A. Donenfeld 
> Cc: David Miller 

Applied to the random.git dev branch, thanks.

  - Ted


Re: [PATCH v4 09/13] rhashtable: use get_random_u32 for hash_rnd

2017-06-07 Thread Theodore Ts'o
On Tue, Jun 06, 2017 at 07:48:00PM +0200, Jason A. Donenfeld wrote:
> This is much faster and just as secure. It also has the added benefit of
> probably returning better randomness at early-boot on systems with
> architectural RNGs.
> 
> Signed-off-by: Jason A. Donenfeld 
> Cc: Thomas Graf 
> Cc: Herbert Xu 

Thanks, applied to the random.git dev branch.

  - Ted


Re: [kernel-hardening] [PATCH v4 07/13] ceph: ensure RNG is seeded before using

2017-06-07 Thread Theodore Ts'o
On Tue, Jun 06, 2017 at 07:47:58PM +0200, Jason A. Donenfeld wrote:
> Ceph uses the RNG for various nonce generations, and it shouldn't accept
> using bad randomness. So, we wait for the RNG to be properly seeded. We
> do this by calling wait_for_random_bytes() in a function that is
> certainly called in process context, early on, so that all subsequent
> calls to get_random_bytes are necessarily acceptable.
> 
> Signed-off-by: Jason A. Donenfeld 
> Cc: Ilya Dryomov 
> Cc: "Yan, Zheng" 
> Cc: Sage Weil 

Thanks, applied to the dev branch.

- Ted


Re: [PATCH v4 06/13] iscsi: ensure RNG is seeded before use

2017-06-07 Thread Theodore Ts'o
On Tue, Jun 06, 2017 at 07:47:57PM +0200, Jason A. Donenfeld wrote:
> It's not safe to use weak random data here, especially for the challenge
> response randomness. Since we're always in process context, it's safe to
> simply wait until we have enough randomness to carry out the
> authentication correctly.
> 
> While we're at it, we clean up a small memleak during an error
> condition.

What was the testing that was done for this commit?  It looks safe,
but I'm unfamiliar enough with how the iSCSI authentication works that
I'd prefer getting an Acked-by from the iSCSI maintainers or,
alternatively, information about how to kick off some kind of
automated test suite a la xfstests for file systems.

Thanks,

- Ted


Re: [kernel-hardening] [PATCH v4 05/13] crypto/rng: ensure that the RNG is ready before using

2017-06-07 Thread Theodore Ts'o
On Tue, Jun 06, 2017 at 07:47:56PM +0200, Jason A. Donenfeld wrote:
> Otherwise, we might be seeding the RNG using bad randomness, which is
> dangerous. The one use of this function from within the kernel -- not
> from userspace -- is being removed (keys/big_key), so that call site
> isn't relevant in assessing this.

The use in keys/big_key is _being_ removed, so this commit is
dependent on that commit landing, correct?  (Order matters, because
otherwise we could screw up someone doing a kernel bisect, causing
their kernel to deadlock during boot while they are trying to track
down an unrelated problem.)

   - Ted


Re: [PATCH v4 04/13] security/keys: ensure RNG is seeded before use

2017-06-07 Thread Theodore Ts'o
On Tue, Jun 06, 2017 at 07:47:55PM +0200, Jason A. Donenfeld wrote:
> -static inline void key_alloc_serial(struct key *key)
> +static inline int key_alloc_serial(struct key *key)

> @@ -170,7 +168,7 @@ static inline void key_alloc_serial(struct key *key)
>   rb_insert_color(&key->serial_node, &key_serial_tree);
>  
>   spin_unlock(&key_serial_lock);
> - return;
> + return 0;
>  
>   /* we found a key with the proposed serial number - walk the tree from
>* that point looking for the next unused serial number */

> @@ -314,7 +312,9 @@ struct key *key_alloc(struct key_type *type, const char 
> *desc,
>  
>   /* publish the key by giving it a serial number */
>   atomic_inc(&user->nkeys);
> - key_alloc_serial(key);
> + ret = key_alloc_serial(key);
> + if (ret < 0)
> + goto security_error;
>  
>  error:
>   return key;

I'm guessing you changed key_alloc_serial() to return an int back when
you were thinking that you might use get_random_bytes_wait(), which
could return -ERESTARTSYS.

Now that you're not doing this, but using get_random_u32() instead,
there's no point to change the function signature of
key_alloc_serial() and add an error check in key_alloc() that will
never fail, right?  That's just adding a dead code path, which the
compiler can probably optimize away, but why make the code slightly
harder to read than necessary?

- Ted


Re: [kernel-hardening] [PATCH v4 08/13] cifs: use get_random_u32 for 32-bit lock random

2017-06-07 Thread Theodore Ts'o
On Tue, Jun 06, 2017 at 07:47:59PM +0200, Jason A. Donenfeld wrote:
> Using get_random_u32 here is faster, more fitting of the use case, and
> just as cryptographically secure. It also has the benefit of providing
> better randomness at early boot, which is sometimes when this is used.
> 
> Signed-off-by: Jason A. Donenfeld 
> Cc: Steve French 

There's a bigger problem here, which is that cifs_lock_secret is a
32-bit value which is being used to obscure flock->fl_owner before it
is sent across the wire.  But flock->fl_owner is a pointer to the
struct file, so on a 64-bit architecture the high 32 bits of a kernel
pointer are being exposed to anyone using tcpdump.  (Oops, I'm showing
my age; I guess all the cool kids are using Wireshark these days.)

Worse, the obscuring is being done using XOR.  How an active attacker
might be able to trivially reverse engineer the 32-bit "secret" is
left as an exercise to the reader.  The bottom line is if the goal is
to hide the memory location of a struct file from an attacker,
cifs_lock_secret is about as useful as a TSA agent doing security
theatre at an airport.  Which is to say, it makes the civilians feel
good.  :-)

BTW, Jason, this is why it's *good* to audit all of the uses of
get_random_bytes().  It only took me about 30 seconds in the first
patch in your series that changes a caller of get_random_bytes(), and
look what I was able to find by just taking a quick look.  Not waiting
for the CRNG to be fully initialized is the *least* of its problems.

Anyway, I'll include this commit in the dev branch of the random tree,
since it's not going to make things worse.

- Ted


Re: [kernel-hardening] [PATCH v4 03/13] random: add get_random_{bytes,u32,u64,int,long,once}_wait family

2017-06-07 Thread Theodore Ts'o
On Tue, Jun 06, 2017 at 07:47:54PM +0200, Jason A. Donenfeld wrote:
> These functions are simple convenience wrappers that call
> wait_for_random_bytes before calling the respective get_random_*
> function.
> 
> Signed-off-by: Jason A. Donenfeld 

Thanks, applied to the dev branch.

- Ted


Re: [PATCH v4 02/13] random: add synchronous API for the urandom pool

2017-06-07 Thread Theodore Ts'o
On Tue, Jun 06, 2017 at 07:47:53PM +0200, Jason A. Donenfeld wrote:
> This enables users of get_random_{bytes,u32,u64,int,long} to wait until
> the pool is ready before using this function, in case they actually want
> to have reliable randomness.
> 
> Signed-off-by: Jason A. Donenfeld 

Thanks, applied for the dev branch of random.git.  (I changed the
patch summary slightly; it now reads: "random: add
wait_for_random_bytes() API").

- Ted


Re: [PATCH v4 01/13] random: invalidate batched entropy after crng init

2017-06-07 Thread Theodore Ts'o
On Tue, Jun 06, 2017 at 07:47:52PM +0200, Jason A. Donenfeld wrote:
> It's possible that get_random_{u32,u64} is used before the crng has
> initialized, in which case, its output might not be cryptographically
> secure. For this problem, directly, this patch set is introducing the
> *_wait variety of functions, but even with that, there's a subtle issue:
> what happens to our batched entropy that was generated before
> initialization? Prior to this commit, it'd stick around, supplying bad
> numbers. After this commit, we force the entropy to be re-extracted
> after each phase of the crng has initialized.
> 
> In order to avoid a race condition with the position counter, we
> introduce a simple rwlock for this invalidation. Since it's only during
> this awkward transition period, after things are all set up, we stop
> using it, so that it doesn't have an impact on performance.
> 
> This should probably be backported to 4.11.
> 
> (Also: adding my copyright to the top. With the patch series from
> January, this patch, and then the ones that come after, I think there's
> a relevant amount of code in here to add my name to the top.)
> 
> Signed-off-by: Jason A. Donenfeld 
> Cc: Greg Kroah-Hartman 

Thanks, applied.  This will be on the for_stable that I will be
sending to Linus sometime during 4.12-rcX.

- Ted


Re: [kernel-hardening] Re: [PATCH v3 04/13] crypto/rng: ensure that the RNG is ready before using

2017-06-07 Thread Theodore Ts'o
On Wed, Jun 07, 2017 at 07:00:17AM +0200, Stephan Müller wrote:
> > On that same idea, one could add an early_initramfs handler for entropy
> > data.
> 
> Any data that comes from outside during the boot process, be it some NVRAM 
> location, the /var/lib...seed file for /dev/random or other approaches are 
> viewed by a number of folks to have zero bits of entropy.

The Open BSD folks would disagree with you.  They've designed their
whole system around saving entropy at shutdown, reading it as early as
possible by the bootloader, and then as soon as possible after the
reboot, to overwrite and reinitialize the entropy seed stored on disk,
so that the seed is never reused even on a reboot after a crash.  They
consider this good enough to assume that their CRNG is *always*
strongly initialized.

I'll let you have that discussion/argument with Theo de Raadt, though.
Be warned that he has opinions about security that are almost
certainly as strong (and held with the same level of certainty) as a
certain Brad Spengler...

- Ted


Re: [kernel-hardening] Re: [PATCH v3 04/13] crypto/rng: ensure that the RNG is ready before using

2017-06-06 Thread Theodore Ts'o
On Tue, Jun 06, 2017 at 07:19:10PM -0300, Henrique de Moraes Holschuh wrote:
> On that same idea, one could add an early_initramfs handler for entropy
> data.
> 
> One could also ensure the kernel command line is used to feed some
> entropy for the CRNG init (for all I know, this is already being done),
> and add a do-nothing parameter (that gets sanitized away after use) that
> can be used to add entropy to that command line.  Something like
> random.someentropy=.  This
> might be more generic than the x86 boot protocol reserved space...
> 
> On the better bootloaders, an initramfs segment can be loaded
> independently (and you can have as many as required), which makes an
> early_initramfs a more palatable vector to inject large amounts of
> entropy into the next boot than, say, modifying the kernel image
> directly at every boot/shutdown to stash entropy in there somewhere.
> 
> These are all easy to implement, I just don't know how *useful* they
> would really be.

+1

The kernel side for all of these are relatively easy.  The hard part
is to do an end-to-end solution.  Which means the changes to the
bootloader, the initramfs tools, etc.

As I recall one issue with doing things in the initramfs scripts is
that for certain uses of randomness (such as the stack canary), it's
hard to change the valid canary after it's been used for a userspace
process, since you might have to support more than one valid canary
value until all of the processes using the original (not necessarily
cryptographically initialized) stack canary has exited.  So while the
upside is that it might not require any kernel changes to inject the
randomness into the non-blocking pool via one of the initramfs
scripts, from an overall simplicity for the kernel users, it's nice if
we can initialize the CRNG as early as possible --- in the ideal world,
even before KASLR has been initialized, which means really
early in the boot process.

That's the advantage of doing it as part of the Linux/x86 boot
protocol, since it's super simple to get at the entropy seed.  It
doesn't require parsing the kernel command-line.  The tradeoff is that
it is x86 specific, and the ARM, ppcle folks, etc. would have to
implement their own way of injecting entropy super-early into the boot
process.

One advantage of doing this somewhat harder thing is that **all** of
the issues around re-architecting a new rng_init initcall level, and
dealing with module load order, etc., disappear if we can figure out a
way to initialize the entropy pre-KASLR.  Yes, it's harder; but it
solves all of the issues at one fell swoop.

- Ted


Re: [kernel-hardening] Re: [PATCH v3 04/13] crypto/rng: ensure that the RNG is ready before using

2017-06-06 Thread Theodore Ts'o
On Tue, Jun 06, 2017 at 02:34:43PM +0200, Jason A. Donenfeld wrote:
> 
> Yes, I agree whole-heartedly.  A lot of people have proposals for
> fixing the direct idea of entropy gathering, but for whatever reason,
> Ted hasn't merged stuff. I think Stephan (CCd) rewrote big critical
> sections of the RNG, called LRNG, and published a big paper for peer
> review and did a lot of cool engineering, but for some reason this
> hasn't been integrated. I look forward to movement on this front in
> the future, if it ever happens. Would be great.

So it's not clear what you mean by Stephan's work.  It can be
separated into multiple pieces; one is simply using a mechanism which
can be directly mapped to NIST's DRBG framework.  I don't believe this
actually adds any real security per se, but it can make it easier to
get certification for people who care about getting FIPS
certification.  Since I've seen a lot of snake oil and massive waste
of taxpayer and industry dollars by FIPS certification firms, it's not
a thing I find particularly compelling.

The second bit is "Jitter Entropy".  The problem I have with that is
there isn't any convincing explanation about why it can't be predicted
to some degree of accuracy with someone who understands what's going
on with Intel's cache architecture.  (And this isn't just me, I've
talked to people who work at Intel and they are at best skeptical of
the whole idea.)

To be honest, there is a certain amount of this which is true with
harvesting interrupt timestamps, since for at least some interrupts
(in the worst case, the timer interrupt, especially on SOC's where all
of the clocks are generated from a single master oscillator) at least
some of the unpredictability is due to fact that the attacker needs to
understand what's going on with cache hits and misses, and that in
turn is impacted by compiler code generation, yadda, yadda, yadda.

The main thing then with trying to get entropy from sampling from the
environment is to have a mixing function that you trust, and that you
capture enough environmental data which hopefully is not available to
the attacker.  So for example, radio strength measurements from the
WiFi data is not necessarily secret, but hopefully the information of
whether the cell phone is on your desk, or in your knapsack, either on
the desk, or under the desk, etc., is not available to the analyst
sitting in Fort Meade (or Beijing, if you trust the NSA but not the
Ministry of State Security :-).

The judgement call is deciding when you've gathered enough
environmental data (whether it is from CPU timing and cache misses if
you are using Jitter Entropy, or interrupt timing, etc.), i.e., when
you have enough unpredictable data to protect you against the
attacker.  We try to make some guesses of when we've gathered a
"bit" of entropy, but it's important to be humble here.  We don't have
a theoretical framework for *any* of this, so the way we gather
metrics is really not all that scientific.

We also need to be careful not to fall into the trap of wishful
thinking.  Yes, if we can say that the CRNG is fully initialized
before the init scripts are started, or even very early in the
initcall, then we can say yay!  Problem solved!!  But just because
someone *claims* that JitterEntropy will solve the problem, doesn't
necessarily mean it really does.  I'm not accusing Stephan of trying
to deliberately sell snake oil; just that at least some people have
looked at it dubiously, and I would at least prefer to gather a lot
more environmental noise, and be more conservative before saying that
we're sure the CRNG is fully initialized.


The other approach is to find a way to have initialized "seed" entropy
which we can count on at every boot.  The problem is that this is very
much dependent on how the bootloader works.  It's easy to say "store
it in the kernel", but where the kernel is stored varies greatly from
architecture to architecture.  In some cases, the kernel can stored in
ROM, where it can't be modified at all.

It might be possible, for example, to store a cryptographic key in a
UEFI boot-services variable, where the key becomes inaccessible after
the boot-time services terminate.  But you also need either a reliable
time-of-day clock, or a reliable counter which is incremented each
time the system boots, and which can't be messed with by an
attacker, or trivially reset by a clueless user/sysadmin.

Or maybe we can have a script that is run at shutdown and boot-up that
stashes 32 bytes of entropy in a reserved space accessible to GRUB,
and which GRUB then passes to the kernel using an extension to the
Linux/x86 Boot Protocol.  (See Documentation/x86/boot.txt)
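The save/restore discipline the OpenBSD folks use, and the shutdown/boot script idea above, can be sketched as a toy shell script. The path and the 32-byte seed size are illustrative assumptions, not what any real distribution or bootloader uses:

```shell
#!/bin/sh
# Toy sketch of stashing a seed across reboots.
SEED=./random-seed.bin    # stand-in for a reserved, root-only location

save_seed() {
    # Run at shutdown, and again right after loading at boot, so a
    # crash never causes the same seed to be used twice.
    dd if=/dev/urandom of="$SEED" bs=32 count=1 2>/dev/null
    chmod 600 "$SEED"
}

load_seed() {
    # Run as early as possible at boot.  Note that writing to
    # /dev/urandom mixes the data into the pool but credits no
    # entropy; a real design would hand the seed to the kernel before
    # userspace starts, e.g. via the bootloader as discussed above.
    if [ -r "$SEED" ]; then
        cat "$SEED" > /dev/urandom
    fi
    save_seed    # immediately replace the seed we just consumed
}

load_seed
```

The immediate overwrite in load_seed is the crucial part: a seed that can be replayed is a seed an attacker can replay too.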


Quite frankly, I think this is actually a more useful and fruitful
path than either the whack-a-mole audit of all of the calls to
get_random_bytes() or adding a blocking variant to get_random_bytes()
(since in my opinion this becomes yet another version of whack-a-mole,
since 

Re: [PATCH v3 04/13] crypto/rng: ensure that the RNG is ready before using

2017-06-05 Thread Theodore Ts'o
On Tue, Jun 06, 2017 at 02:50:59AM +0200, Jason A. Donenfeld wrote:
> Otherwise, we might be seeding the RNG using bad randomness, which is
> dangerous.
> 
> Cc: Herbert Xu 
> Signed-off-by: Jason A. Donenfeld 
> ---
>  crypto/rng.c | 6 --
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/crypto/rng.c b/crypto/rng.c
> index f46dac5288b9..e042437e64b4 100644
> --- a/crypto/rng.c
> +++ b/crypto/rng.c
> @@ -48,12 +48,14 @@ int crypto_rng_reset(struct crypto_rng *tfm, const u8 
> *seed, unsigned int slen)
>   if (!buf)
>   return -ENOMEM;
>  
> - get_random_bytes(buf, slen);
> + err = get_random_bytes_wait(buf, slen);

Note that crypto_rng_reset() is called by big_key_init() in
security/keys/big_key.c as a late_initcall().  So if we are on a
system where the crng doesn't get initialized until during the system
boot scripts, and big_key is compiled directly into the kernel, the
boot could end up deadlocking.

There may be other instances of where crypto_rng_reset() is called by
an initcall, so big_key_init() may not be an exhaustive enumeration of
potential problems.  But this is an example of why the synchronous
API, although definitely much more convenient, can end up being a trap
for the unwary.

- Ted


Re: get_random_bytes returns bad randomness before seeding is complete

2017-06-02 Thread Theodore Ts'o
On Sat, Jun 03, 2017 at 01:58:25AM +0200, Jason A. Donenfeld wrote:
> Hi Ted,
> 
> Based on the tone of your last email, before I respond to your
> individual points

May I gently point out that *your* tone that started this whole thread
has been pretty terrible?  Quoting your first message:

   "Yes, yes, you have arguments for why you're keeping this
   pathological, but you're still wrong, and this api is still a bug."

This kind of "my shit doesn't stink, but yours does", is not
particularly useful if you are trying to have a constructive
discussion.  If you think the /dev/urandom question wasn't appropriate
to discuss in this thread, then why the *hell* did you bring it up
*first*?

The reason why I keep harping on this is because I'm concerned about
an absolutist attitude towards technical design, where the good is the
enemy of the perfect, and where bricking machines in the pursuit of
cryptographic perfection is considered laudable.  Yes, if you apply
thermite to the CPU and the hard drives, the system will be secure.
But it also won't be terribly useful.

And to me, the phrase "you're still wrong" doesn't seem to admit any
concessions towards acknowledging that there might be another side to
the security versus usability debate.  And I'd ***really*** rather not
waste my time trying to work with someone who doesn't care about
usability, because my focus is in practical engineering, which means
if no one uses the system, or can use the system, then I consider it a
failure, even if it is 100% secure.  And this is true no matter
whether we're talking about /dev/urandom or some enhancement to
get_random_bytes().

> Do you have any opinions on the issue of what the most flexible API
> would be? There are a lot of varieties of wait_event. The default one
> is uninterruptable, but as Stephan and Herbert saw when they were
> working on this was that it could be dangerous in certain
> circumstances not to allow interruption. So probably we'd want an
> interruptable variety. But then maybe we want to support the other
> wait_event_* modes too, like killable, timeout, hrtimeout, io, and so
> on. There's a huge list. These are all implemented as macros in
> wait.h, and I don't really want to have a different get_random_bytes_*
> function that corresponds to _each_ of these wait_event_* functions,
> since that'd be overwhelming.

We're going to have to look at a representative sample of the call
sites to figure this out.  The simple case is where the call site is
only run in response to a userspace system call.  There, blocking
makes perfect sense.  I'm just not sure there are many callers of
get_random_bytes() where this is the case.

There are definitely quite a few callsites of get_random_bytes() where
the caller is in the middle of an interrupt service routine, or is
holding spinlock.  There, we are *not* allowed to block.  So any kind
of blocking interface isn't going to be allowed; the async callback is
the only thing which is guaranteed to work, in fact.  Mercifully, many
of these are cases where prandom_u32() makes sense.

> So I was thinking maybe a simple flag and a timeout param could work:

When would a timeout be useful?  If you are using get_random_bytes()
for security reasons, does the security reason go away after 15
seconds?  Or even 30 seconds?

> > Or maybe we can then help figure out what percentage of the callsites
> > can be fixed with a synchronous interface, and fix some number of them
> > just to demonstrate that the synchronous interface does work well.
> 
> I was planning on doing this to at least a couple callsites in my
> upcoming patch series that adds the synchronous interface. It seems
> like the right way to see if the API is good or not.

This is absolutely the right approach.  See my above comments about
why there may not be that many places where a synchronous interface
will work out.  And I'm going to be rather surprised if there are
places where having a timeout parameter will be helpful.  But I could
be wrong; let's see if we can find cases where we (a) need secure
bytes, (b) are allowed to block, and (c) where a timeout would make
sense.

> Right. In my initial email I proposed a distinction:
> 
> - External module loading is usually in a different process context,
> so multiple modules can be attempted to be loaded at the same time. In
> this case, it is probably safe to block in request_module.
> - Built-in modules are loaded ± linearly, from init/main.c, so these
> really can't block each other. For this, I was thinking of introducing
> an rnginit section, to add to the list of things like lateinit. The
> rnginit drivers would be loaded _after_ the RNG has initialized, so
> that they don't get blocked.

... except that we already have an order in which drivers are added,
so moving a driver to the rnginit section might mean that some kernel
module that depends on that driver being loaded also needs to be moved
after rnginit.  We already have the following 

Re: get_random_bytes returns bad randomness before seeding is complete

2017-06-02 Thread Theodore Ts'o
On Fri, Jun 02, 2017 at 07:44:04PM +0200, Jason A. Donenfeld wrote:
> On Fri, Jun 2, 2017 at 7:26 PM, Theodore Ts'o <ty...@mit.edu> wrote:
> > I tried making /dev/urandom block.
> > So if you're a security focused individual who is kvetching
> > And if we're breaking
> 
> Yes yes, bla bla, predictable response. I don't care. Your API is
> still broken. Excuses excuses. Yes, somebody needs to do the work in
> the end, maybe that person can be me, maybe you, maybe somebody else.

It's not _my_ API, it's *our* API --- that is the Linux kernel
community's.  And part of the rules of this community is that we very
much don't break backwards compatibility, unless there is a really
good reason, where Linus gets to decide if it's a really good reason.
So if you care a lot about this issue, then you need to do the work to
make the change, and part of it is showing, to a high degree of
certainty, that it won't break backwards compatibility.

Because if you don't, and you flout community norms, and users get
broken, and they complain, and you tell them to suck it, then Linus
will pull out his patented clue stick, tell you that you have in
fact flouted community norms, correct you publicly, and then revert
your change.

If you are using the word *you*, and speaking as an outsider to the
community, then you can kvetch all you like.  But you're an outsider,
and we don't have to listen to you.  If you want to make a positive
difference here, and you're passionate about it --- this is what you
would need to do.  That being said, we're all volunteers, so if you don't
want to bother, that's fine.  But then don't be surprised if we don't
take your complaints seriously.

> While we're on the topic of that, you might consider adding a simple
> synchronous interface.

There's that word "you" again

> I realize that the get_blocking_random_bytes
> attempt was aborted as soon as it began, because of issues of
> cancelability, but you could just expose the usual array of wait,
> wait_interruptable, wait_killable, etc, or just make that wait object
> and condition non-static so others can use it as needed. Having to
> wrap the current asynchronous API like this kludge is kind of a
> downer:

This is open source --- want to send patches?  It sounds like it's a
workable, good idea.

> No, what it means is that the particularities of individual examples I
> picked at random don't matter. Are we really going to take the time to
> audit each and every callsite to see "do they need actually random
> data? can this be called pre-userspace?" I mentioned this in my
> initial email. As I said there, I think analyzing all the cases one by
> one is fragile, and more will pop up, and that's not really the right
> way to approach this. And furthermore, as alluded to above, even
> fixing clearly-broken places means using that hard-to-use asynchronous
> API, which adds even more potentially buggy TCB to these drivers and
> all the rest. Not a good strategy.
> 
> Seeing as you took the time to actually respond to the
> _particularities_ of each individual random example I picked could
> indicate that you've missed this point prior.

...or that I disagree with your prior point.  I think you're being
lazy, and trying to make it someone else's problem and standing on the
side lines and complaining, as opposed to trying to help solve the
problem.

No, of course we can't audit all of the code, but it's probably a good
idea to take a random sample, and to analyze them, so we can get a
sense of what the issues are.  And then maybe we can find a way to
quickly find a class of users that can be easily fixed by using
prandom_u32() (for example).

Or maybe we can then help figure out what percentage of the callsites
can be fixed with a synchronous interface, and fix some number of them
just to demonstrate that the synchronous interface does work well.

> Right, it was him and Stephan (CCd). They initially started by adding
> get_blocking_random_bytes, but then replaced this with the
> asynchronous one, because they realized it could block forever. As I
> said above, though, I still think a blocking API would be useful,
> perhaps just with more granularity for the way in which it blocks.

It depends on where it's being used.  If it's part of module load,
especially if it's one that's done automatically, having something
that blocks forever might not be all that useful.  Especially if it
blocks device drivers from being able to be initialized enough to
actually supply entropy to the whole system.

Or maybe (in the case of stack canaries), the answer is we should
start with crappy random numbers, but then once the random number
generator has been initialized, we can use the callback to get
cryptographically secure random number generators, and then we need to
figure out how to phase out use of the old crappy random 

Re: get_random_bytes returns bad randomness before seeding is complete

2017-06-02 Thread Theodore Ts'o
On Fri, Jun 02, 2017 at 04:59:56PM +0200, Jason A. Donenfeld wrote:
> /dev/urandom will return bad randomness before it's seeded, rather than
> blocking, and despite years and years of discussion and bike shedding,
> this behavior hasn't changed, and the vast majority of cryptographers
> and security experts remain displeased. It _should_ block until it's
> been seeded for the first time, just like the new getrandom() syscall,
> and this important bug fix (which is not a real api break) should be
> backported to all stable kernels. (Userspace doesn't even have a
> reliable way of determining whether or not it's been seeded!) Yes,
> yes, you have arguments for why you're keeping this pathological, but
> you're still wrong, and this api is still a bug. Forcing userspace
> architectures to work around your bug, as your arguments usually go,
> is disquieting.

I tried making /dev/urandom block.  The zero day kernel testing within
a few hours told us that Ubuntu LTS at the time and OpenWrt would fail
to boot.  And when Python made a change to call getrandom(2) with the
default blocking semantic instead of using /dev/urandom, some
distributions using systemd stopped booting.

So if you're a security focused individual who is kvetching that we're
asking to force userspace to change to fix "a bug", you need to
understand that making the change you're suggesting will *also*
require making changes to userspace.  And if you want to go try to
convince Linus Torvalds that it's OK to make a backwards-incompatible
changes that break Ubuntu LTS and OpenWRT by causing them to fail to
boot --- be my guest.

And if we're breaking distributions by preventing them from booting
when their use of /dev/urandom is not security critical, I suspect
Linus is not going to be impressed that you are breaking those users for no benefit.  (For
example, Python's use of getrandom(2) was to prevent denial of service
attacks when Python is used as a fast-CGI web server script, and it
was utterly pointless from a security perspective to block when the
python script was creating on-demand systemd unit files, where the DOS
threat was completely irrelevant.)

> As the printk implies, it's possible for get_random_bytes() to be
> called before it's seeded. Why was that helpful printk put behind an
> ifdef? I suspect because people would see the lines in their dmesg and
> write emails like this one to the list.

The #ifdef and the printk was there from the very beginning.

commit 392a546dc8368d1745f9891ef3f8f7c380de8650
Author: Theodore Ts'o <ty...@mit.edu>
Date:   Sun Nov 3 18:24:08 2013 -0500

random: add debugging code to detect early use of get_random_bytes()

Signed-off-by: "Theodore Ts'o" <ty...@mit.edu>

It was four years ago, so I don't remember the exact circumstances,
but if I recall correctly, the issue was not wanting to spam the dmesg.  Also, at
the time, we hadn't yet created the asynchronous interfaces that allow
kernel code to do the right thing, even if it was massively
inconvenient.

At this point, I think we should be completely open to making it be a
config option, and if it looks like for common configs we're not
spamming dmesg, we could even make it the default.  We just also
need to give driver writers some easy-to-understand recipes to fix
their drivers to do the right thing.  If we do that first, it's much
more likely they will actually fix their kernel code, instead of just
turning the warning off.

> drivers/net/ieee802154/at86rf230.c:
> static int at86rf230_probe(struct spi_device *spi)
> {
> ...
> get_random_bytes(csma_seed, ARRAY_SIZE(csma_seed));
> 
> No clue what this driver is for or if it actually needs good
> randomness, but using get_random_bytes in a hardware probe function
> can happen prior to userspace's ability to seed.

What, you mean as a computer scientist you didn't immediately
understand that csma refers not to CSMA/CD but to CSMA/CA ---
"Carrier-sense multiple access with collision avoidance"?  For shame!  :-)

This is basically the exponential backoff which is used in ethernet
networks, and the *real* answer is that they should be using
prandom_u32().
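The pattern Ted suggests can be sketched in a few lines of userspace C.  The xorshift generator below is a toy stand-in for the kernel's prandom_u32() (not its actual implementation), and csma_backoff_slot() is a hypothetical helper: CSMA/CA backoff only needs statistically uniform jitter, not cryptographic unpredictability, which is the point.

```c
#include <stdint.h>
#include <assert.h>

/* Toy stand-in for the kernel's prandom_u32(): Marsaglia xorshift32.
 * Good enough for backoff jitter; NOT suitable for anything secret. */
static uint32_t prandom_state = 0x12345678;

static uint32_t prandom_u32_sketch(void)
{
	uint32_t x = prandom_state;

	x ^= x << 13;
	x ^= x >> 17;
	x ^= x << 5;
	return prandom_state = x;
}

/* Pick a random slot in [0, 2^be - 1] for the be-th backoff attempt,
 * as in binary exponential backoff. */
static uint32_t csma_backoff_slot(unsigned int be)
{
	return prandom_u32_sketch() & ((1u << be) - 1);
}
```

The driver's csma_seed could be filled the same way, since nothing security-relevant depends on it.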

> arch/s390/crypto/prng.c seems to call get_random_bytes on module load,
> which can happen pre-userspace, for seeding some kind of RNG of its
> own.

It appears to be accessing a hardware cryptographic processor
function, and seems to do its own entropy gathering relying on stack
garbage as well as using get_random_bytes().  By default it is defined
as a module, so I suspect in most situations it's called
post-userspace, but agreed that it is not guaranteed to be the case.
I think in practice most IBM customers will be safe because they tend
to use distro kernels that will almost certainly be building it as a
module, but your point is well taken.

> And so on and so on and so on. If you grep the source as I did, you'll
> find ther

Re: [PATCH] random: Don't overwrite CRNG state in crng_initialize()

2017-02-09 Thread Theodore Ts'o
On Thu, Feb 09, 2017 at 01:13:22AM -0700, Alden Tondettar wrote:
> And using:
> 
> $ qemu-system-x86_64 --version
> QEMU emulator version 2.1.2 (Debian 1:2.1+dfsg-12+deb8u6), Copyright (c) 
> 2003-2008 Fabrice Bellard
> $ qemu-system-x86_64 -nographic -enable-kvm -m 1024M -kernel bzImage -append 
> "root=/dev/sda1 loglevel=3 console=ttyS0" hd3

Hmm, I'm not seeing this at *all*.  I assume you must be using Debian
stable?  I'm using Debian Testing, which has a much newer version of qemu:

% /usr/bin/kvm --version
QEMU emulator version 2.8.0(Debian 1:2.8+dfsg-2)
Copyright (c) 2003-2016 Fabrice Bellard and the QEMU Project developers

And I'm using:

/usr/bin/kvm -drive 
file=/usr/projects/xfstests-bld/build-32/kvm-xfstests/test-appliance/root_fs.img,if=virtio,snapshot=on
 -vga none -nographic -m 1024 --kernel /build/random/arch/x86/boot/bzImage 
--append "root=/dev/vda console=ttyS0,115200"

See below for an excerpt of the log, but basically we don't get the
first call to crng_fast_load until a good 2 seconds into the boot,
when we're doing device probing.

The only thing I can think of is that your version of qemu is spewing a
*huge* number of interrupts to the guest kernel, as soon as interrupts
are enabled, and *before* the kernel even starts trying to talk to the
devices.

That's bad, because it's going to be destroying CPU efficiency of the
VM, and even if we add a safety mechanism to prohibit calling
crng_fast_load until after crng_initialize() has been called, it's
likely that you're not getting much entropy from the interrupts,
because qemu must be spewing interrupts as fast as possible, and there
may not be a lot of unpredictability in that circumstance.  So we can
put in some changes to try to mitigate this, but even with your patch,
there might not be a lot of entropy because qemu is clearly spewing
interrupts at line rate.

Hence, I'd call this a qemu BUG, and I'd strongly suggest you look at
fixing it by upgrading qemu.

- Ted


[0.029226] mce: CPU supports 10 MCE banks
[0.030077] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[0.09] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
[0.041436] Freeing SMP alternatives memory: 20K
[0.043621] ftrace: allocating 34091 entries in 67 pages
[0.053659] smpboot: Max logical packages: 1
[0.056696] Enabling APIC mode:  Flat.  Using 1 I/O APICs
[0.061854] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[0.06] smpboot: CPU0: Intel QEMU Virtual CPU version 2.5+ (family: 0x6, 
model: 0x6, stepping: 0x3)
[0.063588] Performance Events: PMU not available due to virtualization, 
using software events only.
[0.067555] crng_initialize called
[0.070107] smp: Bringing up secondary CPUs ...
[0.072108] smp: Brought up 1 node, 1 CPU
[0.073351] smpboot: Total of 1 processors activated (4801.01 BogoMIPS)
[0.077456] devtmpfs: initialized
[0.079945] clocksource: jiffies: mask: 0x max_cycles: 0x, 
max_idle_ns: 6370867519511994 ns

[2.063906] crng: dumping entropy
[2.065382] crng_fast_load called
[2.066747] crng_fast_load: 16/64
[2.066747] crng_fast_load: 16
[2.073526] tsc: Refined TSC clocksource calibration: 2399.998 MHz
[2.076219] clocksource: tsc: mask: 0x max_cycles: 
0x229835b7123, max_idle_ns: 440795242976 ns
[2.134486] ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
[2.144405] ata2.00: configured for MWDMA2
[2.153349] scsi 1:0:0:0: CD-ROMQEMU QEMU DVD-ROM 2.5+ 
PQ: 0 ANSI: 5
[2.187210] sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
[2.194289] cdrom: Uniform CD-ROM driver Revision: 3.20
[2.205026] sr 1:0:0:0: Attached scsi generic sg0 type 5
[2.277461] crng: dumping entropy
[2.279017] crng_fast_load called
[2.279017] crng_fast_load: 32/64
[2.279017] crng_fast_load: 16
[2.720393] crng: dumping entropy
[2.723448] crng_fast_load called
[2.723448] crng_fast_load: 48/64
[2.723448] crng_fast_load: 16
[2.744182] input: ImExPS/2 Generic Explorer Mouse as 
/devices/platform/i8042/serio1/input/input3
[2.760954] EXT4-fs (vda): couldn't mount as ext3 due to feature 
incompatibilities
[2.774648] EXT4-fs (vda): mounted filesystem with ordered data mode. Opts: 
(null)
[2.779939] VFS: Mounted root (ext4 filesystem) readonly on device 254:0.
[2.785356] devtmpfs: mounted
[2.788127] Freeing unused kernel memory: 2404K
[2.789833] Write protecting the kernel text: 7512k
[2.791856] Write protecting the kernel read-only data: 3568k
[2.793918] NX-protecting the kernel data: 8872k
[2.822964] x86/mm: Checked W+X mappings: FAILED, 96 W+X pages found.
[2.845398] crng: dumping entropy
[2.846536] crng_fast_load called
[2.847460] crng_fast_load: 64/64
[2.848137] random: fast init done
[2.848137] crng_fast_load: 16
[2.914998] systemd[1]: systemd 215 running in system mode. 

Re: [PATCH] random: Don't overwrite CRNG state in crng_initialize()

2017-02-09 Thread Theodore Ts'o
OK, I figured out what is going on with your test results.

If you use qemu-system-x86_64 **without** --enable-kvm, then on both
the Debian Jessie version of qemu as well as the Debian Stretch
version of qemu, crng_fast_load() will be called _twice_ before
crng_initialize has a chance to be called.  At least for my kernel
configuration and my CPU.

If you're using a different kernel configuration and a slower CPU,
such that when qemu is doing instruction by instruction emulation,
which slows down the boot sequence **massively**, then that probably
explains your results.

I'm not sure if there are any real life use cases where someone would
be insane enough to use virtualization without enabling KVM, but at
least we know what is happening now.

This makes me feel better, because I've looked at kernel boot messages
from a variety of systems, from big data center servers to laptops to
mobile handsets, and I had **never** seen the sequence of crng
initialization messages that you had been reporting.

- Ted


Re: [PATCH] random: Don't overwrite CRNG state in crng_initialize()

2017-02-08 Thread Theodore Ts'o
On Wed, Feb 08, 2017 at 08:31:26PM -0700, Alden Tondettar wrote:
> The new non-blocking system introduced in commit e192be9d9a30 ("random:
> replace non-blocking pool with a Chacha20-based CRNG") can under
> some circumstances report itself initialized while it still contains
> dangerously little entropy, as follows:
> 
> Approximately every 64th call to add_interrupt_randomness(), the "fast"
> pool of interrupt-timing-based entropy is fed into one of two places. At
> calls numbered <= 256, the fast pool is XORed into the primary CRNG state.
> At call 256, the CRNG is deemed initialized, getrandom(2) is unblocked,
> and reading from /dev/urandom no longer gives warnings.
> 
> At calls > 256, the fast pool is fed into the input pool, leaving the CRNG
> untouched.
> 
> The problem arises between call number 256 and 320. If crng_initialize()
> is called at this time, it will overwrite the _entire_ CRNG state with
> 48 bytes generated from the input pool.
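The schedule described in the quoted text can be sanity-checked with a short simulation.  The constants are taken from the description: the 16-byte fast pool is spilled roughly every 64 interrupts, and crng_fast_load() wants 64 bytes before declaring fast init done.

```c
/* Simulate the quoted schedule: one 16-byte fast-pool spill per
 * ~64 interrupts, with crng_fast_load() accumulating 64 bytes
 * before the CRNG is deemed initialized. */
static int interrupts_until_fast_init(void)
{
	int irq = 0, crng_bytes = 0;

	while (crng_bytes < 64) {
		irq += 64;	/* one fast-pool spill per 64 interrupts */
		crng_bytes += 16;	/* each spill contributes 16 bytes */
	}
	return irq;
}
```

This predicts "fast init done" at roughly interrupt 256, which matches the 16/64, 32/64, 48/64, 64/64 crng_fast_load progression visible in the boot logs quoted in this thread.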

So in practice this isn't a problem, because crng_initialize() is
called in early init.  For reference, the ordering of init calls is:

"early",    <--- crng_initialize() is here
"core",     <--- ftrace is initialized here
"postcore",
"arch",
"subsys",   <--- acpi_init() is here
"fs",
"device",   <--- device probing is here
"late",

So in practice, call 256 typically happens **well** after
crng_initialize.  You can see where it is in the boot messages, which
is after 2.5 seconds into the boot:

[2.570733] rtc_cmos 00:02: alarms up to one month, y3k, 114 bytes nvram, 
hpet irqs
[2.570863] usbcore: registered new interface driver i2c-tiny-usb
[2.571035] device-mapper: uevent: version 1.0.3
[2.571215] random: fast init done  <-
[2.571316] device-mapper: ioctl: 4.35.0-ioctl (2016-06-23) initialised: 
dm-de...@redhat.com
[2.571678] device-mapper: multipath round-robin: version 1.1.0 loaded
[2.571728] intel_pstate: Intel P-state driver initializing
[2.572331] input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input3
[2.572462] intel_pstate: HWP enabled
[2.572464] sdhci: Secure Digital Host Controller Interface driver

When is crng_initialize() called?  Sometime *before* 0.05 seconds into
the boot on my laptop:

[0.054529] ftrace: allocating 29140 entries in 114 pages

> In short, the situation is:
> 
> A) No usable hardware RNG or arch_get_random() (or we don't trust it...)
> B) add_interrupt_randomness() called 256-320 times but other
>add_*_randomness() functions aren't adding much entropy.
> C) then crng_initialize() is called
> D) not enough calls to add_*_randomness() to push the entropy
>estimate over 128 (yet)
> E) getrandom(2) or /dev/urandom used for something important
> 
> Based on a few experiments with VMs, A) through D) can occur easily in
> practice. And with no HDD we have a window of about a minute or two for
> E) to happen before add_interrupt_randomness() finally pushes the
> estimate over 128 on its own.

How did you determine when crng_initialize() was being called?  On a
VM generally there are fewer interrupts than on real hardware.  On
KVM, I see the "random: fast init done" message being printed 3.6
seconds into the boot.

On Google Compute Engine, the fast_init message happens 52 seconds into the
boot.

So what VM were you using?  I'm trying to figure out whether this is
a hypothetical or real problem, and on what systems.

 - Ted


Re: [PATCH 7/8] random: remove noop function call to xfer_secondary_pool

2017-01-18 Thread Theodore Ts'o
On Tue, Dec 27, 2016 at 11:41:46PM +0100, Stephan Müller wrote:
> Since the introduction of the ChaCha20 DRNG, extract_entropy is only
> invoked with the input_pool. For this entropy pool, xfer_secondary_pool
> is a no-op and can therefore be safely removed.
> 
> Signed-off-by: Stephan Mueller 

Instead of doing some minor deletions of single lines, what I want to
do is to look at a more comprehensive refactoring of the code.  The
fact that we have extract_entropy() only being used for the input
pool, and extract_entropy_user() only being used for the non-blocking
pool, is not obvious from the function name and the arguments that
these functions take.

Either the functions should be kept general (so someone else using
them in the future won't get confused about how they work), or they
should be made more specific.  But doing light modifications like
this has the danger of causing confusion and bugs in the future.

 - Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 4/8] random: remove unused branch in hot code path

2017-01-17 Thread Theodore Ts'o
On Tue, Dec 27, 2016 at 11:40:23PM +0100, Stephan Müller wrote:
> The variable ip is defined to be a __u64 which is always 8 bytes on any
> architecture. Thus, the check for sizeof(ip) > 4 will always be true.
> 
> As the check happens in a hot code path, remove the branch.

The fact that it's a hot code path means the compiler will optimize it
out, so the fact that it's on the hot code path is irrelevant.  The
main issue is that on platforms with a 32-bit instruction pointer, ip >> 32 will
always be zero.  It might be that we can just do this via

#if BITS_PER_LONG == 32
  ...
#else
  ...  
#endif

I'm not sure that works for all platforms, though.  More research is
needed...
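The constant-folding point can be seen in a minimal userspace sketch.  mix_ip() is a made-up helper, not the kernel function: since sizeof(ip) is a compile-time constant, the dead branch is removed by the compiler and costs nothing on the hot path, regardless of how hot that path is.

```c
#include <stdint.h>

/* Split a 64-bit instruction pointer into two 32-bit words.  The
 * sizeof test is a compile-time constant, so the compiler folds the
 * dead branch away; on a real 32-bit platform the interesting issue
 * is that the upper half of ip is simply zero. */
static void mix_ip(uint64_t ip, uint32_t *hi, uint32_t *lo)
{
	if (sizeof(ip) > 4)		/* always true for uint64_t */
		*hi = (uint32_t)(ip >> 32);
	else
		*hi = 0;
	*lo = (uint32_t)ip;
}
```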

- Ted


Re: [PATCH 3/8] random: trigger random_ready callback upon crng_init == 1

2017-01-17 Thread Theodore Ts'o
On Tue, Dec 27, 2016 at 11:39:57PM +0100, Stephan Müller wrote:
> The random_ready callback mechanism is intended to replicate the
> getrandom system call behavior to in-kernel users. As the getrandom
> system call unblocks with crng_init == 1, trigger the random_ready
> wakeup call at the same time.

It was deliberate that random_ready would only get triggered with
crng_init==2.

In general I'm assuming kernel callers really want real randomness (as
opposed to using prandom), whereas there's a lot of b.s. userspace
users of kernel randomness (for things that really don't require
cryptographic randomness, e.g., for salting Python dictionaries,
systemd/udev using /dev/urandom for non-cryptographic, non-security
applications etc.)

- Ted


Re: [kernel-hardening] Re: [PATCH v7 3/6] random: use SipHash in place of MD5

2016-12-22 Thread Theodore Ts'o
On Thu, Dec 22, 2016 at 07:08:37PM +0100, Hannes Frederic Sowa wrote:
> I wasn't concerned about performance but more about DoS resilience. I
> wonder how safe half md4 actually is in terms of allowing users to
> generate long hash chains in the filesystem (in terms of length
> extension attacks against half_md4).
> 
> In ext4, is it actually possible that a "disrupter" learns about the
> hashing secret in the way how the inodes are returned during getdents?

They'd have to be a local user, who can execute telldir(3) --- in
which case there are plenty of other denial of service attacks one
could carry out that would be far more devastating.

It might also be an issue if the file system is exposed via NFS, but
again, there are so many other ways an attacker could DoS a NFS server
that I don't think of it as a much of a concern.

Keep in mind that worst someone can do is cause directory inserts to
fail with an ENOSPC, and there are plenty of other ways of doing that
--- such as consuming all of the blocks and inodes in the file system,
for example.

So it's a threat, but not a high priority one as far as I'm concerned.
And if this was a problem in actual practice, users could switch to
the TEA based hash, which should be far harder to attack, and
available today.

- Ted


Re: [kernel-hardening] Re: [PATCH v7 3/6] random: use SipHash in place of MD5

2016-12-22 Thread Theodore Ts'o
On Thu, Dec 22, 2016 at 05:16:47PM +0100, Jason A. Donenfeld wrote:
> Could you offer a bit of advice on how to manage dependencies between
> patchsets during merge windows? I'm a bit new to the process.
> 
> Specifically, we how have 4 parts:
> 1. add siphash, and use it for some networking code. to: david miller's 
> net-next

I'd do this first, as one set.  Adding a new file to crypto is
unlikely to cause merge conflicts.

> 2. convert char/random to use siphash. to: ted ts'o's random-next

I'm confused, I thought you had agreed to the batched chacha20
approach?

> 3. move lib/md5.c to static function in crypto/md5.c, remove entry
> inside of linux/cryptohash.h. to: ??'s ??-next

This is cleanup, so it doesn't matter that much when it happens.  md5
changes to crypto is also unlikely to cause conflicts, so we could do
this at the same time as (2), if Herbert (the crypto maintainer) agrees.

> 4. move lib/halfmd4.c to static function in fs/ext/hash.c, remove
> entry inside of linux/cryptohash.c. to: td ts'o's ext-next

This is definitely separate.

One more thing.  Can you add some test cases to lib/siphash.h?
Triggered off of a CONFIG_SIPHASH_REGRESSION_TEST config flag, with
some test inputs and known outputs?  I'm going to need to add a
version of siphash to e2fsprogs, and I want to make sure the userspace
version is implementing the same algorithm as the kernel siphash.
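The userspace cross-check Ted asks for only needs the reference algorithm plus published known-answer vectors.  Below is a minimal SipHash-2-4 sketch (a self-contained reference-style implementation, not the kernel's eventual lib/siphash.c) that can be checked against the published vector for empty input under the standard test key 000102...0f.

```c
#include <stdint.h>
#include <stddef.h>

#define ROTL(x, b) (uint64_t)(((x) << (b)) | ((x) >> (64 - (b))))

#define SIPROUND do { \
	v0 += v1; v1 = ROTL(v1, 13); v1 ^= v0; v0 = ROTL(v0, 32); \
	v2 += v3; v3 = ROTL(v3, 16); v3 ^= v2; \
	v0 += v3; v3 = ROTL(v3, 21); v3 ^= v0; \
	v2 += v1; v1 = ROTL(v1, 17); v1 ^= v2; v2 = ROTL(v2, 32); \
} while (0)

/* SipHash-2-4 over len bytes of in, keyed by (k0, k1). */
static uint64_t siphash24(const uint8_t *in, size_t len,
			  uint64_t k0, uint64_t k1)
{
	uint64_t v0 = 0x736f6d6570736575ULL ^ k0;
	uint64_t v1 = 0x646f72616e646f6dULL ^ k1;
	uint64_t v2 = 0x6c7967656e657261ULL ^ k0;
	uint64_t v3 = 0x7465646279746573ULL ^ k1;
	uint64_t m;
	size_t i, left = len & 7;
	const uint8_t *end = in + len - left;

	for (; in != end; in += 8) {
		m = 0;
		for (i = 0; i < 8; i++)	/* little-endian load */
			m |= (uint64_t)in[i] << (8 * i);
		v3 ^= m;
		SIPROUND; SIPROUND;
		v0 ^= m;
	}
	/* final block: length byte plus trailing input bytes */
	m = (uint64_t)len << 56;
	for (i = 0; i < left; i++)
		m |= (uint64_t)in[i] << (8 * i);
	v3 ^= m;
	SIPROUND; SIPROUND;
	v0 ^= m;
	v2 ^= 0xff;
	SIPROUND; SIPROUND; SIPROUND; SIPROUND;
	return v0 ^ v1 ^ v2 ^ v3;
}
```

A regression test in either e2fsprogs or the kernel would just loop over the reference vector table and compare outputs.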

  - Ted


Re: [kernel-hardening] Re: [PATCH v7 3/6] random: use SipHash in place of MD5

2016-12-22 Thread Theodore Ts'o
On Thu, Dec 22, 2016 at 07:03:29AM +0100, Jason A. Donenfeld wrote:
> I find this compelling. We'll have one csprng for both
> get_random_int/long and for /dev/urandom, and we'll be able to update
> that silly warning on the comment of get_random_int/long to read
> "gives output of either rdrand quality or of /dev/urandom quality",
> which makes it more useful for other things. It introduces less error
> prone code, and it lets the RNG analysis be spent on just one RNG, not
> two.
> 
> So, with your blessing, I'm going to move ahead with implementing a
> pretty version of this for v8.

Can we do this as a separate series, please?  At this point, it's a
completely separate change from a logical perspective, and we can take
in the change through the random.git tree.

Changes that touch files that are normally changed in several
different git trees leads to the potential for merge conflicts during
the linux-next integration and merge window processes.  Which is why
it's generally best to try to isolate changes as much as possible.

Cheers,

- Ted


Re: [kernel-hardening] Re: [PATCH v7 3/6] random: use SipHash in place of MD5

2016-12-22 Thread Theodore Ts'o
On Thu, Dec 22, 2016 at 02:10:33PM +0100, Jason A. Donenfeld wrote:
> On Thu, Dec 22, 2016 at 1:47 PM, Hannes Frederic Sowa
>  wrote:
> > following up on what appears to be a random subject: ;)
> >
> > IIRC, ext4 code by default still uses half_md4 for hashing of filenames
> > in the htree. siphash seems to fit this use case pretty good.
> 
> I saw this too. I'll try to address it in v8 of this series.

This is a separate issue, and this series is getting a bit too
complex.  So I'd suggest pushing this off to a separate change.

Changing the htree hash algorithm is an on-disk format change, and so
we couldn't roll it out until e2fsprogs gets updated and rolled out
pretty broadly.  In fact George sent me patches to add siphash as a
hash algorithm for htree a while back (for both the kernel and
e2fsprogs), but I never got around to testing and applying them,
mainly because while it's technically faster, I had other higher
priority issues to work on --- and see previous comments regarding
pixel peeping.  Improving the hash algorithm by tens or even hundreds
of nanoseconds isn't really going to matter since we only do a htree
lookup on a file creation or cold cache lookup, and the SSD or HDD I/O
times will dominate.  And from the power perspective, saving
microwatts of CPU power isn't going to matter if you're going to be
spinning up the storage device.

- Ted


Re: [kernel-hardening] Re: [PATCH v7 3/6] random: use SipHash in place of MD5

2016-12-21 Thread Theodore Ts'o
On Thu, Dec 22, 2016 at 03:49:39AM +0100, Jason A. Donenfeld wrote:
> 
> Funny -- while you guys were sending this back & forth, I was writing
> my reply to Andy which essentially arrives at the same conclusion.
> Given that we're all arriving to the same thing, and that Ted shot in
> this direction long before we all did, I'm leaning toward abandoning
> SipHash for the de-MD5-ification of get_random_int/long, and working
> on polishing Ted's idea into something shiny for this patchset.

here are my numbers comparing siphash (using the first three patches
of the v7 siphash patches) with my batched chacha20 implementation.
The results were taken by running get_random_* NUM_CYCLES times, and
then dividing by NUM_CYCLES to get the average number of cycles per
call.  I compiled 32-bit and 64-bit kernels, and ran the results
using kvm:

                     siphash                        batched chacha20
         get_random_int  get_random_long     get_random_int  get_random_long

32-bit        270             278                 114             146
64-bit         75              75                 106             186

> I did have two objections to this. The first was that my SipHash
> construction is faster.

Well, it's faster on everything except 32-bit x86.  :-P

> The second, and the more
> important one, was that batching entropy up like this means that 32
> calls will be really fast, and then the 33rd will be slow, since it
> has to do a whole ChaCha round, because get_random_bytes must be
> called to refill the batch.

... and this will take 2121 cycles on 64-bit x86, and 2315 cycles on a
32-bit x86.  Which on a 2.3 GHz processor, is just under a
microsecond.  As far as being inconsistent on process startup, I very
much doubt a microsecond is really going to be visible to the user.  :-)
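The batching idea under discussion can be sketched as follows.  fill_batch() is a placeholder for one bulk extract/ChaCha20 call, and the batch size here is arbitrary; this is a sketch of the amortization pattern, not the eventual kernel code.

```c
#include <stdint.h>
#include <stddef.h>

/* Draw 64-bit words from a buffer filled by one bulk CSPRNG call,
 * refilling only when the batch runs dry.  Most calls are a simple
 * array read; every BATCH_WORDS-th call pays for a full refill. */
#define BATCH_WORDS 16

static uint64_t batch[BATCH_WORDS];
static size_t batch_pos = BATCH_WORDS;	/* empty: refill on first use */
static int refills;			/* instrumentation for the sketch */

static void fill_batch(void)
{
	static uint64_t ctr;
	size_t i;

	/* Toy deterministic stream standing in for ChaCha20 output. */
	for (i = 0; i < BATCH_WORDS; i++)
		batch[i] = ++ctr * 0x9e3779b97f4a7c15ULL;
	refills++;
}

static uint64_t get_random_u64_batched(void)
{
	if (batch_pos == BATCH_WORDS) {
		fill_batch();
		batch_pos = 0;
	}
	return batch[batch_pos++];
}
```

This makes the cost profile Jason describes concrete: BATCH_WORDS fast calls, then one slow call that amortizes the block generation.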

The bottom line is that I think we're really "pixel peeping" at this
point --- which is what obsessed digital photographers will do when
debating the quality of a Canon vs Nikon DSLR by blowing up a photo by
a thousand times, and then trying to claim that this is visible to the
human eye.  Or people who obsessing over the frequency response curves
of TH-X00 headphones with Mahogony vs Purpleheart wood, when it's
likely that in a blind head-to-head comparison, most people wouldn't
be able to tell the difference

I think the main argument for using the batched getrandom approach is
that it is, I would argue, simpler than introducing siphash into the
picture.  On 64-bit platforms it is faster and more consistent, so
it's basically that versus the complexity of having to add siphash to
the things that people have to analyze when considering random number
security on Linux.  But it's a close call either way, I think.

  - Ted

P.S.  My benchmarking code

diff --git a/drivers/char/random.c b/drivers/char/random.c
index a51f0ff43f00..41860864b775 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1682,6 +1682,55 @@ static int rand_initialize(void)
 }
 early_initcall(rand_initialize);
 
+static unsigned int get_random_int_new(void);
+static unsigned long get_random_long_new(void);
+
+#define NUM_CYCLES 1
+#define AVG(finish, start) ((unsigned int)(finish - start + NUM_CYCLES/2) / NUM_CYCLES)
+
+static int rand_benchmark(void)
+{
+   cycles_t start,finish;
+   int i, out;
+
+   pr_crit("random benchmark!!\n");
+   start = get_cycles();
+   for (i = 0; i < NUM_CYCLES; i++) {
+   get_random_int();}
+   finish = get_cycles();
+   pr_err("get_random_int # cycles: %u\n", AVG(finish, start));
+
+   start = get_cycles();
+   for (i = 0; i < NUM_CYCLES; i++) {
+   get_random_int_new();
+   }
+   finish = get_cycles();
+   pr_err("get_random_int_new (batched chacha20) # cycles: %u\n", AVG(finish, start));
+
+   start = get_cycles();
+   for (i = 0; i < NUM_CYCLES; i++) {
+   get_random_long();
+   }
+   finish = get_cycles();
+   pr_err("get_random_long # cycles: %u\n", AVG(finish, start));
+
+   start = get_cycles();
+   for (i = 0; i < NUM_CYCLES; i++) {
+   get_random_long_new();
+   }
+   finish = get_cycles();
+   pr_err("get_random_long_new (batched chacha20) # cycles: %u\n", AVG(finish, start));
+
+   start = get_cycles();
+   for (i = 0; i < NUM_CYCLES; i++) {
+   get_random_bytes(&out, sizeof(out));
+   }
+   finish = get_cycles();
+   pr_err("get_random_bytes # cycles: %u\n", AVG(finish, start));
+   return 0;
+}
+device_initcall(rand_benchmark);
+
 #ifdef CONFIG_BLOCK
 void rand_initialize_disk(struct gendisk *disk)
 {
@@ -2064,8 +2113,10 @@ unsigned int get_random_int(void)
unsigned int ret;
u64 *chaining;
 
+#if 0  // force slow path
	if (arch_get_random_int(&ret))
return ret;
+#endif
 
	chaining = &get_cpu_var(get_random_int_chaining);
ret = *chaining = 

Re: HalfSipHash Acceptable Usage

2016-12-21 Thread Theodore Ts'o
On Wed, Dec 21, 2016 at 01:37:51PM -0500, George Spelvin wrote:
> SipHash annihilates the competition on 64-bit superscalar hardware.
> SipHash dominates the field on 64-bit in-order hardware.
> SipHash wins easily on 32-bit hardware *with enough registers*.
> On register-starved 32-bit machines, it really struggles.

And "with enough registers" includes ARM and MIPS, right?  So the only
real problem is 32-bit x86, and you're right, at that point, only
people who might care are people who are using a space-radiation
hardened 386 --- and they're not likely to be doing high throughput
TCP connections.  :-)

- Ted



Re: HalfSipHash Acceptable Usage

2016-12-20 Thread Theodore Ts'o
On Mon, Dec 19, 2016 at 06:32:44PM +0100, Jason A. Donenfeld wrote:
> 1) Anything that requires actual long-term security will use
> SipHash2-4, with the 64-bit output and the 128-bit key. This includes
> things like TCP sequence numbers. This seems pretty uncontroversial to
> me. Seem okay to you?

Um, why do TCP sequence numbers need long-term security?  So long as
you rekey every 5 minutes or so, TCP sequence numbers don't need any
more security than that, since even if you break the key used to
generate initial sequence numbers even a minute or two later, any
pending TCP connections will have timed out long before.

See the security analysis done in RFC 6528[1], where among other
things, it points out why MD5 is acceptable with periodic rekeying,
although there is the concern that this could break certain heuristics
used when establishing new connections during the TIME-WAIT state.

[1] https://tools.ietf.org/html/rfc6528
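The RFC 6528 scheme Ted cites is ISN = M + F(connection id, secret), where M is a clock ticking every 4 microseconds and F is a keyed PRF (MD5 in the RFC; SipHash in the patches under discussion).  The sketch below uses a toy splitmix-style mixer standing in for the PRF; the function names and the 64-bit connection id are illustrative, not from the RFC.

```c
#include <stdint.h>

/* Toy stand-in for the keyed PRF F() of RFC 6528 (NOT a secure PRF). */
static uint64_t prf_stand_in(uint64_t conn_id, uint64_t secret)
{
	uint64_t z = conn_id ^ secret;

	z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
	z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
	return z ^ (z >> 31);
}

/* ISN = M + F(conn, secret): M advances every 4 us, so even if the
 * secret is broken later, ISNs for a rekeyed secret are unaffected
 * and old connections have long since timed out. */
static uint32_t tcp_isn(uint64_t conn_id, uint64_t secret, uint64_t now_us)
{
	uint32_t m = (uint32_t)(now_us / 4);	/* 4 microsecond clock */

	return m + (uint32_t)prf_stand_in(conn_id, secret);
}
```

Periodic rekeying then just means swapping `secret` every few minutes, which is exactly why long-term PRF strength is not required here.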

- Ted


Re: [kernel-hardening] Re: [PATCH v5 1/4] siphash: add cryptographically secure PRF

2016-12-17 Thread Theodore Ts'o
On Fri, Dec 16, 2016 at 09:15:03PM -0500, George Spelvin wrote:
> >> - Ted, Andy Lutorminski and I will try to figure out a construction of
> >>   get_random_long() that we all like.

We don't have to find the most optimal solution right away; we can
approach this incrementally, after all.

So long as we replace get_random_{long,int}() with something which is
(a) strictly better in terms of security given today's use of MD5, and
(b) which is strictly *faster* than the current construction on 32-bit
and 64-bit systems, we can do that, and can try to make it be faster
while maintaining some minimum level of security which is sufficient
for all current users of get_random_{long,int}() and which can be
clearly articulated for future users of get_random_{long,int}().

The main worry at this point I have is benchmarking siphash on a
32-bit system.  It may be that simply batching the chacha20 output so
that we're using the urandom construction more efficiently is the
better way to go, since that *does* meet the criteron of strictly more
secure and strictly faster than the current MD5 solution.  I'm open to
using siphash, but I want to see the 32-bit numbers first.

As far as half-siphash is concerned, it occurs to me that the main
problem will be those users who need to guarantee that output can't be
guessed over a long period of time.  For example, if you have a
long-running process, then the output needs to remain unguessable over
potentially months or years, or else you might be weakening the ASLR
protections.  If on the other hand, the hash table or the process will
be going away in a matter of seconds or minutes, the requirements with
respect to cryptographic strength go down significantly.

Now, maybe this doesn't matter that much if we can guarantee (or make
assumptions) that the attacker doesn't have unlimited access to the
output stream of get_random_{long,int}(), or if it's being used in an
anti-DOS use case where it ultimately only needs to be harder than
alternate ways of attacking the system.

Rekeying every five minutes doesn't necessarily help with respect
to ASLR, but it might reduce the amount of the output stream that
would be available to the attacker in order to be able to attack the
get_random_{long,int}() generator, and it also reduces the value of
doing that attack to only compromising the ASLR for those processes
started within that five minute window.

Cheers,

- Ted

P.S.  I'm using ASLR as an example use case, above; of course we will
need to make similar examinations of the other uses of
get_random_{long,int}().

P.P.S.  We might also want to think about potentially defining
get_random_{long,int}() to be unambiguously strong, and then creating
a get_weak_random_{long,int}() which, on platforms where performance
is a consideration, uses a weaker algorithm, perhaps with some
kind of rekeying interval.


Re: [PATCH v5 1/4] siphash: add cryptographically secure PRF

2016-12-16 Thread Theodore Ts'o
On Fri, Dec 16, 2016 at 03:17:39PM -0500, George Spelvin wrote:
> > That's a nice analysis. Might one conclude from that that hsiphash is
> > not useful for our purposes? Or does it still remain useful for
> > network facing code?
> 
> I think for attacks where the threat is a DoS, it's usable.  The point
> is you only have to raise the cost to equal that of a packet flood.
> (Just like in electronic warfare, the best you can possibly do is force
> the enemy to use broadband jamming.)
> 
> Hash collision attacks just aren't that powerful.  The original PoC
> was against an application that implemented a hard limit on hash chain
> length as a DoS defense, which the attack then exploited to turn it into
> a hard DoS.

What should we do with get_random_int() and get_random_long()?  In
some cases it's being used in performance sensitive areas, and where
anti-DoS protection might be enough.  In others, maybe not so much.

If we rekeyed the secret used by get_random_int() and
get_random_long() frequently (say, every minute or every 5 minutes),
would that be sufficient for current and future users of these
interfaces?
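As a sketch of what such an interval-based rekeying wrapper could look like (userspace C with hypothetical names; the splitmix64 finalizer is a toy stand-in for siphash or chacha20, not the kernel's actual code):

```c
#include <stdint.h>

#define REKEY_INTERVAL 300	/* seconds; the "every 5 minutes" idea */

static uint64_t secret;		/* would be seeded from the CSPRNG */
static uint64_t last_rekey;	/* seconds, from some monotonic clock */
static uint64_t counter;

/* Toy mixing step (splitmix64 finalizer), standing in for a keyed PRF. */
static uint64_t mix64(uint64_t x)
{
	x += 0x9e3779b97f4a7c15ULL;
	x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
	x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
	return x ^ (x >> 31);
}

static uint64_t get_random_long_sketch(uint64_t now)
{
	if (now - last_rekey >= REKEY_INTERVAL) {
		/* A real implementation would draw fresh CSPRNG output here. */
		secret = mix64(secret ^ now);
		last_rekey = now;
	}
	return mix64(secret ^ counter++);
}
```

The point of the interval is visible in the structure: an attacker who recovers the secret only learns outputs generated within one five-minute window.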

- Ted

P.S.  I'll note that my performance figures when testing changes to
get_random_int() were done on a 32-bit x86; Jason, I'm guessing your
figures were using a 64-bit x86 system?  I haven't tried 32-bit ARM
or smaller CPUs (e.g., MIPS, et al.) that might be more likely to be
used on IoT devices, but I'm worried about those too, of course.



Re: [PATCH 4/3] random: use siphash24 instead of md5 for get_random_int/long

2016-12-14 Thread Theodore Ts'o
On Wed, Dec 14, 2016 at 04:10:37AM +0100, Jason A. Donenfeld wrote:
> This duplicates the current algorithm for get_random_int/long, but uses
> siphash24 instead. This comes with several benefits. It's certainly
> faster and more cryptographically secure than MD5. This patch also
> hashes the pid, entropy, and timestamp as fixed width fields, in order
> to increase diffusion.
> 
> The previous md5 algorithm used a per-cpu md5 state, which caused
> successive calls to the function to chain upon each other. While it's
> not entirely clear that this kind of chaining is absolutely necessary
> when using a secure PRF like siphash24, it can't hurt, and the timing of
> the call chain does add a degree of natural entropy. So, in keeping with
> this design, instead of the massive per-cpu 64-byte md5 state, there is
> instead a per-cpu previously returned value for chaining.
> 
> Signed-off-by: Jason A. Donenfeld 
> Cc: Jean-Philippe Aumasson 

The original reason for get_random_int was because the original
urandom algorithms were too slow.  When we moved to chacha20, which is
much faster, I didn't think to revisit get_random_int() and
get_random_long().

One somewhat undesirable aspect of the current algorithm is that we
never change random_int_secret.  So I've been toying with the
following, which is 4 times faster than md5.  (I haven't tried
benchmarking against siphash yet.)

[3.606139] random benchmark!!
[3.606276] get_random_int # cycles: 326578
[3.606317] get_random_int_new # cycles: 95438
[3.607423] get_random_bytes # cycles: 2653388

  - Ted

P.S.  It's interesting to note that siphash24 and chacha20 are both
add-rotate-xor based algorithms.
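For illustration, the ChaCha20 quarter round (RFC 7539) really is nothing but 32-bit adds, rotates, and xors; SipHash's SipRound has the same add-rotate-xor shape on 64-bit words.  A self-contained sketch:

```c
#include <stdint.h>

static inline uint32_t rotl32(uint32_t v, int c)
{
	return (v << c) | (v >> (32 - c));
}

/* ChaCha20 quarter round, per RFC 7539 section 2.1: only ARX operations. */
static void chacha_qr(uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d)
{
	*a += *b; *d ^= *a; *d = rotl32(*d, 16);
	*c += *d; *b ^= *c; *b = rotl32(*b, 12);
	*a += *b; *d ^= *a; *d = rotl32(*d, 8);
	*c += *d; *b ^= *c; *b = rotl32(*b, 7);
}
```

Running it on the RFC's test vector (a=0x11111111, b=0x01020304, c=0x9b8d6f43, d=0x01234567) reproduces the published output.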

diff --git a/drivers/char/random.c b/drivers/char/random.c
index d6876d506220..be172ea75799 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1681,6 +1681,38 @@ static int rand_initialize(void)
 }
 early_initcall(rand_initialize);
 
+static unsigned int get_random_int_new(void);
+
+static int rand_benchmark(void)
+{
+   cycles_t start, finish;
+   int i, out;
+
+   pr_crit("random benchmark!!\n");
+   start = get_cycles();
+   for (i = 0; i < 1000; i++) {
+   get_random_int();
+   }
+   finish = get_cycles();
+   pr_err("get_random_int # cycles: %llu\n", finish - start);
+
+   start = get_cycles();
+   for (i = 0; i < 1000; i++) {
+   get_random_int_new();
+   }
+   finish = get_cycles();
+   pr_err("get_random_int_new # cycles: %llu\n", finish - start);
+
+   start = get_cycles();
+   for (i = 0; i < 1000; i++) {
+   get_random_bytes(&out, sizeof(out));
+   }
+   finish = get_cycles();
+   pr_err("get_random_bytes # cycles: %llu\n", finish - start);
+   return 0;
+}
+device_initcall(rand_benchmark);
+
 #ifdef CONFIG_BLOCK
 void rand_initialize_disk(struct gendisk *disk)
 {
@@ -2064,8 +2096,10 @@ unsigned int get_random_int(void)
__u32 *hash;
unsigned int ret;
 
+#if 0  // force slow path
if (arch_get_random_int(&ret))
return ret;
+#endif
 
hash = get_cpu_var(get_random_int_hash);
 
@@ -2100,6 +2134,38 @@ unsigned long get_random_long(void)
 }
 EXPORT_SYMBOL(get_random_long);
 
+struct random_buf {
+   __u8 buf[CHACHA20_BLOCK_SIZE];
+   int ptr;
+};
+
+static DEFINE_PER_CPU(struct random_buf, batched_entropy);
+
+static void get_batched_entropy(void *buf, int n)
+{
+   struct random_buf *p;
+
+   p = &get_cpu_var(batched_entropy);
+
+   if ((p->ptr == 0) ||
+   (p->ptr + n >= CHACHA20_BLOCK_SIZE)) {
+   extract_crng(p->buf);
+   p->ptr = 0;
+   }
+   BUG_ON(n > CHACHA20_BLOCK_SIZE);
+   memcpy(buf, p->buf + p->ptr, n);
+   p->ptr += n;
+   put_cpu_var(batched_entropy);
+}
+
+static unsigned int get_random_int_new(void)
+{
+   int ret;
+
+   get_batched_entropy(&ret, sizeof(ret));
+   return ret;
+}
+
 /**
  * randomize_page - Generate a random, page aligned address
  * @start: The smallest acceptable address the caller will take.


Re: [PATCH 2/2] fscrypto: don't use on-stack buffer for key derivation

2016-11-15 Thread Theodore Ts'o
On Thu, Nov 03, 2016 at 03:03:02PM -0700, Eric Biggers wrote:
> With the new (in 4.9) option to use a virtually-mapped stack
> (CONFIG_VMAP_STACK), stack buffers cannot be used as input/output for
> the scatterlist crypto API because they may not be directly mappable to
> struct page.  get_crypt_info() was using a stack buffer to hold the
> output from the encryption operation used to derive the per-file key.
> Fix it by using a heap buffer.
> 
> This bug could most easily be observed in a CONFIG_DEBUG_SG kernel
> because this allowed the BUG in sg_set_buf() to be triggered.
> 
> Signed-off-by: Eric Biggers 

This commit is on the fscrypt and dev branches on ext4.git.

  - Ted


Re: [PATCH 1/2] fscrypto: don't use on-stack buffer for filename encryption

2016-11-15 Thread Theodore Ts'o
On Thu, Nov 03, 2016 at 03:03:01PM -0700, Eric Biggers wrote:
> With the new (in 4.9) option to use a virtually-mapped stack
> (CONFIG_VMAP_STACK), stack buffers cannot be used as input/output for
> the scatterlist crypto API because they may not be directly mappable to
> struct page.  For short filenames, fname_encrypt() was encrypting a
> stack buffer holding the padded filename.  Fix it by encrypting the
> filename in-place in the output buffer, thereby making the temporary
> buffer unnecessary.
> 
> This bug could most easily be observed in a CONFIG_DEBUG_SG kernel
> because this allowed the BUG in sg_set_buf() to be triggered.
> 
> Signed-off-by: Eric Biggers 

This commit is on the fscrypt and dev branches on ext4.git.

  - Ted


Re: [PATCH 6/7] random: make /dev/urandom scalable for silly userspace programs

2016-08-21 Thread Theodore Ts'o
On Sun, Aug 21, 2016 at 12:53:15PM +0300, Jan Varho wrote:
> On Mon, Jun 13, 2016 at 6:48 PM, Theodore Ts'o <ty...@mit.edu> wrote:
> > +static inline void maybe_reseed_primary_crng(void)
> > +{
> > +   if (crng_init > 2 &&
> > +   time_after(jiffies, primary_crng.init_time + 
> > CRNG_RESEED_INTERVAL))
> > +   crng_reseed(&primary_crng, &input_pool);
> > +}
> 
> Is the above function (which is now in 4.8-rc2) supposed to do
> something? It seems to have no callers and the maximum value of
> crng_init is 2.

It's dead code.  Its function got moved into _extract_crng(), and you're
right, these days crng_init never gets above 2.  Thanks for pointing
that out.  I'll take it out as a cleanup patch.

 - Ted


Re: [PATCH v6 0/5] /dev/random - a new approach

2016-08-18 Thread Theodore Ts'o
On Thu, Aug 18, 2016 at 08:39:23PM +0200, Pavel Machek wrote:
> 
> But this is the scary part. Not limited to ssh. "We perform the
> largest ever network survey of TLS and SSH servers and present
> evidence that vulnerable keys are surprisingly widespread. We find
> that 0.75% of TLS certificates share keys due to insufficient entropy
> during key generation, and we suspect that another 1.70% come from the
> same faulty implementations and may be susceptible to compromise.
> Even more alarmingly, we are able to obtain RSA private keys for 0.50%
> of TLS hosts and 0.03% of SSH hosts, because their public keys shared
> nontrivial common factors due to entropy problems, and DSA private
> keys for 1.03% of SSH hosts, because of insufficient signature
> randomness"
> 
> https://factorable.net/weakkeys12.conference.pdf

That's a very old paper, and we've made a lot of changes since then.
Before that we weren't accumulating entropy from the interrupt
handler, but only from spinning disk drives, some network interrupts
(but not from all NIC's; it was quite arbitrary), and keyboard and
mouse interrupts.  So hours and hours could go by and you still
wouldn't have accumulated much entropy.

> From my point of view, it would make sense to factor time from RTC and
> mac addresses into the initial hash. Situation in the paper was so bad
> some devices had _completely identical_ keys. We should be able to do
> better than that.

We fixed that **years** ago.  In fact, the authors shared with me an
early look at that paper and I implemented add_device_entropy() over
the July 4th weekend back in 2012.  So we are indeed mixing in MAC
addresses and the hardware clock (if it is initialized that early).
In fact that was one of the first things that I did.  Note that this
doesn't really add much entropy, but it does prevent the GCD attack
from demonstrating completely identical keys.  Hence, we had
remediations in the mainline kernel before the factorable.net paper
was published (not that really helped with devices with embedded
Linux, especially since device manufactures don't see anything wrong
with shipping machines with kernels that are years and years out of
date --- OTOH, these systems were probably also shipping with dozens
of known exploitable holes in userspace, if that's any comfort.
Probably not much if you were planning on deploying lots of IOT
devices in your home network.  :-)

> BTW... 128 interrupts... that's 1.3 seconds, right? Would it make
> sense to wait two seconds if urandom use is attempted before it is
> ready?

That really depends on the system.  We can't assume that people are
using systems with a 100Hz clock interrupt.  More often than not
people are using tickless kernels these days.  That's actually the
problem with changing /dev/urandom to block until things are
initialized.

If you do that, then on some systems Python will use /dev/urandom to
initialize a salt used by the Python dictionaries, to protect against
DOS attacks when Python is used to run web scripts.  This is a
completely irrelevant reason when Python is being used for systemd
generator scripts in early boot, and if /dev/urandom were to block,
then the system ends up doing nothing, and on a tickless kernel hours
and hours can go by on a VM and Python would still be blocked on
/dev/urandom.  And since none of the system scripts are running, there
are no interrupts, and so Python ends up blocking on /dev/urandom for
a very long time.  (Eventually someone will start trying to brute
force passwords on the VM's ssh port, assuming that the VM's firewall
rules allow this, and that will cause interrupts that will eventually
initialize /dev/urandom.  But that could take hours.)

And this, boys and girls, is why we can't make /dev/urandom block
until its pool is initialized.  There's too great of a chance that we
will break userspace, and then Linus will yell at us and revert the
commit.

- Ted


Re: [PATCH v6 0/5] /dev/random - a new approach

2016-08-15 Thread Theodore Ts'o
On Mon, Aug 15, 2016 at 08:13:06AM +0200, Stephan Mueller wrote:
> 
> According to my understanding of NAPI, the network card sends one interrupt 
> when receiving the first packet of a packet stream and then the driver goes 
> into polling mode, disabling the interrupt. So, I cannot see any batching 
> based on some on-board timer where add_interrupt_randomness is affected.
>  
> Can you please elaborate?

From https://wiki.linuxfoundation.org/networking/napi:

NAPI (“New API”) is an extension to the device driver packet
processing framework, which is designed to improve the performance
of high-speed networking. NAPI works through:

* Interrupt mitigation
* High-speed networking can create thousands of interrupts per
  second, all of which tell the system something it already knew:
  it has lots of packets to process. NAPI allows drivers to run
  with (some) interrupts disabled during times of high traffic,
  with a corresponding decrease in system load.
  ...

The idea is to mitigate the CPU load from having a large number of
interrupts.  Spinning in a tight loop, which is what polling does,
doesn't help reduce the CPU load.  So it's *not* what you would want
to do on a CPU with a small core count, or on a battery-operated device.

What you're thinking about is a technique to reduce interrupt latency,
which might be useful on a 32-core server CPU where trading off power
consumption for interrupt latency makes sense.  But NAPI is the exact
reverse thing --- it trades interrupt latency for CPU and power
efficiency.

NAPI in its traditional form works by having the network card *not* send
an interrupt after each packet comes in, and instead accumulate packets
in a buffer.  The interrupt only gets sent after a short timeout, or
when the on-NIC buffer is in danger of filling.  As a result, when
interrupts get sent might be granularized based on some clock --- and
on small systems, there may only be a single CPU on that clock.
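A toy illustration of that granularization effect, using synthetic arrival times rather than real interrupt data: latching timestamps to a coarse timer tick collapses the number of distinct values an attacker has to guess.

```c
#include <stdint.h>
#include <stdbool.h>

/* Count distinct values in a small array; O(n^2) is fine for a demo. */
static int count_distinct(const uint64_t *v, int n)
{
	int distinct = 0;
	for (int i = 0; i < n; i++) {
		bool seen = false;
		for (int j = 0; j < i; j++)
			if (v[j] == v[i])
				seen = true;
		if (!seen)
			distinct++;
	}
	return distinct;
}
```

For example, 64 distinct nanosecond-scale timestamps quantized to a 10 ms tick fall into no more than eight buckets, so the quantized stream carries at most three bits per event instead of the apparent dozens.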

> Well, injecting a trojan to a system in user space as unprivileged user that 
> starts some X11 session and that can perform the following command is all you 
> need to get to the key commands of the console.
> 
> xinput list |   grep -Po 'id=\K\d+(?=.*slave\s*keyboard)' |   xargs -P0 -n1 
> xinput test
> 
> That is fully within reach of not only some agencies but also other folks. It 
> is similar for mice.

This doesn't result in keyboard and mice interrupts, which is how
add_input_randomness() works.  So it's mostly irrelevant.

> When you refer to my Jitter RNG, I think I have shown that its strength comes 
> from the internal state of the CPU (states of the internal building blocks 
> relative to each other which may cause internal wait states, state of branch 
> prediction or pipelines, etc.) and not of the layout of the CPU.

All of this is deterministic.  Just as AES_ENCRYPT(NSA_KEY, COUNTER++)
is completely deterministic and dependent on the internal state of the
PRNG.  But it's not random, and if you don't know NSA_KEY you can't
prove that it's not really random.
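To make that concrete, here is a toy keyed counter stream (the splitmix64 finalizer standing in for AES --- an assumption for illustration, not real crypto) together with a crude monobit test.  The stream is 100% predictable to anyone holding the key, yet its output looks statistically balanced:

```c
#include <stdint.h>

/* Toy PRF standing in for AES_ENCRYPT(key, ctr); fully deterministic. */
static uint64_t toy_prf(uint64_t key, uint64_t ctr)
{
	uint64_t x = key ^ (ctr * 0x9e3779b97f4a7c15ULL);
	x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
	x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
	return x ^ (x >> 31);
}

/* Crude monobit test: fraction of 1-bits over n 64-bit outputs. */
static double ones_fraction(uint64_t key, int n)
{
	long ones = 0;
	for (int i = 0; i < n; i++) {
		uint64_t w = toy_prf(key, (uint64_t)i);
		for (int b = 0; b < 64; b++)
			ones += (w >> b) & 1;
	}
	return (double)ones / (64.0 * n);
}
```

The fraction comes out close to 0.5, exactly as it would for true randomness --- which is why passing statistical tests says nothing about unpredictability.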

> On VMs, the add_disk_randomness is always used with the exception of KVM when 
> using a virtio disk. All other VMs do not use virtio and offer the disk as a 
> SCSI or IDE device. In fact, add_disk_randomness is only disabled when the 
> kernel detects:
> 
> - SDDs
> 
> - virtio
> 
> - use of device mapper

AWS uses para-virtualized SCSI; Google Compute Engine uses
virtio-SCSI.  So the kernel should know that these are virtual
devices, and I'd argue that if we're setting the add_random flag, we
shouldn't be.

> > As far as whether you can get 5-6 bits of entropy from interrupt
> > timings --- that just doesn't pass the laugh test.  The min-entropy
> 
> May I ask what you find amusing? When you have a noise source for which you 
> have no theoretical model, all you can do is to revert to statistical 
> measurements.

So tell me, how much "minimum", "conservative" entropy do the non-IID
tests report when fed as input AES_ENCRYPT(NSA_KEY, COUNTER++)?

Sometimes, laughing is better than crying.  :-)

> Just see the guy that sent an email to linux-crypto today. His MIPS /dev/
> random cannot produce 16 bytes of data within 4 hours (which is similar to 
> what I see on POWER systems). This raises a very interesting security issue: /
> dev/urandom is not seeded properly. And we all know what folks do in the 
> wild: 
> when /dev/random does not produce data, /dev/urandom is used -- all general 
> user space libs (OpenSSL, libgcrypt, nettle, ...) seed from /dev/urandom per 
> default.
>
> And I call that a very insecure state of affairs.

Overestimating entropy that isn't there doesn't actually make things
more secure.  It just makes people feel better.  This is especially
true if the goal is to declare the /dev/urandom pool fully initialized
before userspace is started.  So if the claim is that your "LRNG" can
fully initialize the /dev/urandom pool, and it's fundamentally
using the 

Re: [PATCH v6 0/5] /dev/random - a new approach

2016-08-12 Thread Theodore Ts'o
On Fri, Aug 12, 2016 at 11:34:55AM +0200, Stephan Mueller wrote:
> 
> - correlation: the interrupt noise source is closely correlated to the HID/
> block noise sources. I see that the fast_pool somehow "smears" that 
> correlation. However, I have not seen a full assessment that the correlation 
> is gone away. Given that I do not believe that the HID event values (key 
> codes, mouse coordinates) have any entropy -- the user sitting at the console 
> exactly knows what he pressed and which mouse coordinates are created, and 
> given that for block devices, only the high-resolution time stamp gives any 
> entropy, I am suggesting to remove the HID/block device noise sources and 
> leave the IRQ noise source. Maybe we could record the HID event values to 
> further stir the pool but do not credit it any entropy. Of course, that would 
> imply that the assumed entropy in an IRQ event is revalued. I am currently 
> finishing up an assessment of how entropy behaves in a VM (where I hope that 
> the report is released). Please note that contrary to my initial 
> expectations, the IRQ events are the only noise sources which are almost 
> unaffected by a VMM operation. Hence, IRQs are much better in a VM 
> environment than block or HID noise sources.

The reason why I'm untroubled with leaving them in is because I believe
the quality of the timing information from the HID and block devices
is better than most of the other interrupt sources.  For example, most
network interfaces these days use NAPI, which means interrupts get
coalesced and sent in batch, which means the time of the interrupt is
latched off of some kind of timer --- and on many embedded devices
there is a single oscillator for the entire mainboard.  We only call
add_disk_randomness for rotational devices (e.g., only HDD's, not
SSD's), after the interrupt has been recorded.  Yes, most of the
entropy is probably going to be found in the high entropy time stamp
rather than the jiffies-based timestamp, especially for the hard drive
completion time.

I also tend to take a much more pragmatic viewpoint towards
measurability.  Sure, the human may know what she is typing, and
something about when she typed it (although probably not accurately
enough on a millisecond basis, so even the jiffies number is going to
be not easily predicted), but the analyst sitting behind the desk at
the NSA or the BND or the MSS is probably not going to have access to
that information.

(Whereas the NSA or the BND probably *can* get low-level information
about the Intel x86 CPU's internal implementation, which is why I'm
extremely amused by the argument --- "the internals of the Intel CPU
are **so** complex we can't reverse engineer what's going on inside,
so the jitter RNG *must* be good!"  Note BTW that the NSA has only
said they won't do industrial espionage for economic
gain, not that they won't engage in espionage against industrial
entities at all.  This is why the NSA spying on Petrobras is
considered completely fair game, even if it does enrage the
Brazilians.  :-)

> - entropy estimate: the current entropy heuristics IMHO have nothing to do 
> with the entropy of the data coming in. Currently, the min of first/second/
> third derivative of the Jiffies time stamp is used and capped at 11. That 
> value is the entropy value credited to the event. Given that the entropy 
> rests with the high-res time stamp and not with jiffies or the event value, I 
> think that the heuristic is not helpful. I understand that it underestimates 
> on average the available entropy, but that is the only relationship I see. In 
> my mentioned entropy in VM assessment (plus the BSI report on /dev/random 
> which is unfortunately written in German, but available in the Internet) I 
> did a min entropy calculation based on different min entropy formulas 
> (SP800-90B). That calculation shows that we get from the noise sources is 
> about 5 to 6 bits. On average the entropy heuristic credits between 0.5 and 1 
> bit for events, so it underestimates the entropy. Yet, the entropy heuristic 
> can credit up to 11 bits. Here I think it becomes clear that the current 
> entropy heuristic is not helpful. In addition, on systems where no high-res 
> timer is available, I assume (I have not measured it yet), the entropy 
> heuristic even overestimates the entropy.

The disks on a VM are not rotational disks, so we wouldn't be using
the add_disk_randomness entropy calculation.  And you generally don't
have a keyboard or a mouse attached to the VM, so we would be using
the entropy estimate from the interrupt timing.

As far as whether you can get 5-6 bits of entropy from interrupt
timings --- that just doesn't pass the laugh test.  The min-entropy
formulas are estimates assuming IID data sources, and it's not at all
clear (in fact, i'd argue pretty clearly _not_) that they are IID.  As
I said, take for example the network interfaces, and how NAPI gets
implemented.  And in a VM environment, 

Re: [PATCH v6 0/5] /dev/random - a new approach

2016-08-11 Thread Theodore Ts'o
On Thu, Aug 11, 2016 at 02:24:21PM +0200, Stephan Mueller wrote:
> 
> The following patch set provides a different approach to /dev/random which
> I call Linux Random Number Generator (LRNG) to collect entropy within the 
> Linux
> kernel. The main improvements compared to the legacy /dev/random is to provide
> sufficient entropy during boot time as well as in virtual environments and 
> when
> using SSDs. A secondary design goal is to limit the impact of the entropy
> collection on massive parallel systems and also allow the use accelerated
> cryptographic primitives. Also, all steps of the entropic data processing are
> testable. Finally massive performance improvements are visible at /dev/urandom
> and get_random_bytes.
> 
> The design and implementation is driven by a set of goals described in [1]
> that the LRNG completely implements. Furthermore, [1] includes a
> comparison with RNG design suggestions such as SP800-90B, SP800-90C, and
> AIS20/31.

Given the changes that have landed in Linus's tree for 4.8, how many
of the design goals for your LRNG are still left not yet achieved?

Reading the paper, you are still claiming huge performance
improvements over getrandom and /dev/urandom.  With the use of the
ChaCha20 (and given that you added a ChaCha20 DRBG as well), it's not
clear this is still an advantage over what we currently have.

As far as whether or not you can gather enough entropy at boot time,
what we're really talking about is how much entropy we want to assume
can be gathered from interrupt timings, since what you do in your code
is not all that different from what the current random driver is
doing.  So it's pretty easy to turn a knob and say, "hey presto, we
can get all of the entropy we need before userspace starts!"  But
justifying this is much harder, and using statistical tests isn't
really sufficient as far as I'm concerned.

Cheers,

- Ted


Re: [PATCH v2] RANDOM: ATH9K RNG delivers zero bits of entropy

2016-08-10 Thread Theodore Ts'o
On Tue, Aug 09, 2016 at 02:04:44PM +, Jason Cooper wrote:
> 
> iiuc, Ted, you're saying using the hw_random framework would be
> disasterous because despite most drivers having a default quality of 0,
> rngd assumes 1 bit of entropy for every bit read?

Sorry, what I was trying to say (but failed) was that bypassing the
hwrng framework and injecting entropy directly into the entropy pool
was disastrous.

> Thankfully, most hw_random drivers don't set the quality.  So unless the
> user sets the default_quality param, it's zero.

The fact that this is "most" and not "all" does scare me a little.

As far as I'm concerned *all* hw_random drivers should set quality to
zero, since it should be up to the system administrator.  Perhaps the
one exception is virtio_rng, since if you don't trust the hypervisor,
the security of the VM is hopeless.  That being said, I have seen
configurations of KVM which use:

-object rng-random,filename=/dev/urandom,id=rng0 \
-device virtio-rng-pci,rng=rng0

Which is somewhat non-ideal.  (Try running od -x /dev/random on such a
guest system.)

- Ted



Re: [PATCH v2] RANDOM: ATH9K RNG delivers zero bits of entropy

2016-08-09 Thread Theodore Ts'o
On Tue, Aug 09, 2016 at 06:30:03AM +, Pan, Miaoqing wrote:
> Agree with Jason's point, also understand Stephan's concern.  The
> date rate can be roughly estimated by 'cat /dev/random |rngtest -c
> 1000', the average speed is .294Kibits/s. I will sent the patch
> to disable ath9k RNG by default.

If the ATH9K is generating some random amount of data, but you don't
know how random, and you're gathering it opportunistically --- for
example, suppose a wireless driver gets the radio's signal strength
through the normal course of its operation, and while there might not
be that much randomness for someone who can observe the exact details
of how the phone is positioned in the room --- but for which the
analyst sitting in Fort Meade won't know whether or not the phone is
on the desk, or in a knapsack under the table, the right interface to
use is:

   void add_device_randomness(const void *buf, unsigned int size);

This won't increase the entropy count, but if you have the bit of
potential randomness "for free", you might as well contribute it to
the entropy pool.  If it turns out that the chip was manufactured in
China, and the MSS has backdoored it out the wazoo, it won't do any
harm --- where as using the hwrng framework would be disastrous.

Cheers,

- Ted


Re: getrandom waits for a long time when /dev/random is insufficiently read from

2016-07-30 Thread Theodore Ts'o
On Fri, Jul 29, 2016 at 01:31:14PM -0400, Alex Xu wrote:
> 
> My understanding was that all three methods of obtaining entropy from
> userspace all receive data from the CSPRNG in the kernel, and that the
> only difference is that /dev/random and getrandom may block depending
> on the kernel's estimate of the currently available entropy.

This is incorrect.

/dev/random is a legacy interface which dates back to a time when
people didn't have as much trust in the cryptographic primitives ---
when there were concerns that the NSA might have put a back door into
SHA-1, for example.  (As it turns out, we were wrong; the NSA put the
back door into Dual EC DRBG.)  So it uses a strategy of an extremely
conservative entropy estimator, and will only allow N bytes to be read
from the /dev/random pool as the entropy estimator believes that it has
gathered at least N bytes of entropy from environmental noise.

/dev/urandom uses a different output pool from /dev/random (the random
and urandom pools both draw from an common input pool).  Originally
the /dev/urandom pool drew from the input pool as needed, but it
wouldn't block if there was insufficient entropy.  Over time, it now
has limits about how quickly it can draw from the input pool, and it
behaves more and more like a CSPRNG.  In fact, in the most recent set
of patches which Linus has accepted for v4.8-rc1, the urandom pool has
been replaced by an actual CSPRNG using the ChaCha-20 stream cipher.

The getrandom(2) system call uses the same output pool (4.7 and
earlier) or CSPRNG (starting with v4.8-rc1) as /dev/urandom.  The big
difference is that it blocks until we know for sure that the output
pool or CSPRNG has been seeded with 128 bits of entropy.  We don't do
this with /dev/urandom for backwards compatibility reasons.  (For
example, if we did make /dev/urandom block until it was seeded, it
would break systemd, because systemd and programs run by systemd draw
from /dev/urandom before it has been initialized, and if /dev/urandom
were to block, the boot would hang, and with the system quiescent, we
wouldn't get much environmental noise, and the system would hang for
hours.)
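For completeness, a minimal sketch of invoking getrandom(2) directly via syscall(2) --- a raw syscall because, as noted, glibc doesn't wrap it yet; Linux-only, kernel 3.17 or later:

```c
#include <stddef.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef GRND_NONBLOCK
#define GRND_NONBLOCK	0x0001	/* from <linux/random.h> */
#endif

/* Blocks until the CSPRNG has been seeded (unlike reading /dev/urandom),
 * then behaves like /dev/urandom. */
static long getrandom_raw(void *buf, size_t len, unsigned int flags)
{
	return syscall(SYS_getrandom, buf, len, flags);
}
```

Passing GRND_NONBLOCK instead of 0 makes the call fail with EAGAIN rather than block if the pool is not yet initialized, which is the safe pattern for early-boot callers.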

> When qemu is started with -object rng-random,filename=/dev/urandom, and
> immediately (i.e. with no initrd and as the first thing in init):
> 
> 1. the guest runs dd if=/dev/random, there is no blocking and tons of
> data goes to the screen. the data appears to be random.
> 
> 2. the guest runs getrandom with any requested amount (tested 1 byte
> and 16 bytes) and no flags, it blocks for 90-110 seconds while the
> "non-blocking pool is initialized". the returned data appears to be
> random.
> 
> 3. the guest runs getrandom with GRND_RANDOM with any requested amount,
> it returns the desired amount or possibly less, but in my experience at
> least 10 bytes. the returned data appears to be random.
> 
> I believe that the difference between cases 1 and 2 is a bug, since
> based on my previous statement, in this scenario, getrandom should
> never block.

This is correct; and it has been fixed in the patches in v4.8-rc1.
The patch which fixes this has been marked for backporting to stable
kernels:

commit 3371f3da08cff4b75c1f2dce742d460539d6566d
Author: Theodore Ts'o <ty...@mit.edu>
Date:   Sun Jun 12 18:11:51 2016 -0400

random: initialize the non-blocking pool via add_hwgenerator_randomness()

If we have a hardware RNG and are using the in-kernel rngd, we should
use this to initialize the non-blocking pool so that getrandom(2)
doesn't block unnecessarily.

Cc: sta...@kernel.org
Signed-off-by: Theodore Ts'o <ty...@mit.edu>

Basically, the urandom pool (now a CSPRNG) wasn't getting initialized
from the hardware random number generator.  Most people didn't notice
because very few people actually *use* hardware random number
generators (although they're much more common in VM's, which is how
you're using it), and use of getrandom(2) is still relatively rare,
given that glibc hasn't yet seen fit to support it.

Cheers,

- Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[GIT PULL] /dev/random driver fix for 4.8

2016-07-28 Thread Theodore Ts'o
The following changes since commit 86a574de4590ffe6fd3f3ca34cdcf655a78e36ec:

  random: strengthen input validation for RNDADDTOENTCNT (2016-07-03 17:09:33 -0400)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random.git tags/random_for_linus_stable

for you to fetch changes up to 59b8d4f1f5d26e4ca92172ff6dcd1492cdb39613:

  random: use for_each_online_node() to iterate over NUMA nodes (2016-07-27 23:30:25 -0400)


Fix a boot failure on systems with non-contiguous NUMA id's.


Theodore Ts'o (1):
  random: use for_each_online_node() to iterate over NUMA nodes

 drivers/char/random.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)



Re: [GIT PULL] /dev/random driver changes for 4.8

2016-07-27 Thread Theodore Ts'o
On Mon, Jul 25, 2016 at 02:44:24AM -0400, Theodore Ts'o wrote:
> The following changes since commit 1a695a905c18548062509178b98bc91e67510864:
> 
>   Linux 4.7-rc1 (2016-05-29 09:29:24 -0700)
> 
> are available in the git repository at:
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random.git 
> tags/random_for_linus
> 
> for you to fetch changes up to 86a574de4590ffe6fd3f3ca34cdcf655a78e36ec:
> 
>   random: strengthen input validation for RNDADDTOENTCNT (2016-07-03 17:09:33 
> -0400)

Hi Linus,

Are you planning on pulling the random tree this cycle?  I'm not sure
if you wanted to let it soak for a few days in linux-next, or whether
you want to wait another full release cycle, given that the random
tree had gotten dropped from linux-next some time ago without my
realizing it.  (The code has actually been soaking for 1.5 releases,
since I wanted to give it lots of soak time, but of course, it would
have been more helpful if it actually was in linux-next.  Sigh...)

  - Ted


Re: [GIT PULL] /dev/random driver changes for 4.8

2016-07-25 Thread Theodore Ts'o
On Mon, Jul 25, 2016 at 05:15:15PM +1000, Stephen Rothwell wrote:
> Hi Ted,
> 
> On Mon, 25 Jul 2016 02:44:24 -0400 Theodore Ts'o <ty...@mit.edu> wrote:
> >
> > The following changes since commit 1a695a905c18548062509178b98bc91e67510864:
> > 
> >   Linux 4.7-rc1 (2016-05-29 09:29:24 -0700)
> > 
> > are available in the git repository at:
> > 
> >   git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random.git 
> > tags/random_for_linus
> > 
> > for you to fetch changes up to 86a574de4590ffe6fd3f3ca34cdcf655a78e36ec:
> > 
> >   random: strengthen input validation for RNDADDTOENTCNT (2016-07-03 
> > 17:09:33 -0400)
> 
> Of course none of this has been in linux-next since the random tree was
> dropped in March because it had not been updated for more than a year at
> that point.
> 
> However, at least half of these look like bug fixes (cced to stable).
> 
> Should I reinstate the random tree to linux-next?

Yes, please do.  It was getting zero-day checks, and I assumed it was
in linux-next; I should have checked, though.

- Ted



[GIT PULL] /dev/random driver changes for 4.8

2016-07-25 Thread Theodore Ts'o
The following changes since commit 1a695a905c18548062509178b98bc91e67510864:

  Linux 4.7-rc1 (2016-05-29 09:29:24 -0700)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random.git tags/random_for_linus

for you to fetch changes up to 86a574de4590ffe6fd3f3ca34cdcf655a78e36ec:

  random: strengthen input validation for RNDADDTOENTCNT (2016-07-03 17:09:33 -0400)


A number of improvements for the /dev/random driver; the most
important is the use of a ChaCha20-based CRNG for /dev/urandom, which
is faster, more efficient, and easier to make scalable for
silly/abusive userspace programs that want to read from /dev/urandom
in a tight loop on NUMA systems.

This set of patches also improves entropy gathering on VM's running on
Microsoft Azure, and will take advantage of a hw random number
generator (if present) to initialize the /dev/urandom pool.


Eric Biggers (1):
  random: properly align get_random_int_hash

Stephan Mueller (1):
  random: add interrupt callback to VMBus IRQ handler

Theodore Ts'o (6):
  random: initialize the non-blocking pool via add_hwgenerator_randomness()
  random: print a warning for the first ten uninitialized random users
  random: replace non-blocking pool with a Chacha20-based CRNG
  random: make /dev/urandom scalable for silly userspace programs
  random: add backtracking protection to the CRNG
  random: strengthen input validation for RNDADDTOENTCNT

 crypto/chacha20_generic.c |  61 --
 drivers/char/random.c | 482 ++---
 drivers/hv/vmbus_drv.c|   3 +
 include/crypto/chacha20.h |   1 +
 lib/Makefile  |   2 +-
 lib/chacha20.c|  79 
 6 files changed, 468 insertions(+), 160 deletions(-)
 create mode 100644 lib/chacha20.c


Re: [PATCH 7/7] random: add backtracking protection to the CRNG

2016-06-26 Thread Theodore Ts'o
On Sun, Jun 26, 2016 at 08:47:53PM +0200, Pavel Machek wrote:
> 
> You are basically trying to turn CRNG into one way hash function here,
> right? Do you have any explanation that it has the required
> properties?

Well, not really.  A CRNG has the property that if you generate a
series of outputs: O_N-1, O_N, O_N+1, etc., knowledge of O_N does not
give you any special knowledge with respect to O_N+1 or O_N-1.

The anti-backtracking protection means that when we generate O_N, we
use O_N+1 to mutate the state used for the CRNG; specifically, we are
XOR'ing O_N+1 into the state.  Now let's suppose that state gets
exposed.  Even if you know O_N, that's not going to let you know
O_N+1, so knowledge of the exposed state post XOR with O_N+1 isn't
going to help you get back the original state.

More generally, if we assume ChaCha20 is secure, that means that you
can't derive the key even if you have known plaintext.  The output of
the CRNG is basically the keystream --- what you have after you XOR
the ciphertext with the plaintext.  If ChaCha20 is secure, knowledge
of large portions of the keystream should not help you determine the
key, which is why knowledge of O_N-1 and O_N won't help you derive
either (a) the state of the CRNG, aka the ChaCha20 key, or (b) O_N+1.
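The dataflow described above can be modeled in a few lines.  This is a
toy sketch of my own, not the kernel's code: toy_prf() is a stand-in
for the ChaCha20 block function (it is NOT a real cipher), and
KEY_WORDS stands in for the 256-bit key.  The point is only the
anti-backtracking step: the caller sees O_N, while O_N+1 never leaves
the generator and is consumed to overwrite the key.

```c
/* Toy model of anti-backtracking; toy_prf() is a stand-in PRF only. */
#include <stdint.h>
#include <string.h>

#define KEY_WORDS 8

/* Stand-in PRF: mixes key and counter into one output word per key word. */
static void toy_prf(const uint32_t key[KEY_WORDS], uint32_t ctr,
                    uint32_t out[KEY_WORDS])
{
    for (int i = 0; i < KEY_WORDS; i++) {
        uint32_t x = key[i] ^ (ctr * 0x9e3779b9u) ^ (uint32_t)i;
        x ^= x << 13; x ^= x >> 17; x ^= x << 5;   /* xorshift mixing */
        out[i] = x;
    }
}

struct toy_crng { uint32_t key[KEY_WORDS]; uint32_t ctr; };

/* Emit one output block O_N, then fold the unreleased O_N+1 into the key. */
void toy_extract(struct toy_crng *c, uint32_t out[KEY_WORDS])
{
    uint32_t next[KEY_WORDS];

    toy_prf(c->key, c->ctr++, out);    /* O_N: handed to the caller    */
    toy_prf(c->key, c->ctr++, next);   /* O_N+1: stays internal        */
    for (int i = 0; i < KEY_WORDS; i++)
        c->key[i] ^= next[i];          /* backtracking protection      */
}
```

After toy_extract() returns, recovering the pre-call key from the
mutated state would require knowing O_N+1, which was never released.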

Cheers,

- Ted


Re: [PATCH 5/7] random: replace non-blocking pool with a Chacha20-based CRNG

2016-06-26 Thread Theodore Ts'o
On Sun, Jun 26, 2016 at 08:47:43PM +0200, Pavel Machek wrote:
> Ok, so lets say I'm writing some TLS server, and I know that traffic
> is currently heavy because it was heavy in last 5 minutes. Would it
> make sense for me to request 128M of randomness from /dev/urandom, and
> then use that internally, to avoid the syscall overhead?

Probably not.  Keep in mind that even requesting 256 bit keys one at
a time, each request takes only 1.7 microseconds.  The time to do the
Diffie-Hellman will vary depending on whether you are doing DH in a
log field or an ECC field, but whether it's around 18-20ms or 3-4ms,
it's over *three* orders of magnitude more than the time it takes to
generate a random session key.

So trying to grab 1M of randomness and managing it in your TLS server
is almost certainly optimizing for The Wrong Thing.  (Not to mention
the potential for disaster if that randomness gets stolen via some
kind of wild pointer bug, etc., etc.)

This is why I think it's a waste of time talking about trying to use
AVX or SIMD optimized version of ChaCha20.  Even if the cost to do the
XSAVE / XRESTORE doesn't crush the optimization of advantages of
ChaCha20, and if you ignore the costs of adding backtracking defenses,
etc., bloating the kernel by adding support for the crypto layer
doesn't make sense when the generic ChaCha20 already gets you down
into the microsecond arena.  You might be able to get the session
key generation down by another .3 or .4 microseconds, but even if you
cut it down by half to .9 microseconds, the public key operations and
the diffie-hellman operations are going to make such savings be almost
completely unnoticeable.

- Ted


Re: [PATCH v5 0/7] /dev/random - a new approach

2016-06-20 Thread Theodore Ts'o
On Mon, Jun 20, 2016 at 09:00:49PM +0200, Stephan Mueller wrote:
> 
> The time stamp maintenance is the exact cause for the correlation: one HID 
> event triggers:
> 
> - add_interrupt_randomness which takes high-res time stamp, Jiffies and some 
> pointers
> 
> - add_input_randomness which takes high-res time stamp, Jiffies and HID event 
> value
> 
> The same applies to disk events. My suggestion is to get rid of the double 
> counting of time stamps for one event.
> 
> And I guess I do not need to stress that correlation of data that is supposed 
> to be entropic is not good :-)

What is your concern, specifically?  If it is in the entropy
accounting, there is more entropy in HID event interrupts, so I don't
think adding the extra 1/64th bit of entropy is going to be problematic.

If it is that there are two timestamps that are closely correlated
being added into the pool, the add_interrupt_randomness() path is
going to mix that timestamp with the interrupt timings from 63 other
interrupts before it is mixed into the input pool, while the
add_input_randomness() path mixes it directly into the pool.  So if
you think there is a way this could be leveraged into an attack,
please give specifics --- but I think we're on pretty solid ground
here.

Cheers,

- Ted



Re: [PATCH 5/7] random: replace non-blocking pool with a Chacha20-based CRNG

2016-06-20 Thread Theodore Ts'o
On Mon, Jun 20, 2016 at 05:49:17PM +0200, Stephan Mueller wrote:
> 
> Is speed everything we should care about? What about:
> 
> - offloading of crypto operation from the CPU

In practice CPU offload is not helpful, and in fact, in most cases is
harmful, when one is only encrypting a tiny amount of data.  That's
because the cost of setup and teardown, not to mention key scheduling,
dominates.  This is less of an issue with the SIMD / AVX
optimizations --- but that's because those are CPU instructions, and
there really isn't any CPU offloading going on.

> - potentially additional security features a hardware cipher may provide like 
> cache coloring attack resistance?

Um, have you even taken a *look* at how ChaCha20 is implemented?
*What* cache coloring attack is possible at all, period?

Hint: where are the lookup tables?  Where are the data-dependent
memory accesses in the ChaCha20 core?

- Ted


Re: [PATCH v5 0/7] /dev/random - a new approach

2016-06-20 Thread Theodore Ts'o
On Mon, Jun 20, 2016 at 07:51:59AM +0200, Stephan Mueller wrote:
> 
> - Correlation of noise sources: as outlined in [1] chapter 1, the three noise 
> sources of the legacy /dev/random implementation have a high correlation. 
> Such 
> correlation is due to the fact that a HID/disk event at the same time 
> produces 
> an IRQ event. The time stamp (which deliver the majority of entropy) of both 
> events are correlated. I would think that the maintenance of the fast_pools 
> partially breaks that correlation to some degree though, yet how much the 
> correlation is broken is unknown.

We add more entropy for disk events only if they are rotational (i.e.,
not flash devices), and the justification for this is Don Davis's
"Cryptographic randomness from air turbulence in disk drives" paper.
There has also been work showing that by measuring disk completion
times, you can actually notice when someone walks by the server
(because the vibrations from footsteps affect disk head settling
times).  In fact how you mount HDD's to isolate them from vibration
can make a huge difference to the overall performance of your system.

As far as HID's are concerned, I will note that in 99.99% of the
systems, if you have direct physical access to the system, you
probably are screwed from a security perspective anyway.  Yes, one
could imagine systems where the user might have access to keyboard and
the mouse, and not be able to do other interesting things (such as
inserting a BadUSB device into one of the ports, rebooting the system
into single user mode, etc.).  But now you have to assume the user can
actually manipulate the input devices down to jiffies 1ms or cycle
counter (nanonsecond) level of granularity...

All of this being said, I will freely admit that the heuristics of
entropy collection are by far one of the weaker aspects of the system.
Ultimately there is no way to be 100% accurate with **any** entropy
system, since ENCRYPT(NSA_KEY, COUNTER++) has zero entropy, but good
luck finding a entropy estimation system that can detect that.

> - The delivery of entropic data from the input_pool to the 
> (non)blocking_pools 
> is not atomic (for the lack of better word), i.e. one block of data with a 
> given entropy content is injected into the (non)blocking_pool where the 
> output 
> pool is still locked (the user cannot obtain data during that injection 
> time). 
> With Ted's new patch set, two 64 bit blocks from the fast_pools are injected 
> into the ChaCha20 DRNG. So, it is clearly better than previously. But still, 
> with the blocking_pool, we face that issue. The reason for that issue is 
> outlined in [1] 2.1. In the pathological case with an active attack, 
> /dev/random could have a security strength of 2 * 128 bits of and not 2^128 
> bits when reading 128 bits out of it (the numbers are for illustration only, 
> it is a bit better as /dev/random is woken up at random_read_wakeup_bits 
> intervals -- but that number can be set to dangerous low levels down to 8 
> bits).

I believe what you are referring to is addressed by the avalanche
reseeding feature.  Yes, that can be turned down by
random_read_wakeup_bits, but (a) this requires root (at which point
the game is up), and (b) in practice no one touches that knob.  You're
right that it would probably be better to reject attempts to set that
number to too-small a number, or perhaps remove the knob entirely.

Personally, I don't really use /dev/random, nor would I recommend it
for most application programmers.  At this point, getrandom(2) really
is the preferred interface unless you have some very specialized
needs.  
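For completeness, here is a sketch (mine, not from this mail) of how
an application programmer might call getrandom(2) robustly, assuming
the glibc wrapper is available; the loop handles EINTR and the
theoretical short read for requests larger than 256 bytes.

```c
/* Sketch: robust wrapper around getrandom(2) for key material. */
#include <errno.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/random.h>

/* Fill buf with len random bytes; 0 on success, -1 with errno set. */
int fill_random(unsigned char *buf, size_t len)
{
    size_t done = 0;

    while (done < len) {
        ssize_t n = getrandom(buf + done, len - done, 0);
        if (n < 0) {
            if (errno == EINTR)     /* interrupted by a signal: retry */
                continue;
            return -1;
        }
        done += (size_t)n;
    }
    return 0;
}
```

For requests of 256 bytes or less, a single getrandom() call cannot
return short once the CRNG is seeded, so the loop is belt-and-braces.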


Cheers,

- Ted


Re: [PATCH 5/7] random: replace non-blocking pool with a Chacha20-based CRNG

2016-06-20 Thread Theodore Ts'o
On Mon, Jun 20, 2016 at 01:19:17PM +0800, Herbert Xu wrote:
> On Mon, Jun 20, 2016 at 01:02:03AM -0400, Theodore Ts'o wrote:
> > 
> > It's work that I'm not convinced is worth the gain?  Perhaps I
> > shouldn't have buried the lede, but repeating a paragraph from later
> > in the message:
> > 
> > >So even if the AVX optimized code is 100% faster than the generic version,
> > >it would change the time needed to create a 256 byte session key from
> >1.68 microseconds to 1.55 microseconds.  And this is ignoring the
> >extra overhead needed to set up AVX, the fact that this will require
> >the kernel to do extra work doing the XSAVE and XRESTORE because of
> >the use of the AVX registers, etc.
> 
> We do have figures on the efficiency of the accelerated chacha
> implementation on 256-byte requests (I've picked the 8-block
> version):

Sorry, I typo'ed this.  s/bytes/bits/.  256 bits / 32 bytes is the
much more common amount that someone might be trying to extract, to
get a 256 **bit** session key.

And also note my comments about how we need to permute the key
directly, and not just go through the set_key abstraction.  And when
you did your benchmarks, how often was XSAVE / XRESTORE happening ---
in between every single block operation?

Remember, what we're talking about for getrandom(2) in the most common
case is a syscall, extracting 32 bytes' worth of keystream, ***NOT***
XOR'ing it with a plaintext buffer, and then permuting the key.

So simply doing chacha20 encryption in a tight loop in the kernel
might not be a good proxy for what would actually happen in real life
when someone calls getrandom(2).  (Another good question to ask is
when someone might be needing to generate millions of 256-bit session
keys per second, when the D-H setup, even if you were using ECDH,
would be largely dominating the time for the connection setup anyway.)

Cheers,

- Ted


Re: [PATCH 5/7] random: replace non-blocking pool with a Chacha20-based CRNG

2016-06-19 Thread Theodore Ts'o
On Mon, Jun 20, 2016 at 09:25:28AM +0800, Herbert Xu wrote:
> > Yes, I understand the argument that the networking stack is now
> > requiring the crypto layer --- but not all IOT devices may necessarily
> > require the IP stack (they might be using some alternate wireless
> > communications stack) and I'd much rather not make things worse.
> 
> Sure, but 99% of the kernels out there will have a crypto API.
> So why not use it if it's there and use the standalone chacha
> code otherwise?

It's work that I'm not convinced is worth the gain?  Perhaps I
shouldn't have buried the lede, but repeating a paragraph from later
in the message:

   So even if the AVX optimized code is 100% faster than the generic version,
   it would change the time needed to create a 256 byte session key from
   1.68 microseconds to 1.55 microseconds.  And this is ignoring the
   extra overhead needed to set up AVX, the fact that this will require
   the kernel to do extra work doing the XSAVE and XRESTORE because of
   the use of the AVX registers, etc.

So in the absolute best case, this improves the time needed to create
a 256 bit session key by 0.13 microseconds.  And that assumes that the
extra setup and teardown overhead of an AVX optimized ChaCha20
(including the XSAVE and XRESTORE of the AVX registers, etc.) don't
end up making the CRNG **slower**.

The thing to remember about these optimizations is that they are great
for bulk encryption, but that's not what the getrandom(2) and
get_random_bytes() are used for, in general.  We don't need to create
multiple megabytes of random numbers at a time.  We need to create
them 256 bits at a time, with anti-backtracking protections in
between.  Think of this as the random number equivalent of artisanal
beer making, as opposed to Budweiser beer, which ferments the beer
literally in pipelines.  :-)

Yes, Budweiser may be made more efficiently using continuous
fermentation --- but would you want to drink it?   And if you have to
constantly start and stop the continuous fermentation pipeline, the net
result can actually be less efficient compared to doing it right in
the first place.

   - Ted

P.S.  I haven't measured this to see, mainly because I really don't
care about the difference between 1.68 vs 1.55 microseconds, but there
is a good chance in the crypto layer that it might be a good idea to
have the system be smart enough to automatically fall back to using
the **non** optimized version if you only need to encrypt a small
amount of data.


Re: [PATCH 5/7] random: replace non-blocking pool with a Chacha20-based CRNG

2016-06-19 Thread Theodore Ts'o
On Wed, Jun 15, 2016 at 10:59:08PM +0800, Herbert Xu wrote:
> I think you should be accessing this through the crypto API rather
> than going direct.  We already have at least one accelerated
> implementation of chacha20 and there may well be more of them
> in future.  Going through the crypto API means that you will
> automatically pick up the best implementation for the platform.

While there are some benefits of going through the crypto API, there
are some downsides as well:

A) Unlike using ChaCha20 in cipher mode, we only need the keystream,
and we don't need to XOR the output with plaintext.  We could supply a
dummy zero-filled buffer to achieve the same result, but then the
"accelerated" version is having to do an extra memory reference.  Even
if the L1 cache is big enough that we're not going all the way out
to DRAM, we're putting additional pressure on the D-cache.

B) The anti-backtracking feature involves taking the existing key and
XOR'ing it with unused output from the keystream.  We can't do that
using the Crypto API without keeping our own copy of the key and then
calling setkey --- which means yet more extra memory references.

C) Simply compiling in the Crypto layer and the generic ChaCha20
handling (all of which does extra work that we would then be undoing
in the random layer --- and I haven't included the extra code needed
in the random driver to interface with the crypto layer) costs an
extra 20k.  That's roughly the amount of extra kernel bloat that the
Linux kernel grew in its allnoconfig builds from version to version
from 3.0 to 3.16.  I don't have the numbers from the more recent
kernels, but roughly speaking, we would be responsible for **all** of
the extra kernel bloat (and if there was any other bloat, we would be
helping to double it) in the kernel release where this code went in.
I suspect the folks involved with the kernel tinification efforts
wouldn't exactly be pleased with this.

Yes, I understand the argument that the networking stack is now
requiring the crypto layer --- but not all IOT devices may necessarily
require the IP stack (they might be using some alternate wireless
communications stack) and I'd much rather not make things worse.


The final thing is that it's not at all clear that the accelerated
implementation is all that important anyway.  Consider the following
two results using the unaccelerated ChaCha20:

% dd if=/dev/urandom bs=4M count=32 of=/dev/null
32+0 records in
32+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 1.18647 s, 113 MB/s

% dd if=/dev/urandom bs=32 count=4194304 of=/dev/null
4194304+0 records in
4194304+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 7.08294 s, 18.9 MB/s

So in both cases, we are reading 128M from the CRNG.  In the first
case, we see the sort of speed we would get if we were using the CRNG
for something illegitimate, such as "dd if=/dev/urandom of=/dev/sdX
bs=4M" (because they were too lazy to type "apt-get install nwipe").

In the second case, we see the use of /dev/urandom in a much more
reasonable, proper, real-world use case for /dev/urandom, which is some
userspace process needing a 256 bit session key for a TLS connection,
or some such.  In this case, we see that the other overheads of
providing the anti-backtracking protection, system call overhead,
etc., completely dominate the speed of the core crypto primitive.

So even if the AVX optimized code is 100% faster than the generic version,
it would change the time needed to create a 256 byte session key from
1.68 microseconds to 1.55 microseconds.  And this is ignoring the
extra overhead needed to set up AVX, the fact that this will require
the kernel to do extra work doing the XSAVE and XRESTORE because of
the use of the AVX registers, etc.

The bottom line is that optimized ChaCha20 optimizations might be
great for bulk encryption, but for the purposes of generating 256 byte
session keys, I don't think the costs outweigh the benefits.

Cheers,

- Ted



Re: [PATCH v4 0/5] /dev/random - a new approach

2016-06-18 Thread Theodore Ts'o
On Fri, Jun 17, 2016 at 03:56:13PM +0200, David Jaša wrote:
> I was thinking along the lines that "almost every important package
> supports FreeBSD as well where they have to handle the condition so
> option to switch to Rather Break Than Generate Weak Keys would be nice"
> - but I didn't expect that systemd could be a roadblock here. :-/

It wasn't just systemd; it also prevented OpenWRT and Ubuntu Quantal
systems from booting.

> I was also thinking of little devices where OpenWRT or proprietary
> Linux-based systems run that ended up with predictable keys way too
> ofter (or as in OpenWRT's case, with cumbersome tutorials how to
> generate keys elsewhere).

OpenWRT and other embedded devices (a) generally use a single master
oscillator to drive everything, and (b) often use RISC architectures
such as MIPS.

Which means that arguments of the form ``the Intel L1 / L2 cache
architecture is so complicated that no human could possibly
figure out how they would affect timing calculations, and besides, my
generator passes FIPS 140-2 tests (never mind AES(NSA_KEY, CNTR++)
also passes the FIPS 140-2 statistical tests)'' --- which I normally
have trouble believing --- are even harder for me to believe.

At the end of the day, with these devices you really badly need a
hardware RNG.  We can't generate randomness out of thin air.  The only
thing you really can do requires user space help, which is to generate
keys lazily, or as late as possible, so you can gather as much entropy
as you can --- and to feed in measurements from the WiFi (RSSI
measurements, MAC addresses seen, etc.)  This won't help much if you
have an FBI van parked outside your house trying to carry out a
TEMPEST attack, but hopefully it provides some protection against a
remote attacker who isn't trying to carry out an on-premises attack.

Cheers,

- Ted


Re: [PATCH 5/7] random: replace non-blocking pool with a Chacha20-based CRNG

2016-06-13 Thread Theodore Ts'o
On Mon, Jun 13, 2016 at 08:00:33PM +0200, Stephan Mueller wrote:
> 
> 1. The ChaCha20 is seeded with 256 bits (let us assume it is full entropy)
> 
> 2. The ChaCha20 block operation shuffles the 256 bits of entropy over the 512 
> bit state -- already here we see that after shuffling, the entropy to bit 
> ratio fell from (256 bits of entropy / 256 data bits) to (256 bits of entropy 
> / 512 data bits).
> 
> 3. The code above directly returns the output of the ChaCha20 round to the 
> caller. Considering the discussion in step 2, I would assume that the entropy 
> content of the output size is cut in half.

This is a CSRNG, not an entropy pool.  One could make the same
argument about an AES counter-based DRBG.  So you could start with a
256-bit AES key, and then after you return ENCRYPT(AES_KEY, COUNTER++),
you've "halved" the entropy in the AES CTR-DRBG state.  Oh, noes!!!

The reason why I don't view this as a problem is because we're
assuming that the cipher algorithm is secure.  With /dev/urandom we
were always emitting more bytes than we had entropy available, because
not blocking was considered more important.  Previously we were
relying on the security of SHA-1.  With AES CTR-DRBG, you rely on the
security with AES.  With this patch, we're relying on the security of
Chacha20.  Given that this is being used in TLS for more and more
mobile connections, I'm comfortable with that choice.

For someone who doesn't trust the security of our underlying
algorithms, and want to trust solely in the correctness of our entropy
estimation algorithms, they can always use /dev/random.


So to be clear about what we're doing, ChaCha20 uses as its internal
state 16 bytes of constant, followed by 32 bytes of key, followed by
16 bytes of counter.  It is a stream cipher where each successive
block in the keystream is generated by bumping the counter, and
encryption is done by XOR'ing the keystream with the plaintext.

If you ignore the backtracking protection introduced in patch 7/7 in
this series (which uses unreleased portions of the previous keystream
to mutate the key for future CRNG outputs), what we're doing is using
the keystream as the output for the CRNG/CSRNG, and morally, this is
no different from a CTR-DRBG as defined in SP 800-90A.
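The state layout just described can be written out explicitly.  This
is a sketch of mine following the original ChaCha20 definition rather
than the kernel source; the last 16 bytes are shown split into a
64-bit block counter and a 64-bit nonce, which the text above lumps
together as "counter".

```c
/* Sketch: ChaCha20's 16-word initial state (constant | key | counter). */
#include <stdint.h>
#include <string.h>

void chacha20_init_state(uint32_t state[16],
                         const uint8_t key[32],
                         uint64_t counter,
                         const uint8_t nonce[8])
{
    static const uint8_t sigma[16] = "expand 32-byte k";

    memcpy(&state[0], sigma, 16);       /* words 0-3:   constant       */
    memcpy(&state[4], key, 32);         /* words 4-11:  256-bit key    */
    state[12] = (uint32_t)counter;      /* words 12-13: block counter  */
    state[13] = (uint32_t)(counter >> 32);
    memcpy(&state[14], nonce, 8);       /* words 14-15: nonce          */
}
```

Each keystream block is the result of 20 rounds over this state (plus
the feed-forward addition), with the counter words bumped per block;
the CRNG hands out that keystream directly as its output.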

Cheers,

- Ted


[PATCH 6/7] random: make /dev/urandom scalable for silly userspace programs

2016-06-13 Thread Theodore Ts'o
On a 4 socket NUMA system where a large number of application
threads were all trying to read from /dev/urandom, this could result
in the system spending 80% of its time contending on the global
urandom spinlock.  The application should have used its own PRNG, but
let's try to keep it from running, lemming-like, straight over the
locking cliff.

Reported-by: Andi Kleen <a...@linux.intel.com>
Signed-off-by: Theodore Ts'o <ty...@mit.edu>
---
 drivers/char/random.c | 62 +++
 1 file changed, 58 insertions(+), 4 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 841f9a8..d640865 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -434,6 +434,8 @@ struct crng_state primary_crng = {
  */
 static int crng_init = 0;
 #define crng_ready() (likely(crng_init > 0))
+static void _extract_crng(struct crng_state *crng,
+ __u8 out[CHACHA20_BLOCK_SIZE]);
 static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE]);
 static void process_random_ready_list(void);
 
@@ -754,6 +756,16 @@ static void credit_entropy_bits_safe(struct entropy_store *r, int nbits)
 
 static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
 
+#ifdef CONFIG_NUMA
+/*
+ * Hack to deal with crazy userspace programs when they are all trying
+ * to access /dev/urandom in parallel.  The programs are almost
+ * certainly doing something terribly wrong, but we'll work around
+ * their brain damage.
+ */
+static struct crng_state **crng_node_pool __read_mostly;
+#endif
+
 static void crng_initialize(struct crng_state *crng)
 {
int i;
@@ -815,7 +827,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
if (num == 0)
return;
} else
-   extract_crng(buf.block);
+   _extract_crng(&primary_crng, buf.block);
spin_lock_irqsave(&primary_crng.lock, flags);
for (i = 0; i < 8; i++) {
unsigned long   rv;
@@ -835,19 +847,26 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
spin_unlock_irqrestore(&primary_crng.lock, flags);
 }
 
+static inline void maybe_reseed_primary_crng(void)
+{
+   if (crng_init > 2 &&
+   time_after(jiffies, primary_crng.init_time + CRNG_RESEED_INTERVAL))
+   crng_reseed(&primary_crng, &input_pool);
+}
+
 static inline void crng_wait_ready(void)
 {
wait_event_interruptible(crng_init_wait, crng_ready());
 }
 
-static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE])
+static void _extract_crng(struct crng_state *crng,
+ __u8 out[CHACHA20_BLOCK_SIZE])
 {
unsigned long v, flags;
-   struct crng_state *crng = &primary_crng;
 
if (crng_init > 1 &&
time_after(jiffies, crng->init_time + CRNG_RESEED_INTERVAL))
-   crng_reseed(crng, &input_pool);
+   crng_reseed(crng, crng == &primary_crng ? &input_pool : NULL);
spin_lock_irqsave(&crng->lock, flags);
if (arch_get_random_long(&v))
crng->state[14] ^= v;
@@ -857,6 +876,19 @@ static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE])
spin_unlock_irqrestore(&crng->lock, flags);
 }
 
+static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE])
+{
+   struct crng_state *crng = NULL;
+
+#ifdef CONFIG_NUMA
+   if (crng_node_pool)
+   crng = crng_node_pool[numa_node_id()];
+   if (crng == NULL)
+#endif
crng = &primary_crng;
+   _extract_crng(crng, out);
+}
+
 static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
 {
ssize_t ret = 0, i;
@@ -1575,9 +1607,31 @@ static void init_std_data(struct entropy_store *r)
  */
 static int rand_initialize(void)
 {
+#ifdef CONFIG_NUMA
+   int i;
+   int num_nodes = num_possible_nodes();
+   struct crng_state *crng;
+   struct crng_state **pool;
+#endif
+
init_std_data(&input_pool);
init_std_data(&blocking_pool);
crng_initialize(&primary_crng);
+
+#ifdef CONFIG_NUMA
+   pool = kmalloc(num_nodes * sizeof(void *),
+  GFP_KERNEL|__GFP_NOFAIL|__GFP_ZERO);
+   for (i=0; i < num_nodes; i++) {
+   crng = kmalloc_node(sizeof(struct crng_state),
+   GFP_KERNEL | __GFP_NOFAIL, i);
+   spin_lock_init(&crng->lock);
+   crng_initialize(crng);
+   pool[i] = crng;
+
+   }
+   mb();
+   crng_node_pool = pool;
+#endif
return 0;
 }
 early_initcall(rand_initialize);
-- 
2.5.0



[PATCH 3/7] random: add interrupt callback to VMBus IRQ handler

2016-06-13 Thread Theodore Ts'o
From: Stephan Mueller <smuel...@chronox.de>

The Hyper-V Linux Integration Services use the VMBus implementation for
communication with the Hypervisor. VMBus registers its own interrupt
handler that completely bypasses the common Linux interrupt handling.
This implies that the interrupt entropy collector is not triggered.

This patch adds the interrupt entropy collection callback into the VMBus
interrupt handler function.

Cc: sta...@kernel.org
Signed-off-by: Stephan Mueller <stephan.muel...@atsec.com>
Signed-off-by: Stephan Mueller <smuel...@chronox.de>
Signed-off-by: Theodore Ts'o <ty...@mit.edu>
---
 drivers/char/random.c  | 1 +
 drivers/hv/vmbus_drv.c | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 74596d3..64e35a4 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -946,6 +946,7 @@ void add_interrupt_randomness(int irq, int irq_flags)
/* award one bit for the contents of the fast pool */
credit_entropy_bits(r, credit + 1);
 }
+EXPORT_SYMBOL_GPL(add_interrupt_randomness);
 
 #ifdef CONFIG_BLOCK
 void add_disk_randomness(struct gendisk *disk)
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 952f20f..e82f7e1 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -42,6 +42,7 @@
 #include 
 #include 
 #include 
+#include <linux/random.h>
 #include "hyperv_vmbus.h"
 
 static struct acpi_device  *hv_acpi_dev;
@@ -806,6 +807,8 @@ static void vmbus_isr(void)
else
tasklet_schedule(hv_context.msg_dpc[cpu]);
}
+
+   add_interrupt_randomness(HYPERVISOR_CALLBACK_VECTOR, 0);
 }
 
 
-- 
2.5.0



[PATCH 1/7] random: initialize the non-blocking pool via add_hwgenerator_randomness()

2016-06-13 Thread Theodore Ts'o
If we have a hardware RNG and are using the in-kernel rngd, we should
use this to initialize the non-blocking pool so that getrandom(2)
doesn't block unnecessarily.

Cc: sta...@kernel.org
Signed-off-by: Theodore Ts'o <ty...@mit.edu>
---
 drivers/char/random.c | 16 +++-
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 0158d3b..4e2627a 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1849,12 +1849,18 @@ void add_hwgenerator_randomness(const char *buffer, size_t count,
 {
struct entropy_store *poolp = &input_pool;
 
-   /* Suspend writing if we're above the trickle threshold.
-* We'll be woken up again once below random_write_wakeup_thresh,
-* or when the calling thread is about to terminate.
-*/
-   wait_event_interruptible(random_write_wait, kthread_should_stop() ||
+   if (unlikely(nonblocking_pool.initialized == 0))
+   poolp = &nonblocking_pool;
+   else {
+   /* Suspend writing if we're above the trickle
+* threshold.  We'll be woken up again once below
+* random_write_wakeup_thresh, or when the calling
+* thread is about to terminate.
+*/
+   wait_event_interruptible(random_write_wait,
+kthread_should_stop() ||
ENTROPY_BITS(&input_pool) <= random_write_wakeup_bits);
+   }
mix_pool_bytes(poolp, buffer, count);
credit_entropy_bits(poolp, entropy);
 }
-- 
2.5.0



[PATCH 2/7] random: print a warning for the first ten uninitialized random users

2016-06-13 Thread Theodore Ts'o
Since systemd is consistently using /dev/urandom before it is
initialized, we can't see the other potentially dangerous users of
/dev/urandom immediately after boot.  So print the first ten such
complaints instead.

Cc: sta...@kernel.org
Signed-off-by: Theodore Ts'o <ty...@mit.edu>
---
 drivers/char/random.c | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 4e2627a..74596d3 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1458,12 +1458,16 @@ random_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
 static ssize_t
 urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
 {
+   static int maxwarn = 10;
int ret;
 
-   if (unlikely(nonblocking_pool.initialized == 0))
-   printk_once(KERN_NOTICE "random: %s urandom read "
-   "with %d bits of entropy available\n",
-   current->comm, nonblocking_pool.entropy_total);
+   if (unlikely(nonblocking_pool.initialized == 0) &&
+   maxwarn > 0) {
+   maxwarn--;
+   printk(KERN_NOTICE "random: %s: uninitialized urandom read "
+  "(%d bytes read, %d bits of entropy available)\n",
+  current->comm, nbytes, nonblocking_pool.entropy_total);
+   }
 
nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
ret = extract_entropy_user(&nonblocking_pool, buf, nbytes);
-- 
2.5.0



[PATCH 4/7] random: properly align get_random_int_hash

2016-06-13 Thread Theodore Ts'o
From: Eric Biggers <ebigge...@gmail.com>

get_random_long() reads from the get_random_int_hash array using an
unsigned long pointer.  For this code to be guaranteed correct on all
architectures, the array must be aligned to an unsigned long boundary.

Cc: sta...@kernel.org
Signed-off-by: Eric Biggers <ebigge...@gmail.com>
Signed-off-by: Theodore Ts'o <ty...@mit.edu>
---
 drivers/char/random.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 64e35a4..83f5cd0 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1778,13 +1778,15 @@ int random_int_secret_init(void)
return 0;
 }
 
+static DEFINE_PER_CPU(__u32 [MD5_DIGEST_WORDS], get_random_int_hash)
+   __aligned(sizeof(unsigned long));
+
 /*
  * Get a random word for internal kernel use only. Similar to urandom but
  * with the goal of minimal entropy pool depletion. As a result, the random
  * value is not cryptographically secure but for several uses the cost of
  * depleting entropy is too high
  */
-static DEFINE_PER_CPU(__u32 [MD5_DIGEST_WORDS], get_random_int_hash);
 unsigned int get_random_int(void)
 {
__u32 *hash;
-- 
2.5.0



[PATCH 5/7] random: replace non-blocking pool with a Chacha20-based CRNG

2016-06-13 Thread Theodore Ts'o
The CRNG is faster, and we don't pretend to track entropy usage in the
CRNG any more.

Signed-off-by: Theodore Ts'o <ty...@mit.edu>
---
 crypto/chacha20_generic.c |  61 
 drivers/char/random.c | 374 +-
 include/crypto/chacha20.h |   1 +
 lib/Makefile  |   2 +-
 lib/chacha20.c|  79 ++
 5 files changed, 353 insertions(+), 164 deletions(-)
 create mode 100644 lib/chacha20.c

diff --git a/crypto/chacha20_generic.c b/crypto/chacha20_generic.c
index da9c899..1cab831 100644
--- a/crypto/chacha20_generic.c
+++ b/crypto/chacha20_generic.c
@@ -15,72 +15,11 @@
 #include 
 #include 
 
-static inline u32 rotl32(u32 v, u8 n)
-{
-   return (v << n) | (v >> (sizeof(v) * 8 - n));
-}
-
 static inline u32 le32_to_cpuvp(const void *p)
 {
return le32_to_cpup(p);
 }
 
-static void chacha20_block(u32 *state, void *stream)
-{
-   u32 x[16], *out = stream;
-   int i;
-
-   for (i = 0; i < ARRAY_SIZE(x); i++)
-   x[i] = state[i];
-
-   for (i = 0; i < 20; i += 2) {
-   x[0]  += x[4];x[12] = rotl32(x[12] ^ x[0],  16);
-   x[1]  += x[5];x[13] = rotl32(x[13] ^ x[1],  16);
-   x[2]  += x[6];x[14] = rotl32(x[14] ^ x[2],  16);
-   x[3]  += x[7];x[15] = rotl32(x[15] ^ x[3],  16);
-
-   x[8]  += x[12];   x[4]  = rotl32(x[4]  ^ x[8],  12);
-   x[9]  += x[13];   x[5]  = rotl32(x[5]  ^ x[9],  12);
-   x[10] += x[14];   x[6]  = rotl32(x[6]  ^ x[10], 12);
-   x[11] += x[15];   x[7]  = rotl32(x[7]  ^ x[11], 12);
-
-   x[0]  += x[4];x[12] = rotl32(x[12] ^ x[0],   8);
-   x[1]  += x[5];x[13] = rotl32(x[13] ^ x[1],   8);
-   x[2]  += x[6];x[14] = rotl32(x[14] ^ x[2],   8);
-   x[3]  += x[7];x[15] = rotl32(x[15] ^ x[3],   8);
-
-   x[8]  += x[12];   x[4]  = rotl32(x[4]  ^ x[8],   7);
-   x[9]  += x[13];   x[5]  = rotl32(x[5]  ^ x[9],   7);
-   x[10] += x[14];   x[6]  = rotl32(x[6]  ^ x[10],  7);
-   x[11] += x[15];   x[7]  = rotl32(x[7]  ^ x[11],  7);
-
-   x[0]  += x[5];x[15] = rotl32(x[15] ^ x[0],  16);
-   x[1]  += x[6];x[12] = rotl32(x[12] ^ x[1],  16);
-   x[2]  += x[7];x[13] = rotl32(x[13] ^ x[2],  16);
-   x[3]  += x[4];x[14] = rotl32(x[14] ^ x[3],  16);
-
-   x[10] += x[15];   x[5]  = rotl32(x[5]  ^ x[10], 12);
-   x[11] += x[12];   x[6]  = rotl32(x[6]  ^ x[11], 12);
-   x[8]  += x[13];   x[7]  = rotl32(x[7]  ^ x[8],  12);
-   x[9]  += x[14];   x[4]  = rotl32(x[4]  ^ x[9],  12);
-
-   x[0]  += x[5];x[15] = rotl32(x[15] ^ x[0],   8);
-   x[1]  += x[6];x[12] = rotl32(x[12] ^ x[1],   8);
-   x[2]  += x[7];x[13] = rotl32(x[13] ^ x[2],   8);
-   x[3]  += x[4];x[14] = rotl32(x[14] ^ x[3],   8);
-
-   x[10] += x[15];   x[5]  = rotl32(x[5]  ^ x[10],  7);
-   x[11] += x[12];   x[6]  = rotl32(x[6]  ^ x[11],  7);
-   x[8]  += x[13];   x[7]  = rotl32(x[7]  ^ x[8],   7);
-   x[9]  += x[14];   x[4]  = rotl32(x[4]  ^ x[9],   7);
-   }
-
-   for (i = 0; i < ARRAY_SIZE(x); i++)
-   out[i] = cpu_to_le32(x[i] + state[i]);
-
-   state[12]++;
-}
-
 static void chacha20_docrypt(u32 *state, u8 *dst, const u8 *src,
 unsigned int bytes)
 {
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 83f5cd0..841f9a8 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -261,6 +261,7 @@
 #include 
 #include 
 #include 
+#include <crypto/chacha20.h>
 
 #include 
 #include 
@@ -413,6 +414,29 @@ static struct fasync_struct *fasync;
 static DEFINE_SPINLOCK(random_ready_list_lock);
 static LIST_HEAD(random_ready_list);
 
+struct crng_state {
+   __u32   state[16];
+   unsigned long   init_time;
+   spinlock_t  lock;
+};
+
+struct crng_state primary_crng = {
+   .lock = __SPIN_LOCK_UNLOCKED(primary_crng.lock),
+};
+
+/*
+ * crng_init =  0 --> Uninitialized
+ * 1 --> Initialized
+ * 2 --> Initialized from input_pool
+ *
+ * crng_init is protected by primary_crng->lock, and only increases
+ * its value (from 0->1->2).
+ */
+static int crng_init = 0;
+#define crng_ready() (likely(crng_init > 0))
+static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE]);
+static void process_random_ready_list(void);
+
 /**
  *
  * OS independent entropy store.   Here are the functions which handle
@@ -442,10 +466,15 @@ struct entropy_store {
__u8 last_data[EXTRACT_SIZE];
 };
 
+static ssize_t extract_entropy(struct entropy_store *r, void *buf,
+  size_t nbytes, int min, int rsvd);

[PATCH 7/7] random: add backtracking protection to the CRNG

2016-06-13 Thread Theodore Ts'o
Signed-off-by: Theodore Ts'o <ty...@mit.edu>
---
 drivers/char/random.c | 54 ++-
 1 file changed, 49 insertions(+), 5 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index d640865..963a6a9 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -436,7 +436,8 @@ static int crng_init = 0;
 #define crng_ready() (likely(crng_init > 0))
 static void _extract_crng(struct crng_state *crng,
  __u8 out[CHACHA20_BLOCK_SIZE]);
-static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE]);
+static void _crng_backtrack_protect(struct crng_state *crng,
+   __u8 tmp[CHACHA20_BLOCK_SIZE], int used);
 static void process_random_ready_list(void);
 
 /**
@@ -826,8 +827,11 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
num = extract_entropy(r, &buf, 32, 16, 0);
if (num == 0)
return;
-   } else
+   } else {
_extract_crng(&primary_crng, buf.block);
+   _crng_backtrack_protect(&primary_crng, buf.block,
+   CHACHA20_KEY_SIZE);
+   }
spin_lock_irqsave(&primary_crng.lock, flags);
for (i = 0; i < 8; i++) {
unsigned long   rv;
@@ -889,9 +893,46 @@ static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE])
_extract_crng(crng, out);
 }
 
+/*
+ * Use the leftover bytes from the CRNG block output (if there is
+ * enough) to mutate the CRNG key to provide backtracking protection.
+ */
+static void _crng_backtrack_protect(struct crng_state *crng,
+   __u8 tmp[CHACHA20_BLOCK_SIZE], int used)
+{
+   unsigned long   flags;
+   __u32   *s, *d;
+   int i;
+
+   used = round_up(used, sizeof(__u32));
+   if (used + CHACHA20_KEY_SIZE > CHACHA20_BLOCK_SIZE) {
+   extract_crng(tmp);
+   used = 0;
+   }
spin_lock_irqsave(&crng->lock, flags);
s = (__u32 *) &tmp[used];
d = &crng->state[4];
+   for (i=0; i < 8; i++)
+   *d++ ^= *s++;
spin_unlock_irqrestore(&crng->lock, flags);
+}
+
+static void crng_backtrack_protect(__u8 tmp[CHACHA20_BLOCK_SIZE], int used)
+{
+   struct crng_state *crng = NULL;
+
+#ifdef CONFIG_NUMA
+   if (crng_node_pool)
+   crng = crng_node_pool[numa_node_id()];
+   if (crng == NULL)
+#endif
crng = &primary_crng;
+   _crng_backtrack_protect(crng, tmp, used);
+}
+
 static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
 {
-   ssize_t ret = 0, i;
+   ssize_t ret = 0, i = CHACHA20_BLOCK_SIZE;
__u8 tmp[CHACHA20_BLOCK_SIZE];
int large_request = (nbytes > 256);
 
@@ -916,6 +957,7 @@ static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
buf += i;
ret += i;
}
+   crng_backtrack_protect(tmp, i);
 
/* Wipe data just written to memory */
memzero_explicit(tmp, sizeof(tmp));
@@ -1473,8 +1515,10 @@ void get_random_bytes(void *buf, int nbytes)
if (nbytes > 0) {
extract_crng(tmp);
memcpy(buf, tmp, nbytes);
-   memzero_explicit(tmp, nbytes);
-   }
+   crng_backtrack_protect(tmp, nbytes);
+   } else
+   crng_backtrack_protect(tmp, CHACHA20_BLOCK_SIZE);
+   memzero_explicit(tmp, sizeof(tmp));
 }
 EXPORT_SYMBOL(get_random_bytes);
 
-- 
2.5.0



Re: [PATCH-v3 0/5] random: replace urandom pool with a CRNG

2016-05-30 Thread Theodore Ts'o
On Mon, May 30, 2016 at 10:53:22AM -0700, Andi Kleen wrote:
> 
> It should work the same on larger systems, the solution scales
> naturally to lots of sockets. It's not clear it'll help enough on systems
> with a lot more cores per socket, like a Xeon Phi. But for now it should
> be good enough.

One change which I'm currently making is to use kmalloc_node() instead
of kmalloc() for the per-NUMA-node state, and I suspect *that* is
going to make quite a lot of difference on those systems where the
ratio of remote to local memory access times is larger (as I assume it
probably would be on really big systems).
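The point of the per-node states only pays off if lookups fall back
safely before the pool exists; here's a small userspace sketch of the
lookup-with-fallback pattern that extract_crng() uses in these patches.
numa_node_id() is stubbed, and select_crng() is an illustrative name —
in the kernel the per-node states come from kmalloc_node() so each
node's CRNG state lives in node-local memory.

```c
/*
 * Userspace sketch of the per-node CRNG lookup with fallback to the
 * primary state.  numa_node_id() is a stub; names are illustrative.
 */
#include <assert.h>
#include <stddef.h>

struct crng_state { int node; };

static struct crng_state primary_crng = { .node = -1 };
static struct crng_state **crng_node_pool;	/* NULL until initialized */

static int numa_node_id(void) { return 0; }	/* stub: always node 0 */

static struct crng_state *select_crng(void)
{
	struct crng_state *crng = NULL;

	if (crng_node_pool)
		crng = crng_node_pool[numa_node_id()];
	if (crng == NULL)	/* fall back before the pool exists */
		crng = &primary_crng;
	return crng;
}
```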

- Ted


Re: [PATCH 2/5] random: make /dev/urandom scalable for silly userspace programs

2016-05-30 Thread Theodore Ts'o
On Mon, May 30, 2016 at 08:03:59AM +0200, Stephan Mueller wrote:
> >  static int rand_initialize(void)
> >  {
> > +#ifdef CONFIG_NUMA
> > +   int i;
> > +   int num_nodes = num_possible_nodes();
> > +   struct crng_state *crng;
> > +
> > +   crng_node_pool = kmalloc(num_nodes * sizeof(void *),
> > +GFP_KERNEL|__GFP_NOFAIL);
> > +
> > +   for (i=0; i < num_nodes; i++) {
> > +   crng = kmalloc(sizeof(struct crng_state),
> > +  GFP_KERNEL | __GFP_NOFAIL);
> > +   initialize_crng(crng);
> 
> Could you please help me understand the logic flow here: The NUMA secondary 
> DRNGs are initialized before the input/blocking pools and the primary DRNG.
> 
> The initialization call uses get_random_bytes() for the secondary DRNGs. But 
> since the primary DRNG is not yet initialized, where does the 
> get_random_bytes 
> gets its randomness from?

Yeah, I screwed up.  The hunk of code starting with "crng_node_pool =
kmalloc(..." and the for loop afterwards should be moved to after
_initialize_crng().  Serves me right for not testing CONFIG_NUMA
before sending out the patches.

This is *not* about adding entropy; as you've noted, this is done very
early in boot up, before there has been any chance for any kind of
entropy to be collected in any of the input pools.  It's more of a
insurance policy just in case on some platform, if it turns out that
assuming a bit's worth of entropy per interrupt was hopelessly
over-optimistic, at least the starting point will be different across
different kernels (and maybe different boot times, but on the sorts of
platforms where I'm most concerned, there may not be a real-time clock
and almost certainly not a architectural hwrng in the CPU).

   - Ted


[PATCH-v3 0/5] random: replace urandom pool with a CRNG

2016-05-29 Thread Theodore Ts'o
By using a CRNG to replace the urandom pool, we address a number of
complaints which Stephan Mueller has been concerned about.  We now use
a much more aggressive interrupt sampling system to quickly initialize
a CRNG which gets used in place of the original non-blocking pool.
This tends to get initialized *very* quickly (before the devices are
finished being probed.)  Like Stephan's proposal, this assumes that we
can get a bit of entropy per interrupt, which may be problematic on
some architectures.  So after we do this quick-and-dirty
initialization, we then fall back to the slower, more conservative
interrupt sampling system to fill the input pool, and we will do a
catastrophic reseeding once we get 128 bits using the slower but more
conservative system, and every five minutes afterwards, if possible.

In addition, on NUMA systems we make the CRNG state per-NUMA socket, to
address the NUMA locking contention problem which Andi Kleen has been
complaining about.  I'm not entirely sure this will work well on the
crazy big SGI systems, but they are rare.  Whether they are rarer than
abusive userspace programs that are continuously pounding /dev/urandom
is unclear.  If necessary we can make a config option to turn off the
per-NUMA socket hack if it proves to be problematic.

Note: I didn't propose this for merging in 4.7 because I wanted to
   further refine the reseeding logic and because I wanted to get more
   feedback.  My plan is to merge these changes for the 4.8 merge
   window.
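The "every five minutes" reseed checks in these patches lean on the
kernel's wraparound-safe time_after() comparison of jiffies.  A minimal
userspace sketch of that comparison follows; the interval constant and
crng_needs_reseed() name are illustrative, not the kernel's definitions.

```c
/*
 * Userspace sketch of the wraparound-safe jiffies comparison behind
 * the CRNG_RESEED_INTERVAL checks.  Constants assume HZ=100 and are
 * illustrative only.
 */
#include <assert.h>
#include <stdint.h>

#define CRNG_RESEED_INTERVAL (300u * 100u)	/* ~5 minutes at HZ=100 */

/* True iff a is after b, even if the tick counter has wrapped. */
static int time_after(uint32_t a, uint32_t b)
{
	return (int32_t)(b - a) < 0;
}

static int crng_needs_reseed(uint32_t jiffies_now, uint32_t init_time)
{
	return time_after(jiffies_now, init_time + CRNG_RESEED_INTERVAL);
}
```

The signed-difference trick is why the check keeps working when jiffies
wraps around zero, which a plain `now > init_time + interval` would get
wrong.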

These patches are also available at:
git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random.git

Changes since -v2:
  * Rebased to v4.7-rc1
  * Improved/reworked CRNG reseeding and backtracking protection
  * Preseed the CRNG state from system data
  * Added fix to properly align the get_random_int_hash[] array

Eric Biggers (1):
  random: properly align get_random_int_hash

Stephan Mueller (1):
  random: add interrupt callback to VMBus IRQ handler

Theodore Ts'o (3):
  random: replace non-blocking pool with a Chacha20-based CRNG
  random: make /dev/urandom scalable for silly userspace programs
  random: add backtracking protection to the CRNG

 crypto/chacha20_generic.c |  61 ---
 drivers/char/random.c | 446 --
 drivers/hv/vmbus_drv.c|   3 +
 include/crypto/chacha20.h |   1 +
 lib/Makefile  |   2 +-
 lib/chacha20.c|  79 
 6 files changed, 438 insertions(+), 154 deletions(-)
 create mode 100644 lib/chacha20.c

-- 
2.5.0



[PATCH 5/5] random: properly align get_random_int_hash

2016-05-29 Thread Theodore Ts'o
From: Eric Biggers <ebigge...@gmail.com>

get_random_long() reads from the get_random_int_hash array using an
unsigned long pointer.  For this code to be guaranteed correct on all
architectures, the array must be aligned to an unsigned long boundary.

Signed-off-by: Eric Biggers <ebigge...@gmail.com>
Signed-off-by: Theodore Ts'o <ty...@mit.edu>
---
 drivers/char/random.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 860862f..90fb569 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -2033,13 +2033,15 @@ int random_int_secret_init(void)
return 0;
 }
 
+static DEFINE_PER_CPU(__u32 [MD5_DIGEST_WORDS], get_random_int_hash)
+   __aligned(sizeof(unsigned long));
+
 /*
  * Get a random word for internal kernel use only. Similar to urandom but
  * with the goal of minimal entropy pool depletion. As a result, the random
  * value is not cryptographically secure but for several uses the cost of
  * depleting entropy is too high
  */
-static DEFINE_PER_CPU(__u32 [MD5_DIGEST_WORDS], get_random_int_hash);
 unsigned int get_random_int(void)
 {
__u32 *hash;
-- 
2.5.0



[PATCH 2/5] random: make /dev/urandom scalable for silly userspace programs

2016-05-29 Thread Theodore Ts'o
On a 4-socket NUMA system where a large number of application threads
were all trying to read from /dev/urandom, this contention can result
in the system spending 80% of its time spinning on the global urandom
spinlock.  The application should have used its own PRNG, but let's
try to keep it from running, lemming-like, straight over the locking
cliff.

Reported-by: Andi Kleen <a...@linux.intel.com>
Signed-off-by: Theodore Ts'o <ty...@mit.edu>
---
 drivers/char/random.c | 57 +++
 1 file changed, 53 insertions(+), 4 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 3a4d865..1778c84 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -434,6 +434,8 @@ struct crng_state primary_crng = {
  */
 static int crng_init = 0;
 #define crng_ready() (likely(crng_init >= 2))
+static void _extract_crng(struct crng_state *crng,
+ __u8 out[CHACHA20_BLOCK_SIZE]);
 static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE]);
 static void process_random_ready_list(void);
 
@@ -754,6 +756,17 @@ static void credit_entropy_bits_safe(struct entropy_store *r, int nbits)
 
 static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
 
+#ifdef CONFIG_NUMA
+/*
+ * Hack to deal with crazy userspace programs when they are all trying
+ * to access /dev/urandom in parallel.  The programs are almost
+ * certainly doing something terribly wrong, but we'll work around
+ * their brain damage.
+ */
+static struct crng_state **crng_node_pool __read_mostly;
+#endif
+
+
 static void _initialize_crng(struct crng_state *crng)
 {
int i;
@@ -774,11 +787,13 @@ static void _initialize_crng(struct crng_state *crng)
crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
 }
 
+#ifdef CONFIG_NUMA
 static void initialize_crng(struct crng_state *crng)
 {
_initialize_crng(crng);
spin_lock_init(&crng->lock);
 }
+#endif
 
 static int crng_fast_load(__u32 pool[4])
 {
@@ -818,7 +833,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
if (num == 0)
return;
} else
-   extract_crng(buf.block);
+   _extract_crng(&primary_crng, buf.block);
spin_lock_irqsave(&primary_crng.lock, flags);
for (i = 0; i < 8; i++) {
unsigned long   rv;
@@ -838,19 +853,26 @@ static void crng_reseed(struct crng_state *crng, struct 
entropy_store *r)
spin_unlock_irqrestore(&primary_crng.lock, flags);
 }
 
+static inline void maybe_reseed_primary_crng(void)
+{
+   if (crng_init > 2 &&
+   time_after(jiffies, primary_crng.init_time + CRNG_RESEED_INTERVAL))
+   crng_reseed(&primary_crng, &input_pool);
+}
+
 static inline void crng_wait_ready(void)
 {
wait_event_interruptible(crng_init_wait, crng_ready());
 }
 
-static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE])
+static void _extract_crng(struct crng_state *crng,
+ __u8 out[CHACHA20_BLOCK_SIZE])
 {
unsigned long v, flags;
-   struct crng_state *crng = &primary_crng;
 
if (crng_init > 2 &&
time_after(jiffies, crng->init_time + CRNG_RESEED_INTERVAL))
-   crng_reseed(crng, &input_pool);
+   crng_reseed(crng, crng == &primary_crng ? &input_pool : NULL);
spin_lock_irqsave(&crng->lock, flags);
if (arch_get_random_long(&v))
crng->state[14] ^= v;
@@ -860,6 +882,17 @@ static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE])
spin_unlock_irqrestore(&crng->lock, flags);
 }
 
+static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE])
+{
+#ifndef CONFIG_NUMA
+   struct crng_state *crng = &primary_crng;
+#else
+   struct crng_state *crng = crng_node_pool[numa_node_id()];
+#endif
+
+   _extract_crng(crng, out);
+}
+
 static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
 {
ssize_t ret = 0, i;
@@ -1573,6 +1606,22 @@ static void init_std_data(struct entropy_store *r)
  */
 static int rand_initialize(void)
 {
+#ifdef CONFIG_NUMA
+   int i;
+   int num_nodes = num_possible_nodes();
+   struct crng_state *crng;
+
+   crng_node_pool = kmalloc(num_nodes * sizeof(void *),
+GFP_KERNEL|__GFP_NOFAIL);
+
+   for (i=0; i < num_nodes; i++) {
+   crng = kmalloc(sizeof(struct crng_state),
+  GFP_KERNEL | __GFP_NOFAIL);
+   initialize_crng(crng);
+   crng_node_pool[i] = crng;
+
+   }
+#endif
init_std_data(&input_pool);
init_std_data(&blocking_pool);
_initialize_crng(&primary_crng);
-- 
2.5.0



[PATCH 1/5] random: replace non-blocking pool with a Chacha20-based CRNG

2016-05-29 Thread Theodore Ts'o
The CRNG is faster, and we don't pretend to track entropy usage in the
CRNG any more.

Signed-off-by: Theodore Ts'o <ty...@mit.edu>
---
 crypto/chacha20_generic.c |  61 
 drivers/char/random.c | 350 ++
 include/crypto/chacha20.h |   1 +
 lib/Makefile  |   2 +-
 lib/chacha20.c|  79 +++
 5 files changed, 340 insertions(+), 153 deletions(-)
 create mode 100644 lib/chacha20.c

diff --git a/crypto/chacha20_generic.c b/crypto/chacha20_generic.c
index da9c899..1cab831 100644
--- a/crypto/chacha20_generic.c
+++ b/crypto/chacha20_generic.c
@@ -15,72 +15,11 @@
 #include 
 #include 
 
-static inline u32 rotl32(u32 v, u8 n)
-{
-   return (v << n) | (v >> (sizeof(v) * 8 - n));
-}
-
 static inline u32 le32_to_cpuvp(const void *p)
 {
return le32_to_cpup(p);
 }
 
-static void chacha20_block(u32 *state, void *stream)
-{
-   u32 x[16], *out = stream;
-   int i;
-
-   for (i = 0; i < ARRAY_SIZE(x); i++)
-   x[i] = state[i];
-
-   for (i = 0; i < 20; i += 2) {
-   x[0]  += x[4];x[12] = rotl32(x[12] ^ x[0],  16);
-   x[1]  += x[5];x[13] = rotl32(x[13] ^ x[1],  16);
-   x[2]  += x[6];x[14] = rotl32(x[14] ^ x[2],  16);
-   x[3]  += x[7];x[15] = rotl32(x[15] ^ x[3],  16);
-
-   x[8]  += x[12];   x[4]  = rotl32(x[4]  ^ x[8],  12);
-   x[9]  += x[13];   x[5]  = rotl32(x[5]  ^ x[9],  12);
-   x[10] += x[14];   x[6]  = rotl32(x[6]  ^ x[10], 12);
-   x[11] += x[15];   x[7]  = rotl32(x[7]  ^ x[11], 12);
-
-   x[0]  += x[4];x[12] = rotl32(x[12] ^ x[0],   8);
-   x[1]  += x[5];x[13] = rotl32(x[13] ^ x[1],   8);
-   x[2]  += x[6];x[14] = rotl32(x[14] ^ x[2],   8);
-   x[3]  += x[7];x[15] = rotl32(x[15] ^ x[3],   8);
-
-   x[8]  += x[12];   x[4]  = rotl32(x[4]  ^ x[8],   7);
-   x[9]  += x[13];   x[5]  = rotl32(x[5]  ^ x[9],   7);
-   x[10] += x[14];   x[6]  = rotl32(x[6]  ^ x[10],  7);
-   x[11] += x[15];   x[7]  = rotl32(x[7]  ^ x[11],  7);
-
-   x[0]  += x[5];x[15] = rotl32(x[15] ^ x[0],  16);
-   x[1]  += x[6];x[12] = rotl32(x[12] ^ x[1],  16);
-   x[2]  += x[7];x[13] = rotl32(x[13] ^ x[2],  16);
-   x[3]  += x[4];x[14] = rotl32(x[14] ^ x[3],  16);
-
-   x[10] += x[15];   x[5]  = rotl32(x[5]  ^ x[10], 12);
-   x[11] += x[12];   x[6]  = rotl32(x[6]  ^ x[11], 12);
-   x[8]  += x[13];   x[7]  = rotl32(x[7]  ^ x[8],  12);
-   x[9]  += x[14];   x[4]  = rotl32(x[4]  ^ x[9],  12);
-
-   x[0]  += x[5];x[15] = rotl32(x[15] ^ x[0],   8);
-   x[1]  += x[6];x[12] = rotl32(x[12] ^ x[1],   8);
-   x[2]  += x[7];x[13] = rotl32(x[13] ^ x[2],   8);
-   x[3]  += x[4];x[14] = rotl32(x[14] ^ x[3],   8);
-
-   x[10] += x[15];   x[5]  = rotl32(x[5]  ^ x[10],  7);
-   x[11] += x[12];   x[6]  = rotl32(x[6]  ^ x[11],  7);
-   x[8]  += x[13];   x[7]  = rotl32(x[7]  ^ x[8],   7);
-   x[9]  += x[14];   x[4]  = rotl32(x[4]  ^ x[9],   7);
-   }
-
-   for (i = 0; i < ARRAY_SIZE(x); i++)
-   out[i] = cpu_to_le32(x[i] + state[i]);
-
-   state[12]++;
-}
-
 static void chacha20_docrypt(u32 *state, u8 *dst, const u8 *src,
 unsigned int bytes)
 {
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 0158d3b..3a4d865 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -261,6 +261,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -413,6 +414,29 @@ static struct fasync_struct *fasync;
 static DEFINE_SPINLOCK(random_ready_list_lock);
 static LIST_HEAD(random_ready_list);
 
+struct crng_state {
+   __u32   state[16];
+   unsigned long   init_time;
+   spinlock_t  lock;
+};
+
+struct crng_state primary_crng = {
+   .lock = __SPIN_LOCK_UNLOCKED(primary_crng.lock),
+};
+
+/*
+ * crng_init =  0 --> Uninitialized
+ *              2 --> Initialized
+ *              3 --> Initialized from input_pool
+ *
+ * crng_init is protected by primary_crng->lock, and only increases
+ * its value (from 0->1->2->3).
+ */
+static int crng_init = 0;
+#define crng_ready() (likely(crng_init >= 2))
+static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE]);
+static void process_random_ready_list(void);
+
 /**
  *
  * OS independent entropy store.   Here are the functions which handle
@@ -442,10 +466,15 @@ struct entropy_store {
__u8 last_data[EXTRACT_SIZE];
 };
 
+static ssize_t extract_entropy(struct entropy_store *r, void *buf,
+  size_t nbytes, int min, int rsvd);

[PATCH 4/5] random: add backtracking protection to the CRNG

2016-05-29 Thread Theodore Ts'o
Signed-off-by: Theodore Ts'o <ty...@mit.edu>
---
 drivers/char/random.c | 52 ++-
 1 file changed, 47 insertions(+), 5 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index bf370a3..860862f 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -436,7 +436,8 @@ static int crng_init = 0;
 #define crng_ready() (likely(crng_init >= 2))
 static void _extract_crng(struct crng_state *crng,
  __u8 out[CHACHA20_BLOCK_SIZE]);
-static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE]);
+static void _crng_backtrack_protect(struct crng_state *crng,
+   __u8 tmp[CHACHA20_BLOCK_SIZE], int used);
 static void process_random_ready_list(void);
 
 /**
@@ -832,8 +833,11 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
	num = extract_entropy(r, &buf, 32, 16, 0);
if (num == 0)
return;
-   } else
+   } else {
	_extract_crng(&primary_crng, buf.block);
+   _crng_backtrack_protect(&primary_crng, buf.block,
+   CHACHA20_KEY_SIZE);
+   }
	spin_lock_irqsave(&primary_crng.lock, flags);
for (i = 0; i < 8; i++) {
unsigned long   rv;
@@ -893,9 +897,44 @@ static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE])
_extract_crng(crng, out);
 }
 
+/*
+ * Use the leftover bytes from the CRNG block output (if there is
+ * enough) to mutate the CRNG key to provide backtracking protection.
+ */
+static void _crng_backtrack_protect(struct crng_state *crng,
+   __u8 tmp[CHACHA20_BLOCK_SIZE], int used)
+{
+   unsigned long   flags;
+   __u32   *s, *d;
+   int i;
+
+   used = round_up(used, sizeof(__u32));
+   if (used + CHACHA20_KEY_SIZE > CHACHA20_BLOCK_SIZE) {
+   extract_crng(tmp);
+   used = 0;
+   }
+   spin_lock_irqsave(&crng->lock, flags);
+   s = (__u32 *) &tmp[used];
+   d = &crng->state[4];
+   for (i=0; i < 8; i++)
+   *d++ ^= *s++;
+   spin_unlock_irqrestore(&crng->lock, flags);
+}
+
+static void crng_backtrack_protect(__u8 tmp[CHACHA20_BLOCK_SIZE], int used)
+{
+#ifdef CONFIG_NUMA
+   struct crng_state *crng = crng_node_pool[numa_node_id()];
+#else
+   struct crng_state *crng = &primary_crng;
+#endif
+
+   _crng_backtrack_protect(crng, tmp, used);
+}
+
 static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
 {
-   ssize_t ret = 0, i;
+   ssize_t ret = 0, i = CHACHA20_BLOCK_SIZE;
__u8 tmp[CHACHA20_BLOCK_SIZE];
int large_request = (nbytes > 256);
 
@@ -920,6 +959,7 @@ static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
buf += i;
ret += i;
}
+   crng_backtrack_protect(tmp, i);
 
/* Wipe data just written to memory */
memzero_explicit(tmp, sizeof(tmp));
@@ -1473,8 +1513,10 @@ void get_random_bytes(void *buf, int nbytes)
if (nbytes > 0) {
extract_crng(tmp);
memcpy(buf, tmp, nbytes);
-   memzero_explicit(tmp, nbytes);
-   }
+   crng_backtrack_protect(tmp, nbytes);
+   } else
+   crng_backtrack_protect(tmp, CHACHA20_BLOCK_SIZE);
+   memzero_explicit(tmp, sizeof(tmp));
 }
 EXPORT_SYMBOL(get_random_bytes);
 
-- 
2.5.0

--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
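For readers following along outside the kernel tree, the rekeying idea in _crng_backtrack_protect() can be modeled as a standalone sketch. The block function below is a toy mixer, not ChaCha20, and the names (toy_block, backtrack_protect, KEY_WORDS, BLOCK_WORDS) are illustrative stand-ins for the kernel's ChaCha20 block function and key state:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define KEY_WORDS   8    /* 256-bit key, cf. CHACHA20_KEY_SIZE     */
#define BLOCK_WORDS 16   /* 512-bit block, cf. CHACHA20_BLOCK_SIZE */

/* Stand-in block function (NOT ChaCha20): deterministically expands
 * key + counter into one output block using a bijective word mixer. */
static void toy_block(const uint32_t key[KEY_WORDS], uint32_t ctr,
                      uint32_t out[BLOCK_WORDS])
{
    for (int i = 0; i < BLOCK_WORDS; i++) {
        uint32_t v = key[i % KEY_WORDS] ^ (ctr + (uint32_t)i) * 0x9e3779b9u;
        v ^= v >> 16;
        v *= 0x7feb352du;
        v ^= v >> 15;
        out[i] = v;
    }
}

/* Mirror of _crng_backtrack_protect(): XOR the unused tail of the
 * last output block into the key.  The old key is destroyed, so a
 * later compromise of the state cannot recompute earlier outputs. */
static void backtrack_protect(uint32_t key[KEY_WORDS],
                              const uint32_t block[BLOCK_WORDS],
                              int used_words)
{
    /* Same check as the kernel: if fewer than KEY_WORDS words are
     * left over, the real code generates a fresh block to rekey from;
     * this toy simply reuses the block from offset 0. */
    if (used_words + KEY_WORDS > BLOCK_WORDS)
        used_words = 0;
    for (int i = 0; i < KEY_WORDS; i++)
        key[i] ^= block[used_words + i];
}
```

A caller that consumed the first 8 words of a 16-word block passes used_words = 8; the remaining 8 words overwrite the key, just as the kernel XORs tmp[used..] into crng->state[4..11].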


Re: AES-NI: slower than aes-generic?

2016-05-29 Thread Theodore Ts'o
On Sun, May 29, 2016 at 09:51:59PM +0200, Stephan Mueller wrote:
> 
> I personally am not sure that taking some arbitrary cipher and turning it
> into a DRNG by simply using a self-feeding loop based on the ideas of
> X9.31 Appendix A2.4 is good. Chacha20 is a good cipher, but is it equally
> good for a DRNG? I do not know. There are too few assessments from
> mathematicians out there regarding that topic.

If ChaCha20 is a good (stream) cipher, it must be a good DRNG by
definition.  In other words, if you could predict the output of a
ChaCha20-based DRNG with any accuracy greater than chance, that
predictor could be used as a wedge to attack the stream cipher itself.
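Spelled out, this is the standard prediction-to-distinguishing reduction (my paraphrase, not part of the original mail, and ignoring the DRNG's reseeding and key-feedback steps, which only add entropy):

```latex
% Sketch: a predictor for the DRNG yields a distinguisher for the cipher.
\begin{enumerate}
  \item Suppose adversary $A$, after seeing DRNG output blocks
        $y_1,\dots,y_{i-1}$, predicts block $y_i$ with probability
        $2^{-n} + \varepsilon$ for non-negligible $\varepsilon$.
  \item Build distinguisher $D$ against ChaCha20: given blocks
        $z_1, z_2, \dots$ that are either keystream under a uniform key
        or truly random, run $A$ on $z_1,\dots,z_{i-1}$ and answer
        ``keystream'' iff $A$'s prediction equals $z_i$.
  \item On truly random input, $A$ matches with probability $2^{-n}$;
        on keystream, with probability $2^{-n}+\varepsilon$.  Hence
        $\mathrm{Adv}(D) \ge \varepsilon$, contradicting the assumption
        that ChaCha20 keystream is pseudorandom.
\end{enumerate}
```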

I will note that OpenBSD's "ARC4" random number generator is currently
using ChaCha20, BTW.

Regards,

- Ted


Re: AES-NI: slower than aes-generic?

2016-05-26 Thread Theodore Ts'o
On Thu, May 26, 2016 at 08:49:39PM +0200, Stephan Mueller wrote:
> 
> Using the kernel crypto API one can relieve the CPU of the crypto work,
> if a hardware or assembler implementation is available. This may be of
> particular interest for smaller systems. So, for smaller systems (where
> kernel bloat is bad, but where these days more and more hardware crypto
> support is added), we must weigh the kernel bloat (of 3 or 4 additional
> C files for the basic kernel crypto API logic) against relieving the
> CPU of work.

There are a number of caveats with using hardware acceleration; one is
that many hardware accelerators are optimized for bulk data encryption,
so key scheduling, or switching between key schedules, can have a higher
overhead than a pure software implementation.

There have also been situations where the hardware crypto engine is
actually slower than a highly optimized software implementation.  This
has been the case for certain ARM SoCs, for example.

This is not that big of a deal if you are developing a cryptographic
application (such as file-system-level encryption, for example) for a
specific hardware platform (such as a specific Nexus device).  But if
you are trying to develop a generic service that has to work on a wide
variety of CPU architectures, and on specific CPU/SoC implementations,
life is a lot more complicated.  I've worked on both problems; let me
assure you the second is way trickier than the first.

> Then, the use of the DRBG lets users choose between Hash, HMAC, and CTR
> implementations to suit their needs. The DRBG code is agnostic of the
> underlying cipher. So, you could even use Blowfish instead of AES or
> Whirlpool instead of SHA -- these changes are just one entry in
> drbg_cores[] away, without any code change.

I really question how much this matters in practice.  Unless you are a
US Government Agency, where you might be laboring under a Federal
mandate to use DUAL-EC DRBG (for example), most users really don't
care about the details of the algorithm used in their random number
generator.  Giving users choice (or lots of knobs) isn't necessarily
always a feature, as the many TLS downgrade attacks have demonstrated.

This is why from my perspective it's more important to implement an
interface which is always there, and which by default is secure, so as
to minimize the chances that random JV-team kernel developers working
for a Linux distribution or some consumer electronics manufacturer will
actually make things worse.  As Debian's attempt to "improve" the
security of OpenSSL demonstrates, it doesn't always end well.  :-)

If we implement something which happens to result in a 2 minute stall
in boot times, the danger is that a clueless engineer at Sony, or LGE,
or Motorola, or BMW, or Toyota, etc, will "fix" the problem without
telling anyone about what they did, and we might not notice right away
that the fix was in fact catastrophically bad.

These aren't the standard things which academics tend to worry about;
they tend to assume that attackers will be able to read arbitrary
kernel memory, and that recovering from such an exposure of the entropy
pool is _the_ most important thing to worry about (as opposed to, say,
the contents of the user's private keys in the ssh-agent process).  But
this will perhaps explain why worrying about accommodating users who
care about whether Blowfish or AES is used in their random number
generator isn't near the top of my personal priority list.

Cheers,

- Ted


