Re: [PATCH 0/6] crypto: x86/chacha20 - SIMD performance improvements

2018-11-20 Thread Jason A. Donenfeld
Hi Martin,

On Tue, Nov 20, 2018 at 5:29 PM Martin Willi  wrote:
> Thanks for the offer, no need at this time. But I certainly would
> welcome if you could do some (Wireguard) benching with that code to see
> if it works for you.

I certainly will test it in a few different network circumstances,
especially since real testing like this is sometimes more telling than
busy-loop benchmarks.

> > Actually, similarly here, a 10nm Cannon Lake machine should be
> > arriving at my house this week, which should make for some
> > interesting testing ground for non-throttled zmm, if you'd like to
> > play with it.
>
> Maybe in a future iteration, thanks. In fact it would be interesting to
> know if Cannon Lake handles that throttling better.

Everything I've read on the Internet seems to indicate that's the
case, so one of the first things I'll be doing is seeing if that's
true. There are also the AVX512 IFMA instructions to play with!

Jason


Re: [PATCH 0/6] crypto: x86/chacha20 - SIMD performance improvements

2018-11-19 Thread Jason A. Donenfeld
Hi Martin,

On Mon, Nov 19, 2018 at 8:52 AM Martin Willi  wrote:
>
> Adding AVX-512VL support is relatively simple. I have a patchset mostly
> ready that is more than competitive with the code from Zinc. I'll clean
> that up and do more testing before posting it later this week.

Terrific. Depending on how it turns out, it'll be nice to try
integrating this into Zinc. I have a massive Xeon Gold 5120 machine
that I can give you access to if you'd like to do some testing and
benching. Poke me on IRC -- I'm zx2c4.

> I don't think that having AVX-512F is that important until it is really
> usable on CPUs in the market.

Actually, similarly here, a 10nm Cannon Lake machine should be
arriving at my house this week, which should make for some interesting
testing ground for non-throttled zmm, if you'd like to play with it.

Jason


Re: [PATCH 0/6] crypto: x86/chacha20 - SIMD performance improvements

2018-11-15 Thread Jason A. Donenfeld
Hi Martin,

This is nice work, and given that it's quite clean -- and that it's
usually hard to screw up ChaCha in subtle ways when test vectors pass
(unlike, say, Poly1305 or Curve25519) -- I'd be inclined to roll with
your implementation if it can eventually become competitive with Andy
Polyakov's, which I'm currently working on for Zinc (which no longer
has pre-generated code, addressing the biggest hurdle; v9 will be sent
shortly). Specifically, I'm not quite sure the improvements here tip
the balance on all AVX2 microarchitectures, and, most importantly,
there are still no AVX-512 paths, which means it's considerably
slower on all newer-generation Intel chips. Andy's has
the AVX-512VL implementation for Skylake (using ymm, so as not to hit
throttling) and AVX-512F for Cannon Lake and beyond (using zmm). I've
attached some measurements below showing how stark the difference is.

The takeaway is that while Andy's implementation is still ahead in
terms of performance today, I'd certainly encourage your efforts to
reach parity with it, and I'd be happy to have that when the performance
and fuzzing time are right. So please do keep chipping away at
it; I think it's a potentially useful effort.

Regards,
Jason

size  old zinc
---- ---- ----
0 64 54
16 386 372
32 388 396
48 388 420
64 366 350
80 708 666
96 708 692
112 706 736
128 692 648
144 1036 682
160 1036 708
176 1036 730
192 1016 658
208 1360 684
224 1362 708
240 1360 732
256 644 500
272 990 526
288 988 556
304 988 576
320 972 500
336 1314 532
352 1316 558
368 1318 578
384 1308 506
400 1644 532
416 1644 556
432 1644 594
448 1624 508
464 1970 534
480 1970 556
496 1968 582
512 660 624
528 1016 682
544 1016 702
560 1018 728
576 998 654
592 1344 680
608 1344 708
624 1344 730
640 1326 654
656 1670 686
672 1670 708
688 1670 732
704 1652 658
720 1998 682
736 1998 710
752 1996 734
768 1256 662
784 1606 688
800 1606 714
816 1606 736
832 1584 660
848 1948 688
864 1950 714
880 1948 736
896 1912 688
912 2258 718
928 2258 744
944 2256 768
960 2238 692
976 2584 718
992 2584 744
1008 2584 770



On Thu, Nov 15, 2018 at 6:21 PM Herbert Xu  wrote:
>
> On Sun, Nov 11, 2018 at 10:36:24AM +0100, Martin Willi wrote:
> > This patchset improves performance of the ChaCha20 SIMD implementations
> > for x86_64. For some specific encryption lengths, performance is more
> > than doubled. Two mechanisms are used to achieve this:
> >
> > * Instead of calculating the minimal number of required blocks for a
> >   given encryption length, functions producing more blocks are used
> >   more aggressively. Calculating a 4-block function can be faster than
> >   calculating a 2-block and a 1-block function, even if only three
> >   blocks are actually required.
> >
> > * In addition to the 8-block AVX2 function, a 4-block and a 2-block
> >   function are introduced.
> >
> > Patches 1-3 add support for partial lengths to the existing 1-, 4- and
> > 8-block functions. Patch 4 makes use of that by engaging the next higher
> > level block functions more aggressively. Patch 5 and 6 add the new AVX2
> > functions for 2 and 4 blocks. Patches are based on cryptodev and would
> > need adjustments to apply on top of the Adiantum patchset.
> >
> > Note that the more aggressive use of larger block functions calculates
> > blocks that may get discarded. This may have a negative impact on energy
> > usage or the processor's thermal budget. However, with the new block
> > functions we can avoid this over-calculation for many lengths, so the
> > performance win can be considered more important.
> >
> > Below are performance numbers measured with tcrypt using additional
> > encryption lengths; numbers are in kOps/s, on my i7-5557U. "old" is the
> > existing implementation, "new" is with this patchset. As a comparison,
> > the numbers for Zinc in v6:
> >
> >  len  old  new zinc
> >8 5908 5818 5818
> >   16 5917 5828 5726
> >   24 5916 5869 5757
> >   32 5920 5789 5813
> >   40 5868 5799 5710
> >   48 5877 5761 5761
> >   56 5869 5797 5742
> >   64 5897 5862 5685
> >   72 3381 4979 3520
> >   80 3364 5541 3475
> >   88 3350 4977 3424
> >   96 3342 5530 3371
> >  104 3328 4923 3313
> >  112 3317 5528 3207
> >  120 3313 4970 3150
> >  128 3492 5535 3568
> >  136 2487 4570 3690
> >  144 2481 5047 3599
> >  152 2473 4565 3566
> >  160 2459 5022 3515
> >  168 2461 4550 3437
> >  176 2454 5020 3325
> >  184 2449 4535 3279
> >  192 2538 5011 3762
> >  200 1962 4537 3702
> >  208 1962 4971 3622
> >  216 1954 4487 3518
> >  224 1949 4936 3445
> >  232 1948 4497 3422
> >  240 1941 4947 3317
> >  248 1940 4481 3279
> >  256 3798 4964 3723
> >  264 2638 3577 3639
> >  272 2637 3567 3597
> >  280 2628 3563 3565
> >  288 2630 3795 3484
> >  296 2621 3580 3422
> >  304 2612 3569 3352
> >  312 2602 3599 3308
> >  320 2694 3821 3694
> >  328 2060 3538 3681
> >  336 2054 3565 3599
> >  344 2054 3553 3523
> >  352 2049 3809 3419
> >  360 2045 3575 3403
> >  368 2035 3560 3334
> >  376 2036 3555 3257
> >  384 2092 3785 3715
> > 
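
To illustrate the dispatch strategy described in the quoted cover letter,
here is a rough sketch; the function names and the length handling are
illustrative assumptions, not the actual patch:

static void chacha20_dosimd(u32 *state, u8 *dst, const u8 *src,
			    unsigned int bytes)
{
	/* More than 4 blocks' worth left: round up to the 8-block
	 * kernel, even when some of its output will be discarded. */
	while (bytes > CHACHA20_BLOCK_SIZE * 4) {
		unsigned int chunk = min_t(unsigned int, bytes,
					   CHACHA20_BLOCK_SIZE * 8);

		chacha20_8block_xor(state, dst, src, chunk);
		state[12] += chunk / CHACHA20_BLOCK_SIZE; /* counter */
		dst += chunk;
		src += chunk;
		bytes -= chunk;
	}
	/* Tail: engage the smallest function that covers the rest. */
	if (bytes > CHACHA20_BLOCK_SIZE * 2)
		chacha20_4block_xor(state, dst, src, bytes);
	else if (bytes > CHACHA20_BLOCK_SIZE)
		chacha20_2block_xor(state, dst, src, bytes);
	else if (bytes)
		chacha20_block_xor(state, dst, src, bytes);
}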

Re: .S_shipped unnecessary?

2018-11-08 Thread Jason A. Donenfeld
Hey Ard,

On Fri, Nov 9, 2018 at 12:42 AM Ard Biesheuvel
 wrote:
> Wonderful! Any problems doing that for x86_64 ?

The x86_64 is still a WIP, but hopefully we'll succeed.

> I agree 100%. When I added this the first time, it was at the request
> of the ARM maintainer, who was reluctant to rely on Perl for some
> reason.
>
> Recently, we have had to add a kludge to prevent spurious rebuilds of
> the .S_shipped files as well.
>
> I'd be perfectly happy to get rid of this entirely, and always
> generate the .S from the .pl, which to me is kind of the point of
> carrying these files in the first place.

Terrific. I'll move ahead in that direction then. It makes things _so_
much cleaner, and doesn't introduce new build modes ("should the
generated .S_shipped go into the build directory or the source directory?
what kind of artifact is it? how to address $(srcdir) vs $(src) in
that context? bla bla") that really overcomplicate things.

Jason


.S_shipped unnecessary?

2018-11-08 Thread Jason A. Donenfeld
Hi Ard, Eric, and others,

As promised, the next Zinc patchset will have less generated code! After a
bit of work with Andy and Samuel, I'll be bundling the perlasm.

One thing I'm wondering about, though, is the wisdom behind the current
.S_shipped pattern. Usually the _shipped is for big firmware blobs that are
hard (or impossible) to build independently. But in this case, the .S is
generated from the .pl significantly faster than gcc even compiles a basic
C file. And, since perl is needed to build the kernel anyway, it's not like
it will be impossible to find the right tools. Rather than clutter up commits
with the .pl _and_ the .S_shipped, what would you think if I just generated
the .S each time as an ordinary build artifact? AFAICT, this is fairly usual,
and it's hard to see downsides. Hence, why I'm writing this email: are there
any downsides to that?

Thanks,
Jason


Re: [PATCH net-next v6 07/23] zinc: ChaCha20 ARM and ARM64 implementations

2018-09-26 Thread Jason A. Donenfeld
On Wed, Sep 26, 2018 at 5:42 PM Ard Biesheuvel
 wrote:
>
> On Wed, 26 Sep 2018 at 16:02, Ard Biesheuvel  
> wrote:
> Actually, looking at the code again, the abstraction does appear to be
> fine, it is just the chacha20 code that does not make use of it.

So what you have in mind is something like calling simd_relax() every
4096 bytes or so?
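
For concreteness, something along these lines, using the simd_context
abstraction from the Zinc series -- the 4096-byte stride and the helper
names besides simd_relax() are assumptions, not settled API:

static void chacha20_simd_crypt(u8 *dst, const u8 *src, size_t len,
				simd_context_t *simd_context)
{
	while (len > 0) {
		const size_t todo = min_t(size_t, len, 4096);

		chacha20_arch(dst, src, todo, simd_context);
		dst += todo;
		src += todo;
		len -= todo;

		/* Give the scheduler a chance every 4096 bytes, so big
		 * requests don't hog the CPU with SIMD (and hence
		 * preemption) disabled. */
		simd_relax(simd_context);
	}
}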


Re: [PATCH v2 0/5] crypto: Speck support

2018-04-24 Thread Jason A. Donenfeld
Hi Eric,

On Tue, Apr 24, 2018 at 8:16 PM, Eric Biggers  wrote:
> So, what do you propose replacing it with?

Something more cryptographically justifiable.

> outside crypto review, vs. the many cryptanalysis papers on Speck.  (In that
> respect the controversy about Speck has actually become an advantage, as it
> has received much more cryptanalysis than other lightweight block ciphers.)

That's the thing that worries me, actually. Many of the design
decisions behind Speck haven't been justified.

> The reason we chose Speck had nothing to do with the proposed ISO standard or
> any sociopolitical factors, but rather because it was the only algorithm we
> could find that met the performance and security requirements.

> Note that Linux
> doesn't bow down to any particular standards organization, and it offers
> algorithms that were specified in various places, even some with no more than 
> a
> publication by the author.  In fact, support for SM4 was just added too, which
> is a Chinese government standard.  Are you going to send a patch to remove 
> that
> too, or is it just NSA designed algorithms that are not okay?

No need to be belittling; I have much less tinfoil strapped around my
head than perhaps you think. I'm not blindly opposed to
government-designed algorithms. Take SHA2, for example -- built by the
NSA.

But I do care quite a bit about using ciphers that have the acceptance
of the academic community and a large body of literature documenting
their design decisions and analyzing them. Some of the best symmetric
cryptographers in academia have expressed reservations about it, and
it was just rejected from a major standards body. Linux, of course,
is free to disagree -- or "bow down" as you oddly put it -- but I'd
make sure you've got a pretty large bucket of justifications for that
disagreement.

> (in fact, you'd
> probably have a different opinion of it if the authors had simply worked
> somewhere else and published the exact same algorithm);

Again, no need to patronize. I don't actually have a bias like that.

> But I hope you can understand that all *technical* indicators are that
> Speck is secure enough

That's the thing I'm worried about.

Jason


Re: [PATCH v2 0/5] crypto: Speck support

2018-04-24 Thread Jason A. Donenfeld
Can we please not Speck?

It was just rejected by the ISO/IEC.

https://twitter.com/TomerAshur/status/988659711091228673


[PATCH] crypto: remove Speck

2018-04-24 Thread Jason A. Donenfeld
This NSA-designed cipher was rejected for inclusion in international
standards by ISO/IEC. Before anyone actually starts using it by
accident, let's just not ship it at all.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
---
 arch/arm/crypto/Kconfig             |    6 -
 arch/arm/crypto/Makefile            |    2 -
 arch/arm/crypto/speck-neon-core.S   |  432 ----------
 arch/arm/crypto/speck-neon-glue.c   |  288 --------
 arch/arm64/crypto/Kconfig           |    6 -
 arch/arm64/crypto/Makefile          |    3 -
 arch/arm64/crypto/speck-neon-core.S |  352 ----------
 arch/arm64/crypto/speck-neon-glue.c |  282 --------
 arch/s390/defconfig                 |    1 -
 crypto/Kconfig                      |   14 -
 crypto/Makefile                     |    1 -
 crypto/speck.c                      |  307 ---------
 crypto/testmgr.c                    |   36 -
 crypto/testmgr.h                    | 1486 -----------------
 include/crypto/speck.h              |   62 --
 15 files changed, 3278 deletions(-)
 delete mode 100644 arch/arm/crypto/speck-neon-core.S
 delete mode 100644 arch/arm/crypto/speck-neon-glue.c
 delete mode 100644 arch/arm64/crypto/speck-neon-core.S
 delete mode 100644 arch/arm64/crypto/speck-neon-glue.c
 delete mode 100644 crypto/speck.c
 delete mode 100644 include/crypto/speck.h

diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
index 925d1364727a..b8e69fe282b8 100644
--- a/arch/arm/crypto/Kconfig
+++ b/arch/arm/crypto/Kconfig
@@ -121,10 +121,4 @@ config CRYPTO_CHACHA20_NEON
select CRYPTO_BLKCIPHER
select CRYPTO_CHACHA20
 
-config CRYPTO_SPECK_NEON
-   tristate "NEON accelerated Speck cipher algorithms"
-   depends on KERNEL_MODE_NEON
-   select CRYPTO_BLKCIPHER
-   select CRYPTO_SPECK
-
 endif
diff --git a/arch/arm/crypto/Makefile b/arch/arm/crypto/Makefile
index 8de542c48ade..bd5bceef0605 100644
--- a/arch/arm/crypto/Makefile
+++ b/arch/arm/crypto/Makefile
@@ -10,7 +10,6 @@ obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o
 obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o
 obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o
 obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o
-obj-$(CONFIG_CRYPTO_SPECK_NEON) += speck-neon.o
 
 ce-obj-$(CONFIG_CRYPTO_AES_ARM_CE) += aes-arm-ce.o
 ce-obj-$(CONFIG_CRYPTO_SHA1_ARM_CE) += sha1-arm-ce.o
@@ -54,7 +53,6 @@ ghash-arm-ce-y:= ghash-ce-core.o ghash-ce-glue.o
 crct10dif-arm-ce-y := crct10dif-ce-core.o crct10dif-ce-glue.o
 crc32-arm-ce-y:= crc32-ce-core.o crc32-ce-glue.o
 chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o
-speck-neon-y := speck-neon-core.o speck-neon-glue.o
 
 ifdef REGENERATE_ARM_CRYPTO
 quiet_cmd_perl = PERL$@
diff --git a/arch/arm/crypto/speck-neon-core.S b/arch/arm/crypto/speck-neon-core.S
deleted file mode 100644
index 3c1e203e53b9..000000000000
--- a/arch/arm/crypto/speck-neon-core.S
+++ /dev/null
@@ -1,432 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
- *
- * Copyright (c) 2018 Google, Inc
- *
- * Author: Eric Biggers <ebigg...@google.com>
- */
-
-#include <linux/linkage.h>
-
-   .text
-   .fpu    neon
-
-   // arguments
-   ROUND_KEYS  .req    r0  // const {u64,u32} *round_keys
-   NROUNDS .req    r1  // int nrounds
-   DST .req    r2  // void *dst
-   SRC .req    r3  // const void *src
-   NBYTES  .req    r4  // unsigned int nbytes
-   TWEAK   .req    r5  // void *tweak
-
-   // registers which hold the data being encrypted/decrypted
-   X0  .req    q0
-   X0_L    .req    d0
-   X0_H    .req    d1
-   Y0  .req    q1
-   Y0_H    .req    d3
-   X1  .req    q2
-   X1_L    .req    d4
-   X1_H    .req    d5
-   Y1  .req    q3
-   Y1_H    .req    d7
-   X2  .req    q4
-   X2_L    .req    d8
-   X2_H    .req    d9
-   Y2  .req    q5
-   Y2_H    .req    d11
-   X3  .req    q6
-   X3_L    .req    d12
-   X3_H    .req    d13
-   Y3  .req    q7
-   Y3_H    .req    d15
-
-   // the round key, duplicated in all lanes
-   ROUND_KEY   .req    q8
-   ROUND_KEY_L .req    d16
-   ROUND_KEY_H .req    d17
-
-   // index vector for vtbl-based 8-bit rotates
-   ROTATE_TABLE    .req    d18
-
-   // multiplication table for updating XTS tweaks
-   GF128MUL_TABLE  .req    d19
-   GF64MUL_TABLE   .req    d19
-
-   // current XTS tweak value(s)
-   TWEAKV  .req    q10
-   TWEAKV_L    .req    d20
-   TWEAKV_H    .req    d21
-
-   TMP0    .req    q12
-   TMP0_L  .req    d24
-   TMP0_H  .req    d25
-   TMP1 

Re: [PATCH] fscrypt: add support for ChaCha20 contents encryption

2017-12-07 Thread Jason A. Donenfeld
Hi Eric,

Nice to see more use of ChaCha20. However...

Can we skip over the "sort of worse than XTS, but not having _real_
authentication sucks anyway in either case, so whatever" and move
directly to "Linux finally supports authenticated encryption for disk
encryption!"? This would be a big deal and would actually be a
noticeable security improvement, instead of a potentially dubious step
sideways-backish.

Bcachefs supports ChaCha20-Poly1305, which is pretty neat. From what
I've read, performance is acceptable too.
http://bcachefs.org/Encryption/

Jason


Re: [PATCH 1/2] random: always call random ready function

2017-10-19 Thread Jason A. Donenfeld
Good tips, thanks. I'll wait for more comments before resubmitting v2,
but in-progress changes live here:
https://git.zx2c4.com/linux-dev/log/?h=jd/cleaner-add-random-ready


[PATCH 2/2] crypto/drbg: account for no longer returning -EALREADY

2017-10-19 Thread Jason A. Donenfeld
We now structure things in a way that assumes the seeding function is
always eventually called.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
---
 crypto/drbg.c | 20 +---
 1 file changed, 5 insertions(+), 15 deletions(-)

diff --git a/crypto/drbg.c b/crypto/drbg.c
index 70018397e59a..501e013cb96c 100644
--- a/crypto/drbg.c
+++ b/crypto/drbg.c
@@ -1411,18 +1411,8 @@ static int drbg_prepare_hrng(struct drbg_state *drbg)
 
	err = add_random_ready_callback(&drbg->random_ready);
 
-   switch (err) {
-   case 0:
-   break;
-
-   case -EALREADY:
-   err = 0;
-   /* fall through */
-
-   default:
-   drbg->random_ready.func = NULL;
+   if (err)
return err;
-   }
 
drbg->jent = crypto_alloc_rng("jitterentropy_rng", 0, 0);
 
@@ -1432,7 +1422,7 @@ static int drbg_prepare_hrng(struct drbg_state *drbg)
 */
drbg->reseed_threshold = 50;
 
-   return err;
+   return 0;
 }
 
 /*
@@ -1526,9 +1516,9 @@ static int drbg_instantiate(struct drbg_state *drbg, 
struct drbg_string *pers,
  */
 static int drbg_uninstantiate(struct drbg_state *drbg)
 {
-   if (drbg->random_ready.func) {
-   del_random_ready_callback(&drbg->random_ready);
-   cancel_work_sync(&drbg->seed_work);
+   del_random_ready_callback(&drbg->random_ready);
+   cancel_work_sync(&drbg->seed_work);
+   if (drbg->jent) {
crypto_free_rng(drbg->jent);
drbg->jent = NULL;
}
-- 
2.14.2



Re: [PATCH 1/2] random: always call random ready function

2017-10-19 Thread Jason A. Donenfeld
These are mostly untested, but I wanted to spec it out for a taste of
what it looks like.


[PATCH 1/2] random: always call random ready function

2017-10-19 Thread Jason A. Donenfeld
As this interface becomes more heavily used, it will be painful for
callers to always need to check for -EALREADY.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
---
 drivers/char/random.c | 24 
 1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 8ad92707e45f..e161acf35d4a 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1556,40 +1556,45 @@ EXPORT_SYMBOL(wait_for_random_bytes);
 
 /*
  * Add a callback function that will be invoked when the nonblocking
- * pool is initialised.
+ * pool is initialised, or calls that function if it already is.
  *
  * returns: 0 if callback is successfully added
- * -EALREADY if pool is already initialised (callback not called)
  * -ENOENT if module for callback is not alive
  */
 int add_random_ready_callback(struct random_ready_callback *rdy)
 {
struct module *owner;
unsigned long flags;
-   int err = -EALREADY;
 
-   if (crng_ready())
-   return err;
+   BUG_ON(!rdy->func);
+
+   if (crng_ready()) {
+   rdy->func(rdy);
+   rdy->func = NULL;
+   return 0;
+   }
 
owner = rdy->owner;
if (!try_module_get(owner))
return -ENOENT;
 
	spin_lock_irqsave(&random_ready_list_lock, flags);
-   if (crng_ready())
+   if (crng_ready()) {
+   rdy->func(rdy);
+   rdy->func = NULL;
goto out;
+   }
 
owner = NULL;
 
	list_add(&rdy->list, &random_ready_list);
-   err = 0;
 
 out:
	spin_unlock_irqrestore(&random_ready_list_lock, flags);
 
module_put(owner);
 
-   return err;
+   return 0;
 }
 EXPORT_SYMBOL(add_random_ready_callback);
 
@@ -1601,6 +1606,9 @@ void del_random_ready_callback(struct random_ready_callback *rdy)
unsigned long flags;
struct module *owner = NULL;
 
+   if (!rdy->func)
+   return;
+
	spin_lock_irqsave(&random_ready_list_lock, flags);
	if (!list_empty(&rdy->list)) {
		list_del_init(&rdy->list);
-- 
2.14.2
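
For illustration, a hypothetical caller under the new contract (a sketch
only, using the random_ready_callback layout from drivers/char/random.c
of this era):

static void my_rng_ready(struct random_ready_callback *rdy)
{
	/* Invoked exactly once, as soon as the CRNG is seeded --
	 * possibly synchronously from add_random_ready_callback(). */
	pr_info("rng ready, generating long-term keys\n");
}

static struct random_ready_callback my_rdy = {
	.owner = THIS_MODULE,
	.func  = my_rng_ready,
};

static int __init my_module_init(void)
{
	/* 0 on success (callback registered or already run);
	 * -ENOENT only if this module is on its way out. No more
	 * -EALREADY special case for the caller to remember. */
	return add_random_ready_callback(&my_rdy);
}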



Re: [PATCH] chacha20-ssse3/avx2: satisfy stack validation 2.0

2017-10-10 Thread Jason A. Donenfeld
Hi Herbert,

Can we get this reviewed for rc5 please?

Thanks,
Jason


Re: [PATCH] chacha20-ssse3/avx2: satisfy stack validation 2.0

2017-10-08 Thread Jason A. Donenfeld
s/objdump/objtool/g obviously.


[PATCH] chacha20-ssse3/avx2: satisfy stack validation 2.0

2017-10-08 Thread Jason A. Donenfeld
The new stack validator in objdump doesn't like directly assigning r11
to rsp, warning with something like:

warning: objtool: chacha20_4block_xor_ssse3()+0xa: unsupported stack pointer realignment
warning: objtool: chacha20_8block_xor_avx2()+0x6: unsupported stack pointer realignment

This fixes things up to use code similar to gcc's DRAP register, so that
objdump remains happy.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Fixes: baa41469a7b9 ("objtool: Implement stack validation 2.0")
---
 arch/x86/crypto/chacha20-avx2-x86_64.S  | 4 ++--
 arch/x86/crypto/chacha20-ssse3-x86_64.S | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/crypto/chacha20-avx2-x86_64.S b/arch/x86/crypto/chacha20-avx2-x86_64.S
index 3a2dc3dc6cac..f3cd26f48332 100644
--- a/arch/x86/crypto/chacha20-avx2-x86_64.S
+++ b/arch/x86/crypto/chacha20-avx2-x86_64.S
@@ -45,7 +45,7 @@ ENTRY(chacha20_8block_xor_avx2)
 
vzeroupper
# 4 * 32 byte stack, 32-byte aligned
-   mov %rsp, %r8
+   lea 8(%rsp),%r10
and $~31, %rsp
sub $0x80, %rsp
 
@@ -443,6 +443,6 @@ ENTRY(chacha20_8block_xor_avx2)
vmovdqu %ymm15,0x01e0(%rsi)
 
vzeroupper
-   mov %r8,%rsp
+   lea -8(%r10),%rsp
ret
 ENDPROC(chacha20_8block_xor_avx2)
diff --git a/arch/x86/crypto/chacha20-ssse3-x86_64.S b/arch/x86/crypto/chacha20-ssse3-x86_64.S
index 3f511a7d73b8..512a2b500fd1 100644
--- a/arch/x86/crypto/chacha20-ssse3-x86_64.S
+++ b/arch/x86/crypto/chacha20-ssse3-x86_64.S
@@ -160,7 +160,7 @@ ENTRY(chacha20_4block_xor_ssse3)
# done with the slightly better performing SSSE3 byte shuffling,
# 7/12-bit word rotation uses traditional shift+OR.
 
-   mov %rsp,%r11
+   lea 8(%rsp),%r10
sub $0x80,%rsp
and $~63,%rsp
 
@@ -625,6 +625,6 @@ ENTRY(chacha20_4block_xor_ssse3)
pxor%xmm1,%xmm15
movdqu  %xmm15,0xf0(%rsi)
 
-   mov %r11,%rsp
+   lea -8(%r10),%rsp
ret
 ENDPROC(chacha20_4block_xor_ssse3)
-- 
2.14.2



[PATCH] crypto/rng: ensure that the RNG is ready before using

2017-07-16 Thread Jason A. Donenfeld
Otherwise, we might be seeding the RNG using bad randomness, which is
dangerous. The one use of this function from within the kernel -- not
from userspace -- is being removed (keys/big_key), so that call site
isn't relevant in assessing this.

Cc: Herbert Xu <herb...@gondor.apana.org.au>
Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
---
 crypto/rng.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/crypto/rng.c b/crypto/rng.c
index 5e8469244960..b4a618668161 100644
--- a/crypto/rng.c
+++ b/crypto/rng.c
@@ -43,12 +43,14 @@ int crypto_rng_reset(struct crypto_rng *tfm, const u8 *seed, unsigned int slen)
if (!buf)
return -ENOMEM;
 
-   get_random_bytes(buf, slen);
+   err = get_random_bytes_wait(buf, slen);
+   if (err)
+   goto out;
seed = buf;
}
 
err = crypto_rng_alg(tfm)->seed(tfm, seed, slen);
-
+out:
kzfree(buf);
return err;
 }
-- 
2.13.3



Re: [kernel-hardening] [PATCH] random: warn when kernel uses unseeded randomness

2017-06-21 Thread Jason A. Donenfeld
Hi Ted,

On Wed, Jun 21, 2017 at 10:38 PM, Theodore Ts'o <ty...@mit.edu> wrote:
> I agree completely with all of this.  The following patch replaces the
> current topmost patch on the random.git tree:
> For developers who want to work on improving this situation,
> CONFIG_WARN_UNSEEDED_RANDOM has been renamed to
> CONFIG_WARN_ALL_UNSEEDED_RANDOM.  By default the kernel will always
> print the first use of unseeded randomness.  This way, hopefully the
> security obsessed will be happy that there is _some_ indication when
> the kernel boots there may be a potential issue with that architecture
> or subarchitecture.  To see all uses of unseeded randomness,
> developers can enable CONFIG_WARN_ALL_UNSEEDED_RANDOM.

Seems fine to me.

Acked-by: Jason A. Donenfeld <ja...@zx2c4.com>

Jason


[PATCH] random: warn when kernel uses unseeded randomness

2017-06-20 Thread Jason A. Donenfeld
This enables an important dmesg notification about when drivers have
used the crng without it being seeded first. Previously, these errors
occurred silently, and so there hasn't been a great way of diagnosing
these types of bugs for obscure setups. By adding this as a config
option, we can leave it on by default, so that we learn where these
issues happen, in the field, while still allowing some people to turn
it off, if they really know what they're doing and do not want the log
entries.

However, we don't leave it on _completely_ by default. An earlier
version of this patch simply had `default y`. I'd really love that, but
it turns out this problem with unseeded randomness being used is really
quite present and is going to take a long time to fix. Thus, as a
compromise between log-messages-for-all and nobody-knows, this is
`default y`, except it is also `depends on DEBUG_KERNEL`. This will
ensure that the curious see the messages while others don't have to.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Signed-off-by: Theodore Ts'o <ty...@mit.edu>
---
Hi Ted,

This patch is meant to replace d06bfd1989fe97623b32d6df4ffa6e4338c99dc8,
which is currently in your dev tree. It switches from using `default y`
and `depends on DEBUG_KERNEL` to the simpler `default DEBUG_KERNEL`.
This kind of change, I think, should satisfy most potential objections,
by being present for those who might find it useful, but invisible for
those who don't want the spam.

If you'd like to replace the earlier commit with this one, feel free. If
not, that's fine too.

Jason

 drivers/char/random.c | 15 +--
 lib/Kconfig.debug | 15 +++
 2 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 3853dd4f92e7..fa5bbd5a7ca0 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -288,7 +288,6 @@
 #define SEC_XFER_SIZE  512
 #define EXTRACT_SIZE   10
 
-#define DEBUG_RANDOM_BOOT 0
 
 #define LONGS(x) (((x) + sizeof(unsigned long) - 1)/sizeof(unsigned long))
 
@@ -1481,7 +1480,7 @@ void get_random_bytes(void *buf, int nbytes)
 {
__u8 tmp[CHACHA20_BLOCK_SIZE];
 
-#if DEBUG_RANDOM_BOOT > 0
+#ifdef CONFIG_WARN_UNSEEDED_RANDOM
if (!crng_ready())
printk(KERN_NOTICE "random: %pF get_random_bytes called "
   "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
@@ -2075,6 +2074,12 @@ u64 get_random_u64(void)
return ret;
 #endif
 
+#ifdef CONFIG_WARN_UNSEEDED_RANDOM
+   if (!crng_ready())
+   printk(KERN_NOTICE "random: %pF get_random_u64 called "
+  "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
+#endif
+
	batch = &get_cpu_var(batched_entropy_u64);
	if (use_lock)
		read_lock_irqsave(&batched_entropy_reset_lock, flags);
@@ -2101,6 +2106,12 @@ u32 get_random_u32(void)
	if (arch_get_random_int(&ret))
return ret;
 
+#ifdef CONFIG_WARN_UNSEEDED_RANDOM
+   if (!crng_ready())
+   printk(KERN_NOTICE "random: %pF get_random_u32 called "
+  "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
+#endif
+
	batch = &get_cpu_var(batched_entropy_u32);
	if (use_lock)
		read_lock_irqsave(&batched_entropy_reset_lock, flags);
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index e4587ebe52c7..41cf12288369 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1209,6 +1209,21 @@ config STACKTRACE
  It is also used by various kernel debugging features that require
  stack trace generation.
 
+config WARN_UNSEEDED_RANDOM
+   bool "Warn when kernel uses unseeded randomness"
+   default DEBUG_KERNEL
+   help
+ Some parts of the kernel contain bugs relating to their use of
+ cryptographically secure random numbers before it's actually possible
+ to generate those numbers securely. This setting ensures that these
+ flaws don't go unnoticed, by enabling a message, should this ever
+ occur. This will allow people with obscure setups to know when things
+ are going wrong, so that they might contact developers about fixing
+ it.
+
+ Say Y here, unless you simply do not care about using unseeded
+ randomness and do not want a potential warning message in your logs.
+
 config DEBUG_KOBJECT
bool "kobject debugging"
depends on DEBUG_KERNEL
-- 
2.13.1



Re: [PATCH] random: silence compiler warnings and fix race

2017-06-20 Thread Jason A. Donenfeld
On Wed, Jun 21, 2017 at 1:38 AM, Theodore Ts'o  wrote:
> The punch was in response to this statement, which I personally found
> fairly infuriating:
>
>>> I more or less agree with you that we should just turn this on for all
>>> users and they'll just have to live with the spam and report odd
>>> entries, and over time we'll fix all the violations.

Holy cow, please cool it. I think the "or less" part was relevant, as
was the subsequent sentence, which characterized that sentiment as
"hubris" -- as was the subsequent email, where I made explicit the fact
that I was more in agreement with you than you had interpreted. So, in
case you're really really really not getting the message I'm trying to
make so explicitly clear to you: I AGREE WITH YOU. So, no more
punching, pretty please?

>
> There seems to be a fundamental misapprehension that it will be easy
> to "fix all the violations".

I don't think it will be easy. I agree with you.

> But for other
> platforms, it really, REALLY isn't that easy to fix.

Other platforms will be hard. I agree with you.

> So personally, I think

I'm going to roll Kees' suggestion into a PATCH and send it to you.
You can decide if you want to apply it. I'll be satisfied with
whatever you choose and will follow your lead.

Jason


Re: [kernel-hardening] Re: [PATCH] random: silence compiler warnings and fix race

2017-06-20 Thread Jason A. Donenfeld
On Tue, Jun 20, 2017 at 8:14 PM, Kees Cook  wrote:
> How about doing this:
>
>default DEBUG_KERNEL
>
> Most distro kernels select DEBUG_KERNEL because it unhides a bunch of
> other useful configs. Since it doesn't strictly _depend_ on
> DEBUG_KERNEL, I think it's probably a mistake to enforce a false
> dependency. Using it as a hint for the default seems maybe like a good
> middle ground. (And if people can't agree on that, then I guess
> "default n"...)

I didn't know you could do that with Kconfig. Great idea. I'll make
this change and submit a new patch to Ted.

Jason


Re: [PATCH] random: silence compiler warnings and fix race

2017-06-20 Thread Jason A. Donenfeld
On Tue, Jun 20, 2017 at 11:36 AM, Theodore Ts'o  wrote:
>> But I think there's another camp that would mutiny in the face of this
>> kind of hubris.
>
> Blocking the boot for hours and hours until we have enough entropy to
> initialize the CRNG is ***not*** an acceptable way of making the
> warning messages go away.  Do that and the users **will** mutiny.
>
> It's this sort of attitude which is why Linus has in the past said
> that security people are sometimes insane

Uh, talk about a totally unnecessary punch... In case my last email
wasn't clear, I fully recognize that `default y` is a tad too extreme,
which is why from one of the earliest revisions in this series, I
moved directly to the compromise solution (`depends DEBUG_KERNEL`)
without even waiting for people to complain first.


Re: [PATCH] random: silence compiler warnings and fix race

2017-06-20 Thread Jason A. Donenfeld
On Tue, Jun 20, 2017 at 10:33 AM, Jeffrey Walton  wrote:
> I think it is a bad idea to suppress all messages from a security
> engineering point of view.
>
> Many folks don't run debug kernels. Most of the users who want or need
> to know of the issues won't realize its happening. Consider, the
> reason we learned of systemd's problems was due to dmesg output.
>
> Suppressing all messages for all configurations casts a wider net than
> necessary. Configurations that could potentially be detected and fixed
> likely will go unnoticed. If the problem is not brought to light, then
> it won't be fixed.

I more or less agree with you that we should just turn this on for all
users and they'll just have to live with the spam and report odd
entries, and over time we'll fix all the violations.

But I think there's another camp that would mutiny in the face of this
kind of hubris.

That's why I moved pretty readily toward the compromise position of
default y, but depends on DEBUG_KERNEL. My hope was that it'd, to an
extent, satisfy both camps, and also disappoint both camps in equal
measure.


Re: [PATCH] random: silence compiler warnings and fix race

2017-06-20 Thread Jason A. Donenfeld
Hey Ted,

On Tue, Jun 20, 2017 at 02:03:44AM -0400, Theodore Ts'o wrote:
> I actually had set up an earlier version of your patch on Saturday
> while I was in Beijing.  (Like Linus, I'm attending the LinuxCon China
> conference Monday and Tuesday.)  I had even created the signed tag.
> I've since respun the commit to reflect your newer patch (see the
> random_for_linus_stable tag on random.git) and rebased the dev branch
> on top of that.  Please take a look and comment.

So it looks like you've gone with 4a072c71f49. If that looks good
(moving the lock, etc) to you, then great, we're done. If there are
still locking objections (are there?), then we'll need to revisit.

> but I didn't send the pull request to Linus because I was waiting to
> see about how discussions over the locking strategy and the spammy log
> messages on PowerPC was going to get resolved.
> The other open issue I want to resolve before sending a pull request
> this week is whether we want to change the default for
> CONFIG_WARN_UNSEEDED_RANDOM so that the answer is 'n'.

In v1 of this patch, many moons ago, it was just vanilla `default y`,
but due to the spamminess, I thought folks would revolt. So I made a
change:

Specifically, I added `depends on DEBUG_KERNEL`. This means that these
useful warnings will only poke other kernel developers. This is probably
exactly what we want. If the various associated developers see a warning
coming from their particular subsystem, they'll be more motivated to
fix it. Ordinary users on distribution kernels shouldn't see the
warnings or the spam at all, since typically users aren't using
DEBUG_KERNEL.

Then, to make things _even less_ annoying to kernel developers, you
added a nice patch on top to squelch repeated messages.

So, I still think this current strategy is a good one, of default y, but
depends on DEBUG_KERNEL.

Regards,
Jason


Re: [PATCH] random: silence compiler warnings and fix race

2017-06-19 Thread Jason A. Donenfeld
Hello Ted,

With rc6 already released and rc7 coming up, I'd really appreciate you
stepping in here and either ACKing the above commit, or giving your
two cents about it in case I need to roll something different.

Thanks,
Jason

On Thu, Jun 15, 2017 at 12:45 AM, Jason A. Donenfeld <ja...@zx2c4.com> wrote:
> Odd versions of gcc for the sh4 architecture will actually warn about
> flags being used while uninitialized, so we set them to zero. Non-crazy
> gccs will optimize that out again, so it doesn't make a difference.
>
> Next, over-aggressive gccs could inline the expression that defines
> use_lock, which could then introduce a race resulting in a lock
> imbalance. By using READ_ONCE, we prevent that fate. We also make
> that assignment const, so that gcc can still optimize a nice amount.
>
> Finally, we fix a potential deadlock between primary_crng.lock and
> batched_entropy_reset_lock, where they could be called in opposite
> order. Moving the call to invalidate_batched_entropy to outside the lock
> rectifies this issue.
>
> Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
> ---
> Ted -- the first part of this is the fixup patch we discussed earlier.
> Then I added on top a fix for a potentially related race.
>
> I'm not totally convinced that moving this block to outside the spinlock
> is 100% okay, so please give this a close look before merging.
>
>
>  drivers/char/random.c | 12 ++--
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/char/random.c b/drivers/char/random.c
> index e870f329db88..01a260f67437 100644
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -803,13 +803,13 @@ static int crng_fast_load(const char *cp, size_t len)
> p[crng_init_cnt % CHACHA20_KEY_SIZE] ^= *cp;
> cp++; crng_init_cnt++; len--;
> }
> +   spin_unlock_irqrestore(&primary_crng.lock, flags);
> if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
> invalidate_batched_entropy();
> crng_init = 1;
> wake_up_interruptible(&crng_init_wait);
> pr_notice("random: fast init done\n");
> }
> -   spin_unlock_irqrestore(&primary_crng.lock, flags);
> return 1;
>  }
>
> @@ -841,6 +841,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
> }
> memzero_explicit(&buf, sizeof(buf));
> crng->init_time = jiffies;
> +   spin_unlock_irqrestore(&primary_crng.lock, flags);
> if (crng == &primary_crng && crng_init < 2) {
> invalidate_batched_entropy();
> crng_init = 2;
> @@ -848,7 +849,6 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
> wake_up_interruptible(&crng_init_wait);
> pr_notice("random: crng init done\n");
> }
> -   spin_unlock_irqrestore(&primary_crng.lock, flags);
>  }
>
>  static inline void crng_wait_ready(void)
> @@ -2041,8 +2041,8 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
>  u64 get_random_u64(void)
>  {
> u64 ret;
> -   bool use_lock = crng_init < 2;
> -   unsigned long flags;
> +   bool use_lock = READ_ONCE(crng_init) < 2;
> +   unsigned long flags = 0;
> struct batched_entropy *batch;
>
>  #if BITS_PER_LONG == 64
> @@ -2073,8 +2073,8 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
>  u32 get_random_u32(void)
>  {
> u32 ret;
> -   bool use_lock = crng_init < 2;
> -   unsigned long flags;
> +   bool use_lock = READ_ONCE(crng_init) < 2;
> +   unsigned long flags = 0;
> struct batched_entropy *batch;
>
> if (arch_get_random_int(&ret))
> --
> 2.13.1
>


Re: [PATCH] random: silence compiler warnings and fix race

2017-06-19 Thread Jason A. Donenfeld
On Mon, Jun 19, 2017 at 9:45 AM, Sebastian Andrzej Siewior
 wrote:
> ehm. You sure? I simply delayed the lock-dropping until _after_ the state
> variable had been modified. So it was basically what your patch did,
> except it was unlocked later…

Yes, I'm sure. You moved the call to invalidate_batched_entropy() to
be after the assignment of crng_init. However, the call to
invalidate_batched_entropy() must be made _before_ the assignment of
crng_init.

>> > Are you sure about that? I am not sure that the gcc will inline "crng_init"
>> > read twice. It is not a local variable. READ_ONCE() is usually used
>> > where gcc could cache a memory access but you do not want this. But hey!
>> > If someone knows better I am here to learn.
>>
>> The whole purpose is that I _want_ it to cache the memory access so
>> that it is _not_ inlined. So, based on your understanding, it does
>> exactly what I intended it to do. The reason is that I'd like to avoid
>> a lock imbalance, which could happen if the read is inlined.
>
> So it was good as it was which means you can drop that READ_ONCE().

Except READ_ONCE ensures that the compiler will never inline it, so it
actually needs to stay.
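
To make the feared imbalance concrete, a distilled sketch (not the
kernel code verbatim):

	/* Racy: if the compiler emits two loads of the global, another
	 * CPU can bump crng_init between them. */
	if (crng_init < 2)		/* load #1 sees 1: takes lock */
		spin_lock_irqsave(&batched_entropy_reset_lock, flags);
	/* ... crng_init becomes 2 here ... */
	if (crng_init < 2)		/* load #2 sees 2: skips unlock */
		spin_unlock_irqrestore(&batched_entropy_reset_lock, flags);

	/* Safe: one load, one decision, reused for lock and unlock. */
	const bool use_lock = READ_ONCE(crng_init) < 2;
	if (use_lock)
		spin_lock_irqsave(&batched_entropy_reset_lock, flags);
	/* ... */
	if (use_lock)
		spin_unlock_irqrestore(&batched_entropy_reset_lock, flags);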


Re: [kernel-hardening] Re: [PATCH v4 13/13] random: warn when kernel uses unseeded randomness

2017-06-18 Thread Jason A. Donenfeld
On Sun, Jun 18, 2017 at 7:55 PM, Stephan Müller  wrote:
> But you bring up an interesting point: if it is true, as you say, that it is
> hard for people to use the different types of entropy and random number APIs
> correctly (which I would concur with), and considering that you imply that
> get_random_bytes, get_random_u32 and get_random_u64 have the same security
> strength, why do we have these three APIs to begin with? The get_random_bytes
> API would then be more than enough.

Because there are efficiences we can benefit from for getting integer
sized outputs.

Use get_random_{u32,u64} when you want a secure random number.
Use get_random_bytes when you want a longer secure random bytestring.


Re: [kernel-hardening] Re: [PATCH v4 13/13] random: warn when kernel uses unseeded randomness

2017-06-18 Thread Jason A. Donenfeld
On Sun, Jun 18, 2017 at 5:46 PM, Theodore Ts'o  wrote:
> You are effectively proposing that there ought to be a middle range of
> security between prandom_32, get_random_u32/get_random_u64 and
> get_random_bytes().  I think that's going to lead to all sorts of
> complexity and bugs from people not understanding when they should use
> get_random_u32 vs get_random_bytes versus prandom_u32.  And then we'll
> end up needing to audit all of the callsites for get_random_u32() so
> they don't violate this new usage rule that you are proposing.

I agree with you wholeheartedly.

get_random_* provides the secure random numbers.
prandom_* provides the insecure random numbers.

Introducing some kind of middle ground will result in needless
complexity and inevitable bugs.


Re: [kernel-hardening] Re: [PATCH v4 06/13] iscsi: ensure RNG is seeded before use

2017-06-16 Thread Jason A. Donenfeld
Hi Lee,

On Fri, Jun 16, 2017 at 11:58 PM, Lee Duncan  wrote:
> It seems like what you are doing is basically "good", i.e. if there is
> not enough random data, don't use it. But what happens in that case? The
> authentication fails? How does the user know to wait and try again?

The process just remains in interruptible (kill-able) sleep until
there is enough entropy, so the process doesn't need to do anything.
If the waiting is interrupted by a signal, it returns -ERESTARTSYS,
which follows the usual semantics of restartable syscalls.
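
The calling pattern, as used elsewhere in this patchset with
get_random_bytes_wait(), is then simply:

	u8 challenge[CHAP_CHALLENGE_LENGTH];
	int ret = get_random_bytes_wait(challenge, sizeof(challenge));

	/* Sleeps (interruptibly) until the CRNG is seeded; a signal
	 * during the wait yields -ERESTARTSYS, so the syscall can be
	 * transparently restarted. */
	if (unlikely(ret))
		return ret;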

Jason


Re: [PATCH] random: silence compiler warnings and fix race

2017-06-16 Thread Jason A. Donenfeld
On Fri, Jun 16, 2017 at 4:35 PM, Sebastian Andrzej Siewior
 wrote:
> I wouldn't just push the lock one up as is but move that write part to
> crng_init to remain within the locked section. Like that:

We can't quite do that, because invalidate_batched_entropy() needs to
be called _before_ crng_init is set. Otherwise, a concurrent call to
get_random_u32/u64() could see the new crng_init value while the
batched entropy is still old.
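
Distilled, the ordering requirement looks like this (an illustration,
not the literal kernel code):

	/* Correct: throw away stale batched output first, then let
	 * readers observe the new state. */
	invalidate_batched_entropy();
	crng_init = 2;

	/* Wrong: after crng_init = 2, a concurrent get_random_u32()
	 * decides it no longer needs the reset lock and may hand out
	 * entropy that was batched before the CRNG was seeded. */
	crng_init = 2;
	invalidate_batched_entropy();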

> Are you sure about that? I am not sure that the gcc will inline "crng_init"
> read twice. It is not a local variable. READ_ONCE() is usually used
> where gcc could cache a memory access but you do not want this. But hey!
> If someone knows better I am here to learn.

The whole purpose is that I _want_ it to cache the memory access so
that it is _not_ inlined. So, based on your understanding, it does
exactly what I intended it to do. The reason is that I'd like to avoid
a lock imbalance, which could happen if the read is inlined.

Jason


Re: [PATCH v5 01/13] random: invalidate batched entropy after crng init

2017-06-16 Thread Jason A. Donenfeld
On Fri, Jun 16, 2017 at 10:31 AM, Sebastian Andrzej Siewior
 wrote:
> am talking about. Again:

I actually figured that out myself after sending the initial email, so
then I wrote a follow-up patch which I attached to this thread. You
should have received it. Can you take a look?

https://lkml.org/lkml/2017/6/14/942


[PATCH] random: silence compiler warnings and fix race

2017-06-14 Thread Jason A. Donenfeld
Odd versions of gcc for the sh4 architecture will actually warn about
flags being used while uninitialized, so we set them to zero. Non-crazy
gccs will optimize that out again, so it doesn't make a difference.

Next, over-aggressive gccs could inline the expression that defines
use_lock, which could then introduce a race resulting in a lock
imbalance. By using READ_ONCE, we prevent that fate. We also make
that assignment const, so that gcc can still optimize a nice amount.

Finally, we fix a potential deadlock between primary_crng.lock and
batched_entropy_reset_lock, where they could be called in opposite
order. Moving the call to invalidate_batched_entropy to outside the lock
rectifies this issue.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
---
Ted -- the first part of this is the fixup patch we discussed earlier.
Then I added on top a fix for a potentially related race.

I'm not totally convinced that moving this block to outside the spinlock
is 100% okay, so please give this a close look before merging.


 drivers/char/random.c | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index e870f329db88..01a260f67437 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -803,13 +803,13 @@ static int crng_fast_load(const char *cp, size_t len)
p[crng_init_cnt % CHACHA20_KEY_SIZE] ^= *cp;
cp++; crng_init_cnt++; len--;
}
+   spin_unlock_irqrestore(&primary_crng.lock, flags);
if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
invalidate_batched_entropy();
crng_init = 1;
wake_up_interruptible(&crng_init_wait);
pr_notice("random: fast init done\n");
}
-   spin_unlock_irqrestore(&primary_crng.lock, flags);
return 1;
 }
 
@@ -841,6 +841,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
}
memzero_explicit(&buf, sizeof(buf));
crng->init_time = jiffies;
+   spin_unlock_irqrestore(&primary_crng.lock, flags);
if (crng == &primary_crng && crng_init < 2) {
invalidate_batched_entropy();
crng_init = 2;
@@ -848,7 +849,6 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
wake_up_interruptible(&crng_init_wait);
pr_notice("random: crng init done\n");
}
-   spin_unlock_irqrestore(&primary_crng.lock, flags);
 }
 
 static inline void crng_wait_ready(void)
@@ -2041,8 +2041,8 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
 u64 get_random_u64(void)
 {
u64 ret;
-   bool use_lock = crng_init < 2;
-   unsigned long flags;
+   bool use_lock = READ_ONCE(crng_init) < 2;
+   unsigned long flags = 0;
struct batched_entropy *batch;
 
 #if BITS_PER_LONG == 64
@@ -2073,8 +2073,8 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
 u32 get_random_u32(void)
 {
u32 ret;
-   bool use_lock = crng_init < 2;
-   unsigned long flags;
+   bool use_lock = READ_ONCE(crng_init) < 2;
+   unsigned long flags = 0;
struct batched_entropy *batch;
 
	if (arch_get_random_int(&ret))
-- 
2.13.1



Re: [PATCH v5 01/13] random: invalidate batched entropy after crng init

2017-06-14 Thread Jason A. Donenfeld
There's a potential race that I fixed in my v5 of that patch set, but
Ted only took v4, and for whatever reason has been to busy to submit
the additional patch I already posted showing the diff between v4
Hopefully he actually gets around to it and sends this for the next
rc. Here it is:

https://patchwork.kernel.org/patch/9774563/


[PATCH] rsa-pkcs1pad: use constant time memory comparison for MACs

2017-06-11 Thread Jason A. Donenfeld
Otherwise, we enable all sorts of forgeries via timing attack.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Suggested-by: Stephan Müller <smuel...@chronox.de>
Cc: sta...@vger.kernel.org
Cc: Herbert Xu <herb...@gondor.apana.org.au>
Cc: linux-crypto@vger.kernel.org
---
 crypto/rsa-pkcs1pad.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
index 8baab4307f7b..7830d304dff6 100644
--- a/crypto/rsa-pkcs1pad.c
+++ b/crypto/rsa-pkcs1pad.c
@@ -496,7 +496,7 @@ static int pkcs1pad_verify_complete(struct akcipher_request 
*req, int err)
goto done;
pos++;
 
-   if (memcmp(out_buf + pos, digest_info->data, digest_info->size))
+   if (crypto_memneq(out_buf + pos, digest_info->data, digest_info->size))
goto done;
 
pos += digest_info->size;
-- 
2.13.1
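
For background: memcmp() returns at the first differing byte, so
response timing leaks how many leading bytes of a forged value were
correct. A constant-time comparison accumulates over the full length
instead; conceptually (a simplified sketch -- the kernel's real
crypto_memneq also takes steps to defeat compiler short-circuiting):

static int memneq_sketch(const u8 *a, const u8 *b, size_t n)
{
	u8 diff = 0;
	size_t i;

	/* Always touches all n bytes, whether or not they match. */
	for (i = 0; i < n; i++)
		diff |= a[i] ^ b[i];
	return diff;	/* 0 iff equal; runtime independent of contents */
}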



Re: [PATCH v4 12/13] bluetooth/smp: ensure RNG is properly seeded before ECDH use

2017-06-08 Thread Jason A. Donenfeld
On Thu, Jun 8, 2017 at 7:05 PM, Marcel Holtmann  wrote:
> on a powered-down controller, you cannot do any crypto. SMP is only during
> a connection and the RPAs are only generated when needed. So yes, doing
> this once in hci_power_on is plenty. However, we might want to limit this
> to LE-capable controllers, since for BR/EDR-only controllers this is not
> needed. For A2MP I need to check whether we need the random numbers seeded
> there. However, this is hidden behind the high speed feature.

Okay so it sounds like certain controllers will use this and certain
won't, and so it might be slightly complicated to follow the mouse
through the tube ideally. In that case, I'd recommend continuing with
this current patchset, which just adds the wait (which is a no-op if
it's already seeded) close to the actual call sites.

Can you review that patch and give your Signed-off-by if it looks good?

Jason


Re: [kernel-hardening] Re: [PATCH v4 06/13] iscsi: ensure RNG is seeded before use

2017-06-08 Thread Jason A. Donenfeld
On Thu, Jun 8, 2017 at 4:43 AM, Theodore Ts'o <ty...@mit.edu> wrote:
> What was the testing that was done for commit?  It looks safe, but I'm
> unfamiliar enough with how the iSCSI authentication works that I'd
> prefer getting an ack'ed by from the iSCSI maintainers or
> alternativel, information about how to kick off some kind of automated
> test suite ala xfstests for file systems.

Only very basic testing from my end.

I'm thus adding the iSCSI list to see if they'll have a look (patch reattached).

Jason
From 1adecf785526a2a96104767807140b9e1a9e2a27 Mon Sep 17 00:00:00 2001
From: "Jason A. Donenfeld" <ja...@zx2c4.com>
Date: Mon, 5 Jun 2017 05:09:54 +0200
Subject: [PATCH] iscsi: ensure RNG is seeded before use

It's not safe to use weak random data here, especially for the challenge
response randomness. Since we're always in process context, it's safe to
simply wait until we have enough randomness to carry out the
authentication correctly.

While we're at it, we clean up a small memleak during an error
condition.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: "Nicholas A. Bellinger" <n...@linux-iscsi.org>
Cc: Lee Duncan <ldun...@suse.com>
Cc: Chris Leech <cle...@redhat.com>
---
 drivers/target/iscsi/iscsi_target_auth.c  | 14 +++---
 drivers/target/iscsi/iscsi_target_login.c | 22 ++
 2 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/drivers/target/iscsi/iscsi_target_auth.c b/drivers/target/iscsi/iscsi_target_auth.c
index 903b667f8e01..f9bc8ec6fb6b 100644
--- a/drivers/target/iscsi/iscsi_target_auth.c
+++ b/drivers/target/iscsi/iscsi_target_auth.c
@@ -47,18 +47,21 @@ static void chap_binaryhex_to_asciihex(char *dst, char *src, int src_len)
 	}
 }
 
-static void chap_gen_challenge(
+static int chap_gen_challenge(
 	struct iscsi_conn *conn,
 	int caller,
 	char *c_str,
 	unsigned int *c_len)
 {
+	int ret;
 	unsigned char challenge_asciihex[CHAP_CHALLENGE_LENGTH * 2 + 1];
 	struct iscsi_chap *chap = conn->auth_protocol;
 
 	memset(challenge_asciihex, 0, CHAP_CHALLENGE_LENGTH * 2 + 1);
 
-	get_random_bytes(chap->challenge, CHAP_CHALLENGE_LENGTH);
+	ret = get_random_bytes_wait(chap->challenge, CHAP_CHALLENGE_LENGTH);
+	if (unlikely(ret))
+		return ret;
	chap_binaryhex_to_asciihex(challenge_asciihex, chap->challenge,
				   CHAP_CHALLENGE_LENGTH);
 	/*
@@ -69,6 +72,7 @@ static void chap_gen_challenge(
 
 	pr_debug("[%s] Sending CHAP_C=0x%s\n\n", (caller) ? "server" : "client",
 			challenge_asciihex);
+	return 0;
 }
 
 static int chap_check_algorithm(const char *a_str)
@@ -143,6 +147,7 @@ static struct iscsi_chap *chap_server_open(
 	case CHAP_DIGEST_UNKNOWN:
 	default:
 		pr_err("Unsupported CHAP_A value\n");
+		kfree(conn->auth_protocol);
 		return NULL;
 	}
 
@@ -156,7 +161,10 @@ static struct iscsi_chap *chap_server_open(
 	/*
 	 * Generate Challenge.
 	 */
-	chap_gen_challenge(conn, 1, aic_str, aic_len);
+	if (chap_gen_challenge(conn, 1, aic_str, aic_len) < 0) {
+		kfree(conn->auth_protocol);
+		return NULL;
+	}
 
 	return chap;
 }
diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
index 92b96b51d506..e9bdc8b86e7d 100644
--- a/drivers/target/iscsi/iscsi_target_login.c
+++ b/drivers/target/iscsi/iscsi_target_login.c
@@ -245,22 +245,26 @@ int iscsi_check_for_session_reinstatement(struct iscsi_conn *conn)
 	return 0;
 }
 
-static void iscsi_login_set_conn_values(
+static int iscsi_login_set_conn_values(
 	struct iscsi_session *sess,
 	struct iscsi_conn *conn,
 	__be16 cid)
 {
+	int ret;
 	conn->sess		= sess;
 	conn->cid		= be16_to_cpu(cid);
 	/*
	 * Generate a random Status sequence number (statsn) for the new
	 * iSCSI connection.
	 */
-	get_random_bytes(&conn->stat_sn, sizeof(u32));
+	ret = get_random_bytes_wait(&conn->stat_sn, sizeof(u32));
+	if (unlikely(ret))
+		return ret;
 
	mutex_lock(&auth_id_lock);
	conn->auth_id		= iscsit_global->auth_id++;
	mutex_unlock(&auth_id_lock);
+	return 0;
 }
 
 __printf(2, 3) int iscsi_change_param_sprintf(
@@ -306,7 +310,11 @@ static int iscsi_login_zero_tsih_s1(
 		return -ENOMEM;
 	}
 
-	iscsi_login_set_conn_values(sess, conn, pdu->cid);
+	ret = iscsi_login_set_conn_values(sess, conn, pdu->cid);
+	if (unlikely(ret)) {
+		kfree(sess);
+		return ret;
+	}
 	sess->init_task_tag	= pdu->itt;
 	memcpy(>isid, pdu->isid, 6);
 	sess->exp_cmd_sn	= be32_to_cpu(pdu->cmdsn);
@@ -497,8 +505,7 @@ static int iscsi_login_non_zero_tsih_s1(
 {
 	struct iscsi_login_req *pdu = (struct iscsi_login_req *)buf;
 
-	iscsi_login_set_conn_values(NULL, conn, pdu->cid);
-	return 0;
+	return iscsi_login_set_conn_values(NULL, conn, pdu->cid);
 }
 
 /*
@@ -554,9 +561,8 @@ static int iscsi_login_non_zero_tsih_s2(
		atomic_set(&sess->session_continuation, 1);
	spin_unlock_bh(&sess->conn_lock);
 
-	iscsi_login_set_conn_values(sess, conn, pdu-&

Re: [PATCH v4 12/13] bluetooth/smp: ensure RNG is properly seeded before ECDH use

2017-06-08 Thread Jason A. Donenfeld
Hello Marcel,

On Thu, Jun 8, 2017 at 7:04 AM, Marcel Holtmann  wrote:
> yes, there are plenty of commands needed before a controller becomes usable.

That doesn't address with precision what Ted was wondering.
Specifically, the inquiry is: can you confirm with certainty whether
or not all calls to get_random_bytes() in the bluetooth directory are
*necessarily* going to come after a call to hci_power_on()?

Jason


Re: [PATCH v4 13/13] random: warn when kernel uses unseeded randomness

2017-06-08 Thread Jason A. Donenfeld
On Thu, Jun 8, 2017 at 10:19 AM, Theodore Ts'o  wrote:
> At the very least we probably should do a logical "uniq" on the output
> (e.g., if we have complained about the previous callsite, don't whinge
> about it again).

That seems okay to me.
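
One way such a "logical uniq" could look, as an illustrative sketch only
(the actual squelching logic may differ):

	/* Remember the last offending callsite and only warn when it
	 * changes, so one noisy caller doesn't flood the log. */
	static unsigned long previous_ip;

	if (!crng_ready() && READ_ONCE(previous_ip) != _RET_IP_) {
		WRITE_ONCE(previous_ip, _RET_IP_);
		pr_notice("random: %pF called with crng_init=%d\n",
			  (void *)_RET_IP_, crng_init);
	}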


Re: [PATCH v4 04/13] security/keys: ensure RNG is seeded before use

2017-06-07 Thread Jason A. Donenfeld
On Thu, Jun 8, 2017 at 2:50 AM, Jason A. Donenfeld <ja...@zx2c4.com> wrote:
> On Thu, Jun 8, 2017 at 2:31 AM, Theodore Ts'o <ty...@mit.edu> wrote:
>> I'm guessing you changed key_alloc_serial() to return an int back when
>> you were thinking that you might use get_random_bytes_wait(), which
>> could return -ERESTARTSYS.
>>
>> Now that you're not doing this, but using get_random_u32() instead,
>> there's no point to change the function signature of
>> key_alloc_serial() and add an error check in key_alloc() that will
>> never fail, right?  That's just adding a dead code path.  Which the
>> compiler can probably optimize away, but why make the code slightly
>> harder to read than necessasry?
>
> Good catch, and thanks for reading these so thoroughly that you caught
> the churn artifacts. Do you want me to clean this up and resubmit, or
> are you planning on adjusting it in the dev branch?

Fixed it up here if you just want to grab this instead:

https://git.kernel.org/pub/scm/linux/kernel/git/zx2c4/linux.git/patch/?id=a0361e55bce30ace529ed8b28bd452e3ac0ee91f


Re: [PATCH v4 01/13] random: invalidate batched entropy after crng init

2017-06-07 Thread Jason A. Donenfeld
On Thu, Jun 8, 2017 at 1:58 AM, Theodore Ts'o  wrote:
> Thanks, applied.  This will be on the for_stable that I will be
> sending to Linus sometime during 4.12-rcX.

I think you might have just missed the kbuild test robot complaining
about an incorrect compiler warning, when using an ancient gcc for the
sh platform. I fixed that bug in v5, which I posted about an hour
ago. The only change was a single commit, so you can just cherry-pick
that as you wish:

https://git.kernel.org/pub/scm/linux/kernel/git/zx2c4/linux.git/patch/?id=2f390fd961d1f97738d7af4f4797542e05f4607e

Other than that one commit change, it was the same.

Or, if it's easier, the tag for-random-4.12 in
git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random.git is for you.


Re: [PATCH v4 04/13] security/keys: ensure RNG is seeded before use

2017-06-07 Thread Jason A. Donenfeld
On Thu, Jun 8, 2017 at 2:31 AM, Theodore Ts'o  wrote:
> I'm guessing you changed key_alloc_serial() to return an int back when
> you were thinking that you might use get_random_bytes_wait(), which
> could return -ERESTARTSYS.
>
> Now that you're not doing this, but using get_random_u32() instead,
> there's no point to change the function signature of
> key_alloc_serial() and add an error check in key_alloc() that will
> never fail, right?  That's just adding a dead code path.  Which the
> compiler can probably optimize away, but why make the code slightly
> harder to read than necessasry?

Good catch, and thanks for reading these so thoroughly that you caught
the churn artifacts. Do you want me to clean this up and resubmit, or
are you planning on adjusting it in the dev branch?


Re: [kernel-hardening] [PATCH v4 05/13] crypto/rng: ensure that the RNG is ready before using

2017-06-07 Thread Jason A. Donenfeld
On Thu, Jun 8, 2017 at 2:41 AM, Theodore Ts'o  wrote:
> The use in keys/big_key is _being_ removed, so this commit is
> dependent on that commit landing, correct?  (Order matters, because
> otherwise we don't want to potentially screw up doing a kernel bisect
> and causing their kernel to deadlock during the boot while they are
> trying to track down an unreleated problem.)

Yes. It's actually landing with get_random_bytes, to avoid a
dependency problem when merging. After these both lands, I'll submit a
third changing that over to get_random_bytes_wait in the right place.


Re: [kernel-hardening] [PATCH v4 08/13] cifs: use get_random_u32 for 32-bit lock random

2017-06-07 Thread Jason A. Donenfeld
On Thu, Jun 8, 2017 at 2:25 AM, Theodore Ts'o  wrote:
> There's a bigger problem here, which is that cifs_lock_secret is a
> 32-bit value which is being used to obscure flock->fl_owner before it
> is sent across the wire.  But flock->fl_owner is a pointer to the
> struct file *, so 64-bit architecture, the high 64-bits of a kernel
> pointer is being exposed to anyone using tcpdump.  (Oops, I'm showing
> my age; I guess all the cool kids are using Wireshark these days.)
>
> Worse, the obscuring is being done using XOR.  How an active attacker
> might be able to trivially reverse engineer the 32-bit "secret" is
> left as an exercise to the reader.  The bottom line is if the goal is
> to hide the memory location of a struct file from an attacker,
> cifs_lock_secret is about as useful as a TSA agent doing security
> theatre at an airport.  Which is to say, it makes the civilians feel
> good.  :-)

High five for taking the deep dive and actually reading how this all
works. Nice bug!

> Not waiting
> for the CRNG to be fully initialized is the *least* of its problems.

The kernel is vast and filled with tons of bugs of many sorts. On this
reasoning, maybe I should spend my time auditing web apps instead,
which are usually the "front door" of bugs? I like the puzzles of
random.c. I also had a real world need for wait_for_random_bytes() in
a module I'm writing.

But anyway, your general point is a really good one. Tons of callers
of the random functions are doing it wrong in one way or another.
Spending time looking at those is probably a good idea...

> Anyway, I'll include this commit in the dev branch of the random tree,
> since it's not going to make things worse.

Great, thanks.
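
To make the XOR point concrete: a 32-bit secret is zero-extended before
it is XORed into a 64-bit pointer, so the top half of the pointer passes
through unchanged. A self-contained demonstration in plain C, with
made-up values standing in for the cifs ones:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t kernel_ptr = 0xffff880012345678ULL; /* pretend struct file * */
	uint32_t secret = 0xdeadbeef;                /* pretend cifs_lock_secret */

	/* The secret is zero-extended to 64 bits before the XOR. */
	uint64_t obscured = kernel_ptr ^ (uint64_t)secret;

	printf("obscured value: %016llx\n", (unsigned long long)obscured);
	printf("high 32 bits leak unchanged: %08llx\n",
	       (unsigned long long)(obscured >> 32));

	/* The low half is weak too: (a ^ s) ^ (b ^ s) == a ^ b, so the
	 * deltas between any two lock owners are sent in the clear. */
	return 0;
}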


[PATCH v5 02/13] random: add synchronous API for the urandom pool

2017-06-07 Thread Jason A. Donenfeld
This enables users of get_random_{bytes,u32,u64,int,long} to wait until
the pool is ready before using these functions, in case they actually want
to have reliable randomness.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
---
 drivers/char/random.c  | 41 +++--
 include/linux/random.h |  1 +
 2 files changed, 32 insertions(+), 10 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index d35da1603e12..77587ac1ecb2 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -851,11 +851,6 @@ static void crng_reseed(struct crng_state *crng, struct 
entropy_store *r)
	spin_unlock_irqrestore(&primary_crng.lock, flags);
 }
 
-static inline void crng_wait_ready(void)
-{
-   wait_event_interruptible(crng_init_wait, crng_ready());
-}
-
 static void _extract_crng(struct crng_state *crng,
  __u8 out[CHACHA20_BLOCK_SIZE])
 {
@@ -1477,7 +1472,10 @@ static ssize_t extract_entropy_user(struct entropy_store 
*r, void __user *buf,
  * number of good random numbers, suitable for key generation, seeding
  * TCP sequence numbers, etc.  It does not rely on the hardware random
  * number generator.  For random bytes direct from the hardware RNG
- * (when available), use get_random_bytes_arch().
+ * (when available), use get_random_bytes_arch(). In order to ensure
+ * that the randomness provided by this function is okay, the function
+ * wait_for_random_bytes() should be called and return 0 at least once
+ * at any point prior.
  */
 void get_random_bytes(void *buf, int nbytes)
 {
@@ -1507,6 +1505,24 @@ void get_random_bytes(void *buf, int nbytes)
 EXPORT_SYMBOL(get_random_bytes);
 
 /*
+ * Wait for the urandom pool to be seeded and thus guaranteed to supply
+ * cryptographically secure random numbers. This applies to: the /dev/urandom
+ * device, the get_random_bytes function, and the get_random_{u32,u64,int,long}
+ * family of functions. Using any of these functions without first calling
+ * this function forfeits the guarantee of security.
+ *
+ * Returns: 0 if the urandom pool has been seeded.
+ *  -ERESTARTSYS if the function was interrupted by a signal.
+ */
+int wait_for_random_bytes(void)
+{
+   if (likely(crng_ready()))
+   return 0;
+   return wait_event_interruptible(crng_init_wait, crng_ready());
+}
+EXPORT_SYMBOL(wait_for_random_bytes);
+
+/*
  * Add a callback function that will be invoked when the nonblocking
  * pool is initialised.
  *
@@ -1860,6 +1876,8 @@ const struct file_operations urandom_fops = {
 SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count,
unsigned int, flags)
 {
+   int ret;
+
if (flags & ~(GRND_NONBLOCK|GRND_RANDOM))
return -EINVAL;
 
@@ -1872,9 +1890,9 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, 
count,
if (!crng_ready()) {
if (flags & GRND_NONBLOCK)
return -EAGAIN;
-   crng_wait_ready();
-   if (signal_pending(current))
-   return -ERESTARTSYS;
+   ret = wait_for_random_bytes();
+   if (unlikely(ret))
+   return ret;
}
return urandom_read(NULL, buf, count, NULL);
 }
@@ -2035,7 +2053,10 @@ static rwlock_t batched_entropy_reset_lock = 
__RW_LOCK_UNLOCKED(batched_entropy_reset_lock);
 /*
  * Get a random word for internal kernel use only. The quality of the random
  * number is either as good as RDRAND or as good as /dev/urandom, with the
- * goal of being quite fast and not depleting entropy.
+ * goal of being quite fast and not depleting entropy. In order to ensure
+ * that the randomness provided by this function is okay, the function
+ * wait_for_random_bytes() should be called and return 0 at least once
+ * at any point prior.
  */
 static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
 u64 get_random_u64(void)
diff --git a/include/linux/random.h b/include/linux/random.h
index ed5c3838780d..e29929347c95 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -34,6 +34,7 @@ extern void add_input_randomness(unsigned int type, unsigned 
int code,
 extern void add_interrupt_randomness(int irq, int irq_flags) __latent_entropy;
 
 extern void get_random_bytes(void *buf, int nbytes);
+extern int wait_for_random_bytes(void);
 extern int add_random_ready_callback(struct random_ready_callback *rdy);
 extern void del_random_ready_callback(struct random_ready_callback *rdy);
 extern void get_random_bytes_arch(void *buf, int nbytes);
-- 
2.13.0
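
A minimal usage sketch of the API added above, from the point of view
of a module that needs trustworthy key material in process context; the
module and buffer names are invented for illustration:

#include <linux/module.h>
#include <linux/random.h>

static u8 demo_key[32];

static int __init demo_init(void)
{
	int ret;

	/* Sleep (interruptibly) until the urandom pool is seeded. Only
	 * appropriate in true process context; a built-in doing this
	 * could stall boot, which is why the keys patch in this series
	 * avoids blocking in serial allocation. */
	ret = wait_for_random_bytes();
	if (ret)
		return ret;	/* -ERESTARTSYS if interrupted by a signal */

	/* From here on, get_random_bytes() output is properly seeded. */
	get_random_bytes(demo_key, sizeof(demo_key));
	return 0;
}
module_init(demo_init);
MODULE_LICENSE("GPL");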



[PATCH v5 04/13] security/keys: ensure RNG is seeded before use

2017-06-07 Thread Jason A. Donenfeld
Otherwise, we might use bad random numbers which, particularly in the
case of IV generation, could be quite bad. It makes sense to use the
synchronous API here, because we're always in process context (as the
code is littered with GFP_KERNEL and the like). However, we can't change
to using a blocking function in key serial allocation, because this will
block booting in some configurations, so here we use the more
appropriate get_random_u32, which will use RDRAND if available.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: David Howells <dhowe...@redhat.com>
Cc: Mimi Zohar <zo...@linux.vnet.ibm.com>
Cc: David Safford <saff...@us.ibm.com>
---
 security/keys/encrypted-keys/encrypted.c |  8 +---
 security/keys/key.c  | 16 
 2 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/security/keys/encrypted-keys/encrypted.c 
b/security/keys/encrypted-keys/encrypted.c
index 0010955d7876..d51a28fc5cd5 100644
--- a/security/keys/encrypted-keys/encrypted.c
+++ b/security/keys/encrypted-keys/encrypted.c
@@ -777,10 +777,12 @@ static int encrypted_init(struct encrypted_key_payload 
*epayload,
 
__ekey_init(epayload, format, master_desc, datalen);
if (!hex_encoded_iv) {
-   get_random_bytes(epayload->iv, ivsize);
+   ret = get_random_bytes_wait(epayload->iv, ivsize);
+   if (unlikely(ret))
+   return ret;
 
-   get_random_bytes(epayload->decrypted_data,
-epayload->decrypted_datalen);
+   ret = get_random_bytes_wait(epayload->decrypted_data,
+   epayload->decrypted_datalen);
} else
ret = encrypted_key_decrypt(epayload, format, hex_encoded_iv);
return ret;
diff --git a/security/keys/key.c b/security/keys/key.c
index 455c04d80bbb..b72078e532f2 100644
--- a/security/keys/key.c
+++ b/security/keys/key.c
@@ -134,17 +134,15 @@ void key_user_put(struct key_user *user)
  * Allocate a serial number for a key.  These are assigned randomly to avoid
  * security issues through covert channel problems.
  */
-static inline void key_alloc_serial(struct key *key)
+static inline int key_alloc_serial(struct key *key)
 {
struct rb_node *parent, **p;
struct key *xkey;
 
-   /* propose a random serial number and look for a hole for it in the
-* serial number tree */
+   /* propose a non-negative random serial number and look for a hole for
+* it in the serial number tree */
do {
-   get_random_bytes(&key->serial, sizeof(key->serial));
-
-   key->serial >>= 1; /* negative numbers are not permitted */
+   key->serial = get_random_u32() >> 1;
} while (key->serial < 3);
 
	spin_lock(&key_serial_lock);
@@ -170,7 +168,7 @@ static inline void key_alloc_serial(struct key *key)
	rb_insert_color(&key->serial_node, &key_serial_tree);
 
	spin_unlock(&key_serial_lock);
-   return;
+   return 0;
 
/* we found a key with the proposed serial number - walk the tree from
 * that point looking for the next unused serial number */
@@ -314,7 +312,9 @@ struct key *key_alloc(struct key_type *type, const char 
*desc,
 
/* publish the key by giving it a serial number */
	atomic_inc(&user->nkeys);
-   key_alloc_serial(key);
+   ret = key_alloc_serial(key);
+   if (ret < 0)
+   goto security_error;
 
 error:
return key;
-- 
2.13.0
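
The serial-number change is compact enough to restate standalone. A
userspace rendering of the proposal loop, with get_random_u32() stubbed
out, shows why the shift plus the "< 3" retry yields a positive serial
that skips the reserved low values:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

/* Stand-in for the kernel's get_random_u32(); illustrative only. */
static uint32_t get_random_u32(void)
{
	return ((uint32_t)rand() << 16) ^ (uint32_t)rand();
}

int main(void)
{
	int32_t serial;

	/* Mirrors key_alloc_serial(): shifting right by one clears the
	 * sign bit, and retrying while serial < 3 skips the reserved
	 * small values, exactly like the old ">>= 1" loop did. */
	do {
		serial = (int32_t)(get_random_u32() >> 1);
	} while (serial < 3);

	printf("proposed key serial: %d\n", serial);
	return 0;
}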



[PATCH v5 00/13] Unseeded In-Kernel Randomness Fixes

2017-06-07 Thread Jason A. Donenfeld
It looks like critique of this has come to an end. Could somebody take
this into their tree for 4.12?

As discussed in [1], there is a problem with get_random_bytes being
used before the RNG has actually been seeded. The solution for fixing
this appears to be multi-pronged. One of those prongs involves adding
a simple blocking API so that modules that use the RNG in process
context can just sleep (in an interruptable manner) until the RNG is
ready to be used. This winds up being a very useful API that covers
a few use cases, several of which are included in this patch set.

[1] http://www.openwall.com/lists/kernel-hardening/2017/06/02/2

Changes v4->v5:
  - Old versions of gcc warned on an uninitialized variable, so set
this to silence warning.

Jason A. Donenfeld (13):
  random: invalidate batched entropy after crng init
  random: add synchronous API for the urandom pool
  random: add get_random_{bytes,u32,u64,int,long,once}_wait family
  security/keys: ensure RNG is seeded before use
  crypto/rng: ensure that the RNG is ready before using
  iscsi: ensure RNG is seeded before use
  ceph: ensure RNG is seeded before using
  cifs: use get_random_u32 for 32-bit lock random
  rhashtable: use get_random_u32 for hash_rnd
  net/neighbor: use get_random_u32 for 32-bit hash random
  net/route: use get_random_int for random counter
  bluetooth/smp: ensure RNG is properly seeded before ECDH use
  random: warn when kernel uses unseeded randomness

 crypto/rng.c  |  6 +-
 drivers/char/random.c | 93 +++
 drivers/target/iscsi/iscsi_target_auth.c  | 14 -
 drivers/target/iscsi/iscsi_target_login.c | 22 +---
 fs/cifs/cifsfs.c  |  2 +-
 include/linux/net.h   |  2 +
 include/linux/once.h  |  2 +
 include/linux/random.h| 26 +
 lib/Kconfig.debug | 16 ++
 lib/rhashtable.c  |  2 +-
 net/bluetooth/hci_request.c   |  6 ++
 net/bluetooth/smp.c   | 18 --
 net/ceph/ceph_common.c|  6 +-
 net/core/neighbour.c  |  3 +-
 net/ipv4/route.c  |  3 +-
 security/keys/encrypted-keys/encrypted.c  |  8 ++-
 security/keys/key.c   | 16 +++---
 17 files changed, 198 insertions(+), 47 deletions(-)

-- 
2.13.0



[PATCH v5 07/13] ceph: ensure RNG is seeded before using

2017-06-07 Thread Jason A. Donenfeld
Ceph uses the RNG for various nonce generations, and it shouldn't accept
using bad randomness. So, we wait for the RNG to be properly seeded. We
do this by calling wait_for_random_bytes() in a function that is
certainly called in process context, early on, so that all subsequent
calls to get_random_bytes are necessarily acceptable.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: Ilya Dryomov <idryo...@gmail.com>
Cc: "Yan, Zheng" <z...@redhat.com>
Cc: Sage Weil <s...@redhat.com>
---
 net/ceph/ceph_common.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/net/ceph/ceph_common.c b/net/ceph/ceph_common.c
index 47e94b560ba0..0368a04995b3 100644
--- a/net/ceph/ceph_common.c
+++ b/net/ceph/ceph_common.c
@@ -598,7 +598,11 @@ struct ceph_client *ceph_create_client(struct ceph_options 
*opt, void *private)
 {
struct ceph_client *client;
struct ceph_entity_addr *myaddr = NULL;
-   int err = -ENOMEM;
+   int err;
+
+   err = wait_for_random_bytes();
+   if (err < 0)
+   return ERR_PTR(err);
 
client = kzalloc(sizeof(*client), GFP_KERNEL);
if (client == NULL)
-- 
2.13.0
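
The shape here generalizes nicely: a constructor that returns ERR_PTR
can map an interrupted wait straight onto its existing error path. A
hedged sketch of that pattern, with the ceph specifics stripped out and
the demo_* names invented:

#include <linux/err.h>
#include <linux/random.h>
#include <linux/slab.h>

struct demo_client {
	u64 nonce;
};

static struct demo_client *demo_create(void)
{
	struct demo_client *c;
	int err;

	/* Gate once, early, in process context; every later call to
	 * get_random_bytes() in this object's lifetime is then seeded. */
	err = wait_for_random_bytes();
	if (err < 0)
		return ERR_PTR(err);

	c = kzalloc(sizeof(*c), GFP_KERNEL);
	if (!c)
		return ERR_PTR(-ENOMEM);

	get_random_bytes(&c->nonce, sizeof(c->nonce));
	return c;
}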



[PATCH v5 05/13] crypto/rng: ensure that the RNG is ready before using

2017-06-07 Thread Jason A. Donenfeld
Otherwise, we might be seeding the RNG using bad randomness, which is
dangerous. The one use of this function from within the kernel -- not
from userspace -- is being removed (keys/big_key), so that call site
isn't relevant in assessing this.

Cc: Herbert Xu <herb...@gondor.apana.org.au>
Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
---
 crypto/rng.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/crypto/rng.c b/crypto/rng.c
index f46dac5288b9..e042437e64b4 100644
--- a/crypto/rng.c
+++ b/crypto/rng.c
@@ -48,12 +48,14 @@ int crypto_rng_reset(struct crypto_rng *tfm, const u8 
*seed, unsigned int slen)
if (!buf)
return -ENOMEM;
 
-   get_random_bytes(buf, slen);
+   err = get_random_bytes_wait(buf, slen);
+   if (err)
+   goto out;
seed = buf;
}
 
err = crypto_rng_alg(tfm)->seed(tfm, seed, slen);
-
+out:
kzfree(buf);
return err;
 }
-- 
2.13.0
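
For context, the caller side of this change: passing a NULL seed makes
crypto_rng_reset() draw the seed itself, now via get_random_bytes_wait(),
so a reset may sleep or fail with -ERESTARTSYS and callers have to check.
A sketch, with the function name invented and error handling trimmed to
the essentials:

#include <crypto/rng.h>

static int demo_seed_drbg(void)
{
	struct crypto_rng *rng;
	int err;

	rng = crypto_alloc_rng("stdrng", 0, 0);
	if (IS_ERR(rng))
		return PTR_ERR(rng);

	/* NULL seed: the API kmallocs a buffer and fills it with
	 * get_random_bytes_wait(), per the hunk above. */
	err = crypto_rng_reset(rng, NULL, crypto_rng_seedsize(rng));

	crypto_free_rng(rng);
	return err;
}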



[PATCH v5 08/13] cifs: use get_random_u32 for 32-bit lock random

2017-06-07 Thread Jason A. Donenfeld
Using get_random_u32 here is faster, more fitting of the use case, and
just as cryptographically secure. It also has the benefit of providing
better randomness at early boot, which is sometimes when this is used.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: Steve French <sfre...@samba.org>
---
 fs/cifs/cifsfs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
index 9a1667e0e8d6..fe0c8dcc7dc7 100644
--- a/fs/cifs/cifsfs.c
+++ b/fs/cifs/cifsfs.c
@@ -1359,7 +1359,7 @@ init_cifs(void)
	spin_lock_init(&cifs_tcp_ses_lock);
	spin_lock_init(&GlobalMid_Lock);
 
-   get_random_bytes(&cifs_lock_secret, sizeof(cifs_lock_secret));
+   cifs_lock_secret = get_random_u32();
 
if (cifs_max_pending < 2) {
cifs_max_pending = 2;
-- 
2.13.0



[PATCH v5 06/13] iscsi: ensure RNG is seeded before use

2017-06-07 Thread Jason A. Donenfeld
It's not safe to use weak random data here, especially for the challenge
response randomness. Since we're always in process context, it's safe to
simply wait until we have enough randomness to carry out the
authentication correctly.

While we're at it, we clean up a small memleak during an error
condition.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: "Nicholas A. Bellinger" <n...@linux-iscsi.org>
Cc: Lee Duncan <ldun...@suse.com>
Cc: Chris Leech <cle...@redhat.com>
---
 drivers/target/iscsi/iscsi_target_auth.c  | 14 +++---
 drivers/target/iscsi/iscsi_target_login.c | 22 ++
 2 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/drivers/target/iscsi/iscsi_target_auth.c 
b/drivers/target/iscsi/iscsi_target_auth.c
index 903b667f8e01..f9bc8ec6fb6b 100644
--- a/drivers/target/iscsi/iscsi_target_auth.c
+++ b/drivers/target/iscsi/iscsi_target_auth.c
@@ -47,18 +47,21 @@ static void chap_binaryhex_to_asciihex(char *dst, char 
*src, int src_len)
}
 }
 
-static void chap_gen_challenge(
+static int chap_gen_challenge(
struct iscsi_conn *conn,
int caller,
char *c_str,
unsigned int *c_len)
 {
+   int ret;
unsigned char challenge_asciihex[CHAP_CHALLENGE_LENGTH * 2 + 1];
struct iscsi_chap *chap = conn->auth_protocol;
 
memset(challenge_asciihex, 0, CHAP_CHALLENGE_LENGTH * 2 + 1);
 
-   get_random_bytes(chap->challenge, CHAP_CHALLENGE_LENGTH);
+   ret = get_random_bytes_wait(chap->challenge, CHAP_CHALLENGE_LENGTH);
+   if (unlikely(ret))
+   return ret;
chap_binaryhex_to_asciihex(challenge_asciihex, chap->challenge,
CHAP_CHALLENGE_LENGTH);
/*
@@ -69,6 +72,7 @@ static void chap_gen_challenge(
 
pr_debug("[%s] Sending CHAP_C=0x%s\n\n", (caller) ? "server" : "client",
challenge_asciihex);
+   return 0;
 }
 
 static int chap_check_algorithm(const char *a_str)
@@ -143,6 +147,7 @@ static struct iscsi_chap *chap_server_open(
case CHAP_DIGEST_UNKNOWN:
default:
pr_err("Unsupported CHAP_A value\n");
+   kfree(conn->auth_protocol);
return NULL;
}
 
@@ -156,7 +161,10 @@ static struct iscsi_chap *chap_server_open(
/*
 * Generate Challenge.
 */
-   chap_gen_challenge(conn, 1, aic_str, aic_len);
+   if (chap_gen_challenge(conn, 1, aic_str, aic_len) < 0) {
+   kfree(conn->auth_protocol);
+   return NULL;
+   }
 
return chap;
 }
diff --git a/drivers/target/iscsi/iscsi_target_login.c 
b/drivers/target/iscsi/iscsi_target_login.c
index 92b96b51d506..e9bdc8b86e7d 100644
--- a/drivers/target/iscsi/iscsi_target_login.c
+++ b/drivers/target/iscsi/iscsi_target_login.c
@@ -245,22 +245,26 @@ int iscsi_check_for_session_reinstatement(struct 
iscsi_conn *conn)
return 0;
 }
 
-static void iscsi_login_set_conn_values(
+static int iscsi_login_set_conn_values(
struct iscsi_session *sess,
struct iscsi_conn *conn,
__be16 cid)
 {
+   int ret;
conn->sess  = sess;
conn->cid   = be16_to_cpu(cid);
/*
 * Generate a random Status sequence number (statsn) for the new
 * iSCSI connection.
 */
-   get_random_bytes(&conn->stat_sn, sizeof(u32));
+   ret = get_random_bytes_wait(&conn->stat_sn, sizeof(u32));
+   if (unlikely(ret))
+   return ret;
 
	mutex_lock(&auth_id_lock);
	conn->auth_id   = iscsit_global->auth_id++;
	mutex_unlock(&auth_id_lock);
+   return 0;
 }
 
 __printf(2, 3) int iscsi_change_param_sprintf(
@@ -306,7 +310,11 @@ static int iscsi_login_zero_tsih_s1(
return -ENOMEM;
}
 
-   iscsi_login_set_conn_values(sess, conn, pdu->cid);
+   ret = iscsi_login_set_conn_values(sess, conn, pdu->cid);
+   if (unlikely(ret)) {
+   kfree(sess);
+   return ret;
+   }
sess->init_task_tag = pdu->itt;
	memcpy(&sess->isid, pdu->isid, 6);
sess->exp_cmd_sn= be32_to_cpu(pdu->cmdsn);
@@ -497,8 +505,7 @@ static int iscsi_login_non_zero_tsih_s1(
 {
struct iscsi_login_req *pdu = (struct iscsi_login_req *)buf;
 
-   iscsi_login_set_conn_values(NULL, conn, pdu->cid);
-   return 0;
+   return iscsi_login_set_conn_values(NULL, conn, pdu->cid);
 }
 
 /*
@@ -554,9 +561,8 @@ static int iscsi_login_non_zero_tsih_s2(
	atomic_set(&sess->session_continuation, 1);
	spin_unlock_bh(&sess->conn_lock);
 
-   iscsi_login_set_conn_values(sess, conn, pdu->cid);
-
-   if (iscsi_copy_param_list(&conn->param_list,
+   if (iscsi_login_set_conn_values(sess, conn, pdu->cid) < 0 ||
+   

[PATCH v5 11/13] net/route: use get_random_int for random counter

2017-06-07 Thread Jason A. Donenfeld
Using get_random_int here is faster, more fitting of the use case, and
just as cryptographically secure. It also has the benefit of providing
better randomness at early boot, which is when many of these structures
are assigned.

Also, semantically, it's not really proper to have been assigning an
atomic_t in this way before, even if in practice it works fine.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: David Miller <da...@davemloft.net>
---
 net/ipv4/route.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index 6883b3d4ba8f..32a3332ec9cf 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -2944,8 +2944,7 @@ static __net_init int rt_genid_init(struct net *net)
 {
	atomic_set(&net->ipv4.rt_genid, 0);
	atomic_set(&net->fnhe_genid, 0);
-   get_random_bytes(&net->ipv4.dev_addr_genid,
-		sizeof(net->ipv4.dev_addr_genid));
+   atomic_set(&net->ipv4.dev_addr_genid, get_random_int());
return 0;
 }
 
-- 
2.13.0



[PATCH v5 12/13] bluetooth/smp: ensure RNG is properly seeded before ECDH use

2017-06-07 Thread Jason A. Donenfeld
This protocol uses lots of complex cryptography that relies on securely
generated random numbers. Thus, it's important that the RNG is actually
seeded before use. Fortunately, it appears we're always operating in
process context (there are many GFP_KERNEL allocations and other
sleeping operations), and so we can simply demand that the RNG is seeded
before we use it.

We adopt two strategies in this commit. The first is for the library code
that's called from other modules like hci or mgmt: here we just change
the call to get_random_bytes_wait, and return the result of the wait to
the caller, along with the other error codes of those functions like
usual. Then there's the SMP protocol handler itself, which makes many
many many calls to get_random_bytes during different phases. For this,
rather than have to change all the calls to get_random_bytes_wait and
propagate the error result, it's actually enough to just put a single
call to wait_for_random_bytes() at the beginning of the handler, to
ensure that all the subsequent invocations are safe, without having to
actually change them. Likewise, for the random address changing
function, we'd rather know early on in the function whether the RNG
initialization has been interrupted, rather than later, so we call
wait_for_random_bytes() at the top, so that later on the call to
get_random_bytes() is acceptable.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: Marcel Holtmann <mar...@holtmann.org>
Cc: Gustavo Padovan <gust...@padovan.org>
Cc: Johan Hedberg <johan.hedb...@gmail.com>
---
 net/bluetooth/hci_request.c |  6 ++
 net/bluetooth/smp.c | 18 ++
 2 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
index b5faff458d8b..4078057c4fd7 100644
--- a/net/bluetooth/hci_request.c
+++ b/net/bluetooth/hci_request.c
@@ -1406,6 +1406,12 @@ int hci_update_random_address(struct hci_request *req, 
bool require_privacy,
struct hci_dev *hdev = req->hdev;
int err;
 
+   if (require_privacy) {
+   err = wait_for_random_bytes();
+   if (unlikely(err))
+   return err;
+   }
+
/* If privacy is enabled use a resolvable private address. If
 * current RPA has expired or there is something else than
 * the current RPA in use, then generate a new one.
diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
index 14585edc9439..5fef1bc96f42 100644
--- a/net/bluetooth/smp.c
+++ b/net/bluetooth/smp.c
@@ -537,7 +537,9 @@ int smp_generate_rpa(struct hci_dev *hdev, const u8 
irk[16], bdaddr_t *rpa)
 
smp = chan->data;
 
-   get_random_bytes(&rpa->b[3], 3);
+   err = get_random_bytes_wait(&rpa->b[3], 3);
+   if (unlikely(err))
+   return err;
 
rpa->b[5] &= 0x3f;  /* Clear two most significant bits */
rpa->b[5] |= 0x40;  /* Set second most significant bit */
@@ -570,7 +572,9 @@ int smp_generate_oob(struct hci_dev *hdev, u8 hash[16], u8 
rand[16])
} else {
while (true) {
/* Seed private key with random number */
-   get_random_bytes(smp->local_sk, 32);
+   err = get_random_bytes_wait(smp->local_sk, 32);
+   if (unlikely(err))
+   return err;
 
/* Generate local key pair for Secure Connections */
if (!generate_ecdh_keys(smp->local_pk, smp->local_sk))
@@ -589,7 +593,9 @@ int smp_generate_oob(struct hci_dev *hdev, u8 hash[16], u8 
rand[16])
SMP_DBG("OOB Public Key Y: %32phN", smp->local_pk + 32);
SMP_DBG("OOB Private Key:  %32phN", smp->local_sk);
 
-   get_random_bytes(smp->local_rand, 16);
+   err = get_random_bytes_wait(smp->local_rand, 16);
+   if (unlikely(err))
+   return err;
 
err = smp_f4(smp->tfm_cmac, smp->local_pk, smp->local_pk,
 smp->local_rand, 0, hash);
@@ -2831,7 +2837,11 @@ static int smp_sig_channel(struct l2cap_chan *chan, 
struct sk_buff *skb)
struct hci_conn *hcon = conn->hcon;
struct smp_chan *smp;
__u8 code, reason;
-   int err = 0;
+   int err;
+
+   err = wait_for_random_bytes();
+   if (unlikely(err))
+   return err;
 
if (skb->len < 1)
return -EILSEQ;
-- 
2.13.0



[PATCH v5 13/13] random: warn when kernel uses unseeded randomness

2017-06-07 Thread Jason A. Donenfeld
This enables an important dmesg notification about when drivers have
used the crng without it being seeded first. Prior, these errors would
occur silently, and so there hasn't been a great way of diagnosing these
types of bugs for obscure setups. By adding this as a config option, we
can leave it on by default, so that we learn where these issues happen,
in the field, while still allowing some people to turn it off, if they
really know what they're doing and do not want the log entries.

However, we don't leave it _completely_ on by default. An earlier version
of this patch simply had `default y`. I'd really love that, but it turns
out, this problem with unseeded randomness being used is really quite
present and is going to take a long time to fix. Thus, as a compromise
between log-messages-for-all and nobody-knows, this is `default y`,
except it is also `depends on DEBUG_KERNEL`. This will ensure that the
curious see the messages while others don't have to.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
---
 drivers/char/random.c | 15 +--
 lib/Kconfig.debug | 16 
 2 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 77587ac1ecb2..5252690610ec 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -288,7 +288,6 @@
 #define SEC_XFER_SIZE  512
 #define EXTRACT_SIZE   10
 
-#define DEBUG_RANDOM_BOOT 0
 
 #define LONGS(x) (((x) + sizeof(unsigned long) - 1)/sizeof(unsigned long))
 
@@ -1481,7 +1480,7 @@ void get_random_bytes(void *buf, int nbytes)
 {
__u8 tmp[CHACHA20_BLOCK_SIZE];
 
-#if DEBUG_RANDOM_BOOT > 0
+#ifdef CONFIG_WARN_UNSEEDED_RANDOM
if (!crng_ready())
printk(KERN_NOTICE "random: %pF get_random_bytes called "
   "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
@@ -2075,6 +2074,12 @@ u64 get_random_u64(void)
return ret;
 #endif
 
+#ifdef CONFIG_WARN_UNSEEDED_RANDOM
+   if (!crng_ready())
+   printk(KERN_NOTICE "random: %pF get_random_u64 called "
+  "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
+#endif
+
	batch = &get_cpu_var(batched_entropy_u64);
	if (use_lock)
	read_lock_irqsave(&batched_entropy_reset_lock, flags);
@@ -2101,6 +2106,12 @@ u32 get_random_u32(void)
	if (arch_get_random_int(&ret))
return ret;
 
+#ifdef CONFIG_WARN_UNSEEDED_RANDOM
+   if (!crng_ready())
+   printk(KERN_NOTICE "random: %pF get_random_u32 called "
+  "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
+#endif
+
	batch = &get_cpu_var(batched_entropy_u32);
	if (use_lock)
	read_lock_irqsave(&batched_entropy_reset_lock, flags);
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index e4587ebe52c7..c4159605bfbf 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1209,6 +1209,22 @@ config STACKTRACE
  It is also used by various kernel debugging features that require
  stack trace generation.
 
+config WARN_UNSEEDED_RANDOM
+   bool "Warn when kernel uses unseeded randomness"
+   default y
+   depends on DEBUG_KERNEL
+   help
+ Some parts of the kernel contain bugs relating to their use of
+ cryptographically secure random numbers before it's actually possible
+ to generate those numbers securely. This setting ensures that these
+ flaws don't go unnoticed, by enabling a message, should this ever
+ occur. This will allow people with obscure setups to know when things
+ are going wrong, so that they might contact developers about fixing
+ it.
+
+ Say Y here, unless you simply do not care about using unseeded
+ randomness and do not want a potential warning message in your logs.
+
 config DEBUG_KOBJECT
bool "kobject debugging"
depends on DEBUG_KERNEL
-- 
2.13.0



[PATCH v5 09/13] rhashtable: use get_random_u32 for hash_rnd

2017-06-07 Thread Jason A. Donenfeld
This is much faster and just as secure. It also has the added benefit of
probably returning better randomness at early-boot on systems with
architectural RNGs.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: Thomas Graf <tg...@suug.ch>
Cc: Herbert Xu <herb...@gondor.apana.org.au>
---
 lib/rhashtable.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index d9e7274a04cd..a1eb7c947f46 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -235,7 +235,7 @@ static struct bucket_table *bucket_table_alloc(struct 
rhashtable *ht,
 
	INIT_LIST_HEAD(&tbl->walkers);
 
-   get_random_bytes(&tbl->hash_rnd, sizeof(tbl->hash_rnd));
+   tbl->hash_rnd = get_random_u32();
 
for (i = 0; i < nbuckets; i++)
INIT_RHT_NULLS_HEAD(tbl->buckets[i], ht, i);
-- 
2.13.0



[PATCH v5 10/13] net/neighbor: use get_random_u32 for 32-bit hash random

2017-06-07 Thread Jason A. Donenfeld
Using get_random_u32 here is faster, more fitting of the use case, and
just as cryptographically secure. It also has the benefit of providing
better randomness at early boot, which is when many of these structures
are assigned.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: David Miller <da...@davemloft.net>
---
 net/core/neighbour.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index d274f81fcc2c..9784133b0cdb 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -312,8 +312,7 @@ static struct neighbour *neigh_alloc(struct neigh_table 
*tbl, struct net_device
 
 static void neigh_get_hash_rnd(u32 *x)
 {
-   get_random_bytes(x, sizeof(*x));
-   *x |= 1;
+   *x = get_random_u32() | 1;
 }
 
 static struct neigh_hash_table *neigh_hash_alloc(unsigned int shift)
-- 
2.13.0
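
A note on the "| 1" that this one-liner preserves: the usual reason for
the idiom is that the random word feeds a multiplicative hash mix, and
forcing the low bit keeps the multiplier odd, hence nonzero and
invertible mod 2^32, so the mix cannot collapse the key space. A
standalone illustration:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t rnd = 0x6ac98d42;	/* pretend get_random_u32() output */
	uint32_t hash_rnd = rnd | 1;	/* always odd, never zero */

	/* Odd numbers are units mod 2^32, so x * hash_rnd is a
	 * bijection over 32-bit values. */
	printf("hash_rnd = %#x (odd: %u)\n", hash_rnd, hash_rnd & 1);
	return 0;
}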



[PATCH v5 01/13] random: invalidate batched entropy after crng init

2017-06-07 Thread Jason A. Donenfeld
It's possible that get_random_{u32,u64} is used before the crng has
initialized, in which case, its output might not be cryptographically
secure. For this problem, directly, this patch set is introducing the
*_wait variety of functions, but even with that, there's a subtle issue:
what happens to our batched entropy that was generated before
initialization. Prior to this commit, it'd stick around, supplying bad
numbers. After this commit, we force the entropy to be re-extracted
after each phase of the crng has initialized.

In order to avoid a race condition with the position counter, we
introduce a simple rwlock for this invalidation. Since it's only during
this awkward transition period, after things are all set up, we stop
using it, so that it doesn't have an impact on performance.

This should probably be backported to 4.11.

(Also: adding my copyright to the top. With the patch series from
January, this patch, and then the ones that come after, I think there's
a relevant amount of code in here to add my name to the top.)

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: Greg Kroah-Hartman <gre...@linuxfoundation.org>
---
 drivers/char/random.c | 37 +
 1 file changed, 37 insertions(+)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index a561f0c2f428..d35da1603e12 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1,6 +1,9 @@
 /*
  * random.c -- A strong random number generator
  *
+ * Copyright (C) 2017 Jason A. Donenfeld <ja...@zx2c4.com>. All
+ * Rights Reserved.
+ *
  * Copyright Matt Mackall <m...@selenic.com>, 2003, 2004, 2005
  *
  * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999.  All
@@ -762,6 +765,8 @@ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
 static struct crng_state **crng_node_pool __read_mostly;
 #endif
 
+static void invalidate_batched_entropy(void);
+
 static void crng_initialize(struct crng_state *crng)
 {
int i;
@@ -799,6 +804,7 @@ static int crng_fast_load(const char *cp, size_t len)
cp++; crng_init_cnt++; len--;
}
if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
+   invalidate_batched_entropy();
crng_init = 1;
	wake_up_interruptible(&crng_init_wait);
pr_notice("random: fast init done\n");
@@ -836,6 +842,7 @@ static void crng_reseed(struct crng_state *crng, struct 
entropy_store *r)
	memzero_explicit(&buf, sizeof(buf));
	crng->init_time = jiffies;
	if (crng == &primary_crng && crng_init < 2) {
+   invalidate_batched_entropy();
	crng_init = 2;
	process_random_ready_list();
	wake_up_interruptible(&crng_init_wait);
@@ -2023,6 +2030,7 @@ struct batched_entropy {
};
unsigned int position;
 };
+static rwlock_t batched_entropy_reset_lock = 
__RW_LOCK_UNLOCKED(batched_entropy_reset_lock);
 
 /*
  * Get a random word for internal kernel use only. The quality of the random
@@ -2033,6 +2041,8 @@ static DEFINE_PER_CPU(struct batched_entropy, 
batched_entropy_u64);
 u64 get_random_u64(void)
 {
u64 ret;
+   const bool use_lock = READ_ONCE(crng_init) < 2;
+   unsigned long flags = 0;
struct batched_entropy *batch;
 
 #if BITS_PER_LONG == 64
@@ -2045,11 +2055,15 @@ u64 get_random_u64(void)
 #endif
 
	batch = &get_cpu_var(batched_entropy_u64);
+   if (use_lock)
+   read_lock_irqsave(&batched_entropy_reset_lock, flags);
	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
	extract_crng((u8 *)batch->entropy_u64);
	batch->position = 0;
	}
	ret = batch->entropy_u64[batch->position++];
+   if (use_lock)
+   read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
put_cpu_var(batched_entropy_u64);
return ret;
 }
@@ -2059,22 +2073,45 @@ static DEFINE_PER_CPU(struct batched_entropy, 
batched_entropy_u32);
 u32 get_random_u32(void)
 {
u32 ret;
+   const bool use_lock = READ_ONCE(crng_init) < 2;
+   unsigned long flags = 0;
struct batched_entropy *batch;
 
	if (arch_get_random_int(&ret))
return ret;
 
	batch = &get_cpu_var(batched_entropy_u32);
+   if (use_lock)
+   read_lock_irqsave(&batched_entropy_reset_lock, flags);
	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
	extract_crng((u8 *)batch->entropy_u32);
	batch->position = 0;
	}
	ret = batch->entropy_u32[batch->position++];
+   if (use_lock)
+   read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
put_cpu_var(batched_entropy_u32);
return ret;
 }
 EXPORT_SYMBOL(get_random_u32);
 
+/* It's important to invalidate all potential batched entropy that might
+ * be stored before the crng is initialized, which we can do lazily by
+ * simply resetting the counter to zero so that it's re-extracted on the
+ * next usage. */
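
The archive cuts the diff off right at the invalidation helper. Its
shape follows from the commit message and the read lock added to the
getters: take the reset lock for writing and zero every per-CPU
position so the next get_random_u32/u64 call re-extracts. A
reconstruction-style sketch of that helper, not a verbatim copy of the
lost hunk:

static void invalidate_batched_entropy(void)
{
	int cpu;
	unsigned long flags;

	/* Writer side of the rwlock the getters read-lock while
	 * crng_init < 2. */
	write_lock_irqsave(&batched_entropy_reset_lock, flags);
	for_each_possible_cpu(cpu) {
		per_cpu_ptr(&batched_entropy_u32, cpu)->position = 0;
		per_cpu_ptr(&batched_entropy_u64, cpu)->position = 0;
	}
	write_unlock_irqrestore(&batched_entropy_reset_lock, flags);
}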


Re: [PATCH v3 03/13] random: invalidate batched entropy after crng init

2017-06-07 Thread Jason A. Donenfeld
Strange, not all compilers emit this warning. Fixing with:

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 12758db..5252690 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -2061,8 +2061,8 @@ static DEFINE_PER_CPU(struct batched_entropy,
batched_entropy_u64);
u64 get_random_u64(void)
{
   u64 ret;
-   bool use_lock = crng_init < 2;
-   unsigned long flags;
+   const bool use_lock = READ_ONCE(crng_init) < 2;
+   unsigned long flags = 0;
   struct batched_entropy *batch;

#if BITS_PER_LONG == 64
@@ -2099,8 +2099,8 @@ static DEFINE_PER_CPU(struct batched_entropy,
batched_entropy_u32);
u32 get_random_u32(void)
{
   u32 ret;
-   bool use_lock = crng_init < 2;
-   unsigned long flags;
+   const bool use_lock = READ_ONCE(crng_init) < 2;
+   unsigned long flags = 0;
   struct batched_entropy *batch;

   if (arch_get_random_int())

Const, because it's more correct. READ_ONCE to be certain that the
compiler doesn't try to paste the expression into both uses. And
finally flags=0 to shut up the compiler.

If anybody has a better way of approaching this, feel free to chime in.
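
Spelling out the READ_ONCE rationale: without it the compiler may
re-read crng_init at each use of use_lock, and the two reads can
straddle the transition to crng_init == 2, taking the lock on entry but
skipping the unlock (or the reverse). A compressed illustration of the
fix, with comments standing in for the surrounding random.c code:

	/* Latch the decision exactly once, before any locking. */
	const bool use_lock = READ_ONCE(crng_init) < 2;

	if (use_lock)
		read_lock_irqsave(&batched_entropy_reset_lock, flags);
	/* ... consume or refill the per-cpu batch ... */
	if (use_lock)
		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);

Without READ_ONCE, "bool use_lock = crng_init < 2;" permits the
compiler to rematerialize the comparison at both branches, which is
exactly the pasted-expression hazard described above.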


Re: [PATCH v4 00/13] Unseeded In-Kernel Randomness Fixes

2017-06-07 Thread Jason A. Donenfeld
Hi Ted,

Could I get your Signed-off-by on this patchset, so that somebody can
add it to their tree?

Thanks,
Jason


Re: [kernel-hardening] Re: [PATCH v3 04/13] crypto/rng: ensure that the RNG is ready before using

2017-06-06 Thread Jason A. Donenfeld
On Tue, Jun 6, 2017 at 7:57 PM, Stephan Müller  wrote:
> Finally, I am very surprised that I get hardly any answers on patches to
> random.c let alone that any changes to random.c will be applied at all.

FWIW, this is my biggest concern too. You seem willing to work on this
difficult problem. I'm simultaneously working on a different but
related issue. I hope we're both given the chance to contribute.


[PATCH v4 03/13] random: add get_random_{bytes,u32,u64,int,long,once}_wait family

2017-06-06 Thread Jason A. Donenfeld
These functions are simple convenience wrappers that call
wait_for_random_bytes before calling the respective get_random_*
function.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
---
 include/linux/net.h|  2 ++
 include/linux/once.h   |  2 ++
 include/linux/random.h | 25 +
 3 files changed, 29 insertions(+)

diff --git a/include/linux/net.h b/include/linux/net.h
index abcfa46a2bd9..dda2cc939a53 100644
--- a/include/linux/net.h
+++ b/include/linux/net.h
@@ -274,6 +274,8 @@ do {
\
 
 #define net_get_random_once(buf, nbytes)   \
get_random_once((buf), (nbytes))
+#define net_get_random_once_wait(buf, nbytes)  \
+   get_random_once_wait((buf), (nbytes))
 
 int kernel_sendmsg(struct socket *sock, struct msghdr *msg, struct kvec *vec,
   size_t num, size_t len);
diff --git a/include/linux/once.h b/include/linux/once.h
index 285f12cb40e6..9c98aaa87cbc 100644
--- a/include/linux/once.h
+++ b/include/linux/once.h
@@ -53,5 +53,7 @@ void __do_once_done(bool *done, struct static_key *once_key,
 
 #define get_random_once(buf, nbytes)\
DO_ONCE(get_random_bytes, (buf), (nbytes))
+#define get_random_once_wait(buf, nbytes)\
+   DO_ONCE(get_random_bytes_wait, (buf), (nbytes))
 
 #endif /* _LINUX_ONCE_H */
diff --git a/include/linux/random.h b/include/linux/random.h
index e29929347c95..4aecc339558d 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -58,6 +58,31 @@ static inline unsigned long get_random_long(void)
 #endif
 }
 
+/* Calls wait_for_random_bytes() and then calls get_random_bytes(buf, nbytes).
+ * Returns the result of the call to wait_for_random_bytes. */
+static inline int get_random_bytes_wait(void *buf, int nbytes)
+{
+   int ret = wait_for_random_bytes();
+   if (unlikely(ret))
+   return ret;
+   get_random_bytes(buf, nbytes);
+   return 0;
+}
+
+#define declare_get_random_var_wait(var) \
+   static inline int get_random_ ## var ## _wait(var *out) { \
+   int ret = wait_for_random_bytes(); \
+   if (unlikely(ret)) \
+   return ret; \
+   *out = get_random_ ## var(); \
+   return 0; \
+   }
+declare_get_random_var_wait(u32)
+declare_get_random_var_wait(u64)
+declare_get_random_var_wait(int)
+declare_get_random_var_wait(long)
+#undef declare_get_random_var_wait
+
 unsigned long randomize_page(unsigned long start, unsigned long range);
 
 u32 prandom_u32(void);
-- 
2.13.0
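
A usage sketch for the wrapper family above; the helper and the port
math are invented for illustration. Each wrapper returns 0 on success
or the wait error, so the typed value only lands in *out once the pool
is seeded:

#include <linux/random.h>

static int demo_pick_port(u16 *port)
{
	u32 val;
	int ret;

	ret = get_random_u32_wait(&val);  /* may sleep; -ERESTARTSYS on signal */
	if (unlikely(ret))
		return ret;

	*port = 1024 + (val % 64511);	  /* map into 1024..65534 */
	return 0;
}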



[PATCH v4 01/13] random: invalidate batched entropy after crng init

2017-06-06 Thread Jason A. Donenfeld
It's possible that get_random_{u32,u64} is used before the crng has
initialized, in which case, its output might not be cryptographically
secure. For this problem, directly, this patch set is introducing the
*_wait variety of functions, but even with that, there's a subtle issue:
what happens to our batched entropy that was generated before
initialization. Prior to this commit, it'd stick around, supplying bad
numbers. After this commit, we force the entropy to be re-extracted
after each phase of the crng has initialized.

In order to avoid a race condition with the position counter, we
introduce a simple rwlock for this invalidation. Since it's only during
this awkward transition period, after things are all set up, we stop
using it, so that it doesn't have an impact on performance.

This should probably be backported to 4.11.

(Also: adding my copyright to the top. With the patch series from
January, this patch, and then the ones that come after, I think there's
a relevant amount of code in here to add my name to the top.)

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: Greg Kroah-Hartman <gre...@linuxfoundation.org>
---
 drivers/char/random.c | 37 +
 1 file changed, 37 insertions(+)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 0ab024918907..2291e6224ed3 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1,6 +1,9 @@
 /*
  * random.c -- A strong random number generator
  *
+ * Copyright (C) 2017 Jason A. Donenfeld <ja...@zx2c4.com>. All
+ * Rights Reserved.
+ *
  * Copyright Matt Mackall <m...@selenic.com>, 2003, 2004, 2005
  *
  * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999.  All
@@ -762,6 +765,8 @@ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
 static struct crng_state **crng_node_pool __read_mostly;
 #endif
 
+static void invalidate_batched_entropy(void);
+
 static void crng_initialize(struct crng_state *crng)
 {
int i;
@@ -799,6 +804,7 @@ static int crng_fast_load(const char *cp, size_t len)
cp++; crng_init_cnt++; len--;
}
if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
+   invalidate_batched_entropy();
crng_init = 1;
	wake_up_interruptible(&crng_init_wait);
pr_notice("random: fast init done\n");
@@ -836,6 +842,7 @@ static void crng_reseed(struct crng_state *crng, struct 
entropy_store *r)
	memzero_explicit(&buf, sizeof(buf));
	crng->init_time = jiffies;
	if (crng == &primary_crng && crng_init < 2) {
+   invalidate_batched_entropy();
	crng_init = 2;
	process_random_ready_list();
	wake_up_interruptible(&crng_init_wait);
@@ -2019,6 +2026,7 @@ struct batched_entropy {
};
unsigned int position;
 };
+static rwlock_t batched_entropy_reset_lock = 
__RW_LOCK_UNLOCKED(batched_entropy_reset_lock);
 
 /*
  * Get a random word for internal kernel use only. The quality of the random
@@ -2029,6 +2037,8 @@ static DEFINE_PER_CPU(struct batched_entropy, 
batched_entropy_u64);
 u64 get_random_u64(void)
 {
u64 ret;
+   bool use_lock = crng_init < 2;
+   unsigned long flags;
struct batched_entropy *batch;
 
 #if BITS_PER_LONG == 64
@@ -2041,11 +2051,15 @@ u64 get_random_u64(void)
 #endif
 
	batch = &get_cpu_var(batched_entropy_u64);
+   if (use_lock)
+   read_lock_irqsave(&batched_entropy_reset_lock, flags);
	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
	extract_crng((u8 *)batch->entropy_u64);
	batch->position = 0;
	}
	ret = batch->entropy_u64[batch->position++];
+   if (use_lock)
+   read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
put_cpu_var(batched_entropy_u64);
return ret;
 }
@@ -2055,22 +2069,45 @@ static DEFINE_PER_CPU(struct batched_entropy, 
batched_entropy_u32);
 u32 get_random_u32(void)
 {
u32 ret;
+   bool use_lock = crng_init < 2;
+   unsigned long flags;
struct batched_entropy *batch;
 
	if (arch_get_random_int(&ret))
return ret;
 
	batch = &get_cpu_var(batched_entropy_u32);
+   if (use_lock)
+   read_lock_irqsave(&batched_entropy_reset_lock, flags);
	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
	extract_crng((u8 *)batch->entropy_u32);
	batch->position = 0;
	}
	ret = batch->entropy_u32[batch->position++];
+   if (use_lock)
+   read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
put_cpu_var(batched_entropy_u32);
return ret;
 }
 EXPORT_SYMBOL(get_random_u32);
 
+/* It's important to invalidate all potential batched entropy that might
+ * be stored before the crng is initialized, which we can do lazily by
+ * simply resetting the counter to zero so that it's re-extracted on the
+ * next usage. */

[PATCH v4 05/13] crypto/rng: ensure that the RNG is ready before using

2017-06-06 Thread Jason A. Donenfeld
Otherwise, we might be seeding the RNG using bad randomness, which is
dangerous. The one use of this function from within the kernel -- not
from userspace -- is being removed (keys/big_key), so that call site
isn't relevant in assessing this.

Cc: Herbert Xu <herb...@gondor.apana.org.au>
Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
---
 crypto/rng.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/crypto/rng.c b/crypto/rng.c
index f46dac5288b9..e042437e64b4 100644
--- a/crypto/rng.c
+++ b/crypto/rng.c
@@ -48,12 +48,14 @@ int crypto_rng_reset(struct crypto_rng *tfm, const u8 
*seed, unsigned int slen)
if (!buf)
return -ENOMEM;
 
-   get_random_bytes(buf, slen);
+   err = get_random_bytes_wait(buf, slen);
+   if (err)
+   goto out;
seed = buf;
}
 
err = crypto_rng_alg(tfm)->seed(tfm, seed, slen);
-
+out:
kzfree(buf);
return err;
 }
-- 
2.13.0



[PATCH v4 04/13] security/keys: ensure RNG is seeded before use

2017-06-06 Thread Jason A. Donenfeld
Otherwise, we might use bad random numbers which, particularly in the
case of IV generation, could be quite bad. It makes sense to use the
synchronous API here, because we're always in process context (as the
code is littered with GFP_KERNEL and the like). However, we can't change
to using a blocking function in key serial allocation, because this will
block booting in some configurations, so here we use the more
appropriate get_random_u32, which will use RDRAND if available.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: David Howells <dhowe...@redhat.com>
Cc: Mimi Zohar <zo...@linux.vnet.ibm.com>
Cc: David Safford <saff...@us.ibm.com>
---
 security/keys/encrypted-keys/encrypted.c |  8 +---
 security/keys/key.c  | 16 
 2 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/security/keys/encrypted-keys/encrypted.c 
b/security/keys/encrypted-keys/encrypted.c
index 0010955d7876..d51a28fc5cd5 100644
--- a/security/keys/encrypted-keys/encrypted.c
+++ b/security/keys/encrypted-keys/encrypted.c
@@ -777,10 +777,12 @@ static int encrypted_init(struct encrypted_key_payload 
*epayload,
 
__ekey_init(epayload, format, master_desc, datalen);
if (!hex_encoded_iv) {
-   get_random_bytes(epayload->iv, ivsize);
+   ret = get_random_bytes_wait(epayload->iv, ivsize);
+   if (unlikely(ret))
+   return ret;
 
-   get_random_bytes(epayload->decrypted_data,
-epayload->decrypted_datalen);
+   ret = get_random_bytes_wait(epayload->decrypted_data,
+   epayload->decrypted_datalen);
} else
ret = encrypted_key_decrypt(epayload, format, hex_encoded_iv);
return ret;
diff --git a/security/keys/key.c b/security/keys/key.c
index 455c04d80bbb..b72078e532f2 100644
--- a/security/keys/key.c
+++ b/security/keys/key.c
@@ -134,17 +134,15 @@ void key_user_put(struct key_user *user)
  * Allocate a serial number for a key.  These are assigned randomly to avoid
  * security issues through covert channel problems.
  */
-static inline void key_alloc_serial(struct key *key)
+static inline int key_alloc_serial(struct key *key)
 {
struct rb_node *parent, **p;
struct key *xkey;
 
-   /* propose a random serial number and look for a hole for it in the
-* serial number tree */
+   /* propose a non-negative random serial number and look for a hole for
+* it in the serial number tree */
do {
-   get_random_bytes(&key->serial, sizeof(key->serial));
-
-   key->serial >>= 1; /* negative numbers are not permitted */
+   key->serial = get_random_u32() >> 1;
} while (key->serial < 3);
 
	spin_lock(&key_serial_lock);
@@ -170,7 +168,7 @@ static inline void key_alloc_serial(struct key *key)
	rb_insert_color(&key->serial_node, &key_serial_tree);
 
	spin_unlock(&key_serial_lock);
-   return;
+   return 0;
 
/* we found a key with the proposed serial number - walk the tree from
 * that point looking for the next unused serial number */
@@ -314,7 +312,9 @@ struct key *key_alloc(struct key_type *type, const char 
*desc,
 
/* publish the key by giving it a serial number */
atomic_inc(&user->nkeys);
-   key_alloc_serial(key);
+   ret = key_alloc_serial(key);
+   if (ret < 0)
+   goto security_error;
 
 error:
return key;
-- 
2.13.0



[PATCH v4 08/13] cifs: use get_random_u32 for 32-bit lock random

2017-06-06 Thread Jason A. Donenfeld
Using get_random_u32 here is faster, more fitting of the use case, and
just as cryptographically secure. It also has the benefit of providing
better randomness at early boot, which is sometimes when this is used.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: Steve French <sfre...@samba.org>
---
 fs/cifs/cifsfs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
index 9a1667e0e8d6..fe0c8dcc7dc7 100644
--- a/fs/cifs/cifsfs.c
+++ b/fs/cifs/cifsfs.c
@@ -1359,7 +1359,7 @@ init_cifs(void)
spin_lock_init(&cifs_tcp_ses_lock);
spin_lock_init(&GlobalMid_Lock);
 
-   get_random_bytes(&cifs_lock_secret, sizeof(cifs_lock_secret));
+   cifs_lock_secret = get_random_u32();
 
if (cifs_max_pending < 2) {
cifs_max_pending = 2;
-- 
2.13.0



[PATCH v4 10/13] net/neighbor: use get_random_u32 for 32-bit hash random

2017-06-06 Thread Jason A. Donenfeld
Using get_random_u32 here is faster, more fitting of the use case, and
just as cryptographically secure. It also has the benefit of providing
better randomness at early boot, which is when many of these structures
are assigned.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: David Miller <da...@davemloft.net>
---
 net/core/neighbour.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index d274f81fcc2c..9784133b0cdb 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -312,8 +312,7 @@ static struct neighbour *neigh_alloc(struct neigh_table 
*tbl, struct net_device
 
 static void neigh_get_hash_rnd(u32 *x)
 {
-   get_random_bytes(x, sizeof(*x));
-   *x |= 1;
+   *x = get_random_u32() | 1;
 }
 
 static struct neigh_hash_table *neigh_hash_alloc(unsigned int shift)
-- 
2.13.0



[PATCH v4 07/13] ceph: ensure RNG is seeded before using

2017-06-06 Thread Jason A. Donenfeld
Ceph uses the RNG for various nonce generations, and it shouldn't accept
using bad randomness. So, we wait for the RNG to be properly seeded. We
do this by calling wait_for_random_bytes() in a function that is
certainly called in process context, early on, so that all subsequent
calls to get_random_bytes are necessarily acceptable.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: Ilya Dryomov <idryo...@gmail.com>
Cc: "Yan, Zheng" <z...@redhat.com>
Cc: Sage Weil <s...@redhat.com>
---
 net/ceph/ceph_common.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/net/ceph/ceph_common.c b/net/ceph/ceph_common.c
index 4fd02831beed..26ab58665f77 100644
--- a/net/ceph/ceph_common.c
+++ b/net/ceph/ceph_common.c
@@ -611,7 +611,11 @@ struct ceph_client *ceph_create_client(struct ceph_options 
*opt, void *private)
 {
struct ceph_client *client;
struct ceph_entity_addr *myaddr = NULL;
-   int err = -ENOMEM;
+   int err;
+
+   err = wait_for_random_bytes();
+   if (err < 0)
+   return ERR_PTR(err);
 
client = kzalloc(sizeof(*client), GFP_KERNEL);
if (client == NULL)
-- 
2.13.0



[PATCH v4 09/13] rhashtable: use get_random_u32 for hash_rnd

2017-06-06 Thread Jason A. Donenfeld
This is much faster and just as secure. It also has the added benefit of
probably returning better randomness at early-boot on systems with
architectural RNGs.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: Thomas Graf <tg...@suug.ch>
Cc: Herbert Xu <herb...@gondor.apana.org.au>
---
 lib/rhashtable.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index d9e7274a04cd..a1eb7c947f46 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -235,7 +235,7 @@ static struct bucket_table *bucket_table_alloc(struct 
rhashtable *ht,
 
INIT_LIST_HEAD(&tbl->walkers);
 
-   get_random_bytes(&tbl->hash_rnd, sizeof(tbl->hash_rnd));
+   tbl->hash_rnd = get_random_u32();
 
for (i = 0; i < nbuckets; i++)
INIT_RHT_NULLS_HEAD(tbl->buckets[i], ht, i);
-- 
2.13.0



[PATCH v4 11/13] net/route: use get_random_int for random counter

2017-06-06 Thread Jason A. Donenfeld
Using get_random_int here is faster, more fitting of the use case, and
just as cryptographically secure. It also has the benefit of providing
better randomness at early boot, which is when many of these structures
are assigned.

Also, semantically, it's not really proper to have been assigning an
atomic_t in this way before, even if in practice it works fine.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: David Miller <da...@davemloft.net>
---
 net/ipv4/route.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index 655d9eebe43e..11e001a42094 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -2936,8 +2936,7 @@ static __net_init int rt_genid_init(struct net *net)
 {
atomic_set(&net->ipv4.rt_genid, 0);
atomic_set(&net->fnhe_genid, 0);
-   get_random_bytes(&net->ipv4.dev_addr_genid,
-sizeof(net->ipv4.dev_addr_genid));
+   atomic_set(&net->ipv4.dev_addr_genid, get_random_int());
return 0;
 }
 
-- 
2.13.0



[PATCH v4 12/13] bluetooth/smp: ensure RNG is properly seeded before ECDH use

2017-06-06 Thread Jason A. Donenfeld
This protocol uses lots of complex cryptography that relies on securely
generated random numbers. Thus, it's important that the RNG is actually
seeded before use. Fortunately, it appears we're always operating in
process context (there are many GFP_KERNEL allocations and other
sleeping operations), and so we can simply demand that the RNG is seeded
before we use it.

We take two strategies in this commit. The first is for the library code
that's called from other modules like hci or mgmt: here we just change
the call to get_random_bytes_wait, and return the result of the wait to
the caller, along with the other error codes of those functions like
usual. Then there's the SMP protocol handler itself, which makes many
many many calls to get_random_bytes during different phases. For this,
rather than have to change all the calls to get_random_bytes_wait and
propagate the error result, it's actually enough to just put a single
call to wait_for_random_bytes() at the beginning of the handler, to
ensure that all the subsequent invocations are safe, without having to
actually change them. Likewise, for the random address changing
function, we'd rather know early on in the function whether the RNG
initialization has been interrupted, rather than later, so we call
wait_for_random_bytes() at the top, so that later on the call to
get_random_bytes() is acceptable.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: Marcel Holtmann <mar...@holtmann.org>
Cc: Gustavo Padovan <gust...@padovan.org>
Cc: Johan Hedberg <johan.hedb...@gmail.com>
---
 net/bluetooth/hci_request.c |  6 ++++++
 net/bluetooth/smp.c         | 18 ++++++++++++++----
 2 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
index b5faff458d8b..4078057c4fd7 100644
--- a/net/bluetooth/hci_request.c
+++ b/net/bluetooth/hci_request.c
@@ -1406,6 +1406,12 @@ int hci_update_random_address(struct hci_request *req, 
bool require_privacy,
struct hci_dev *hdev = req->hdev;
int err;
 
+   if (require_privacy) {
+   err = wait_for_random_bytes();
+   if (unlikely(err))
+   return err;
+   }
+
/* If privacy is enabled use a resolvable private address. If
 * current RPA has expired or there is something else than
 * the current RPA in use, then generate a new one.
diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
index 14585edc9439..5fef1bc96f42 100644
--- a/net/bluetooth/smp.c
+++ b/net/bluetooth/smp.c
@@ -537,7 +537,9 @@ int smp_generate_rpa(struct hci_dev *hdev, const u8 
irk[16], bdaddr_t *rpa)
 
smp = chan->data;
 
-   get_random_bytes(&rpa->b[3], 3);
+   err = get_random_bytes_wait(&rpa->b[3], 3);
+   if (unlikely(err))
+   return err;
 
rpa->b[5] &= 0x3f;  /* Clear two most significant bits */
rpa->b[5] |= 0x40;  /* Set second most significant bit */
@@ -570,7 +572,9 @@ int smp_generate_oob(struct hci_dev *hdev, u8 hash[16], u8 
rand[16])
} else {
while (true) {
/* Seed private key with random number */
-   get_random_bytes(smp->local_sk, 32);
+   err = get_random_bytes_wait(smp->local_sk, 32);
+   if (unlikely(err))
+   return err;
 
/* Generate local key pair for Secure Connections */
if (!generate_ecdh_keys(smp->local_pk, smp->local_sk))
@@ -589,7 +593,9 @@ int smp_generate_oob(struct hci_dev *hdev, u8 hash[16], u8 
rand[16])
SMP_DBG("OOB Public Key Y: %32phN", smp->local_pk + 32);
SMP_DBG("OOB Private Key:  %32phN", smp->local_sk);
 
-   get_random_bytes(smp->local_rand, 16);
+   err = get_random_bytes_wait(smp->local_rand, 16);
+   if (unlikely(err))
+   return err;
 
err = smp_f4(smp->tfm_cmac, smp->local_pk, smp->local_pk,
 smp->local_rand, 0, hash);
@@ -2831,7 +2837,11 @@ static int smp_sig_channel(struct l2cap_chan *chan, 
struct sk_buff *skb)
struct hci_conn *hcon = conn->hcon;
struct smp_chan *smp;
__u8 code, reason;
-   int err = 0;
+   int err;
+
+   err = wait_for_random_bytes();
+   if (unlikely(err))
+   return err;
 
if (skb->len < 1)
return -EILSEQ;
-- 
2.13.0



[PATCH v4 13/13] random: warn when kernel uses unseeded randomness

2017-06-06 Thread Jason A. Donenfeld
This enables an important dmesg notification about when drivers have
used the crng without it being seeded first. Prior, these errors would
occur silently, and so there hasn't been a great way of diagnosing these
types of bugs for obscure setups. By adding this as a config option, we
can leave it on by default, so that we learn where these issues happen,
in the field, while still allowing some people to turn it off, if they
really know what they're doing and do not want the log entries.

However, we don't leave it on _completely_ by default. An earlier version
of this patch simply had `default y`. I'd really love that, but it turns
out, this problem with unseeded randomness being used is really quite
present and is going to take a long time to fix. Thus, as a compromise
between log-messages-for-all and nobody-knows, this is `default y`,
except it is also `depends on DEBUG_KERNEL`. This will ensure that the
curious see the messages while others don't have to.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
---
 drivers/char/random.c | 15 +++++++++++++--
 lib/Kconfig.debug     | 16 ++++++++++++++++
 2 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 36cdb2406610..33a9ec86d101 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -288,7 +288,6 @@
 #define SEC_XFER_SIZE  512
 #define EXTRACT_SIZE   10
 
-#define DEBUG_RANDOM_BOOT 0
 
 #define LONGS(x) (((x) + sizeof(unsigned long) - 1)/sizeof(unsigned long))
 
@@ -1477,7 +1476,7 @@ void get_random_bytes(void *buf, int nbytes)
 {
__u8 tmp[CHACHA20_BLOCK_SIZE];
 
-#if DEBUG_RANDOM_BOOT > 0
+#ifdef CONFIG_WARN_UNSEEDED_RANDOM
if (!crng_ready())
printk(KERN_NOTICE "random: %pF get_random_bytes called "
   "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
@@ -2071,6 +2070,12 @@ u64 get_random_u64(void)
return ret;
 #endif
 
+#ifdef CONFIG_WARN_UNSEEDED_RANDOM
+   if (!crng_ready())
+   printk(KERN_NOTICE "random: %pF get_random_u64 called "
+  "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
+#endif
+
batch = &get_cpu_var(batched_entropy_u64);
if (use_lock)
read_lock_irqsave(&batched_entropy_reset_lock, flags);
@@ -2097,6 +2102,12 @@ u32 get_random_u32(void)
if (arch_get_random_int(&ret))
return ret;
 
+#ifdef CONFIG_WARN_UNSEEDED_RANDOM
+   if (!crng_ready())
+   printk(KERN_NOTICE "random: %pF get_random_u32 called "
+  "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
+#endif
+
batch = &get_cpu_var(batched_entropy_u32);
if (use_lock)
read_lock_irqsave(&batched_entropy_reset_lock, flags);
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index e4587ebe52c7..c4159605bfbf 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1209,6 +1209,22 @@ config STACKTRACE
  It is also used by various kernel debugging features that require
  stack trace generation.
 
+config WARN_UNSEEDED_RANDOM
+   bool "Warn when kernel uses unseeded randomness"
+   default y
+   depends on DEBUG_KERNEL
+   help
+ Some parts of the kernel contain bugs relating to their use of
+ cryptographically secure random numbers before it's actually possible
+ to generate those numbers securely. This setting ensures that these
+ flaws don't go unnoticed, by enabling a message, should this ever
+ occur. This will allow people with obscure setups to know when things
+ are going wrong, so that they might contact developers about fixing
+ it.
+
+ Say Y here, unless you simply do not care about using unseeded
+ randomness and do not want a potential warning message in your logs.
+
 config DEBUG_KOBJECT
bool "kobject debugging"
depends on DEBUG_KERNEL
-- 
2.13.0



[PATCH v4 06/13] iscsi: ensure RNG is seeded before use

2017-06-06 Thread Jason A. Donenfeld
It's not safe to use weak random data here, especially for the challenge
response randomness. Since we're always in process context, it's safe to
simply wait until we have enough randomness to carry out the
authentication correctly.

While we're at it, we clean up a small memleak during an error
condition.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: "Nicholas A. Bellinger" <n...@linux-iscsi.org>
Cc: Lee Duncan <ldun...@suse.com>
Cc: Chris Leech <cle...@redhat.com>
---
 drivers/target/iscsi/iscsi_target_auth.c  | 14 +++++++++++---
 drivers/target/iscsi/iscsi_target_login.c | 22 ++++++++++++++--------
 2 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/drivers/target/iscsi/iscsi_target_auth.c 
b/drivers/target/iscsi/iscsi_target_auth.c
index 903b667f8e01..f9bc8ec6fb6b 100644
--- a/drivers/target/iscsi/iscsi_target_auth.c
+++ b/drivers/target/iscsi/iscsi_target_auth.c
@@ -47,18 +47,21 @@ static void chap_binaryhex_to_asciihex(char *dst, char 
*src, int src_len)
}
 }
 
-static void chap_gen_challenge(
+static int chap_gen_challenge(
struct iscsi_conn *conn,
int caller,
char *c_str,
unsigned int *c_len)
 {
+   int ret;
unsigned char challenge_asciihex[CHAP_CHALLENGE_LENGTH * 2 + 1];
struct iscsi_chap *chap = conn->auth_protocol;
 
memset(challenge_asciihex, 0, CHAP_CHALLENGE_LENGTH * 2 + 1);
 
-   get_random_bytes(chap->challenge, CHAP_CHALLENGE_LENGTH);
+   ret = get_random_bytes_wait(chap->challenge, CHAP_CHALLENGE_LENGTH);
+   if (unlikely(ret))
+   return ret;
chap_binaryhex_to_asciihex(challenge_asciihex, chap->challenge,
CHAP_CHALLENGE_LENGTH);
/*
@@ -69,6 +72,7 @@ static void chap_gen_challenge(
 
pr_debug("[%s] Sending CHAP_C=0x%s\n\n", (caller) ? "server" : "client",
challenge_asciihex);
+   return 0;
 }
 
 static int chap_check_algorithm(const char *a_str)
@@ -143,6 +147,7 @@ static struct iscsi_chap *chap_server_open(
case CHAP_DIGEST_UNKNOWN:
default:
pr_err("Unsupported CHAP_A value\n");
+   kfree(conn->auth_protocol);
return NULL;
}
 
@@ -156,7 +161,10 @@ static struct iscsi_chap *chap_server_open(
/*
 * Generate Challenge.
 */
-   chap_gen_challenge(conn, 1, aic_str, aic_len);
+   if (chap_gen_challenge(conn, 1, aic_str, aic_len) < 0) {
+   kfree(conn->auth_protocol);
+   return NULL;
+   }
 
return chap;
 }
diff --git a/drivers/target/iscsi/iscsi_target_login.c 
b/drivers/target/iscsi/iscsi_target_login.c
index 66238477137b..5ef028c11738 100644
--- a/drivers/target/iscsi/iscsi_target_login.c
+++ b/drivers/target/iscsi/iscsi_target_login.c
@@ -245,22 +245,26 @@ int iscsi_check_for_session_reinstatement(struct 
iscsi_conn *conn)
return 0;
 }
 
-static void iscsi_login_set_conn_values(
+static int iscsi_login_set_conn_values(
struct iscsi_session *sess,
struct iscsi_conn *conn,
__be16 cid)
 {
+   int ret;
conn->sess  = sess;
conn->cid   = be16_to_cpu(cid);
/*
 * Generate a random Status sequence number (statsn) for the new
 * iSCSI connection.
 */
-   get_random_bytes(&conn->stat_sn, sizeof(u32));
+   ret = get_random_bytes_wait(&conn->stat_sn, sizeof(u32));
+   if (unlikely(ret))
+   return ret;
 
mutex_lock(&auth_id_lock);
conn->auth_id   = iscsit_global->auth_id++;
mutex_unlock(&auth_id_lock);
+   return 0;
 }
 
 __printf(2, 3) int iscsi_change_param_sprintf(
@@ -306,7 +310,11 @@ static int iscsi_login_zero_tsih_s1(
return -ENOMEM;
}
 
-   iscsi_login_set_conn_values(sess, conn, pdu->cid);
+   ret = iscsi_login_set_conn_values(sess, conn, pdu->cid);
+   if (unlikely(ret)) {
+   kfree(sess);
+   return ret;
+   }
sess->init_task_tag = pdu->itt;
memcpy(&sess->isid, pdu->isid, 6);
sess->exp_cmd_sn= be32_to_cpu(pdu->cmdsn);
@@ -497,8 +505,7 @@ static int iscsi_login_non_zero_tsih_s1(
 {
struct iscsi_login_req *pdu = (struct iscsi_login_req *)buf;
 
-   iscsi_login_set_conn_values(NULL, conn, pdu->cid);
-   return 0;
+   return iscsi_login_set_conn_values(NULL, conn, pdu->cid);
 }
 
 /*
@@ -554,9 +561,8 @@ static int iscsi_login_non_zero_tsih_s2(
atomic_set(&sess->session_continuation, 1);
spin_unlock_bh(&sess->conn_lock);
 
-   iscsi_login_set_conn_values(sess, conn, pdu->cid);
-
-   if (iscsi_copy_param_list(&conn->param_list,
+   if (iscsi_login_set_conn_values(sess, conn, pdu->cid) < 0 ||
+   

[PATCH v4 02/13] random: add synchronous API for the urandom pool

2017-06-06 Thread Jason A. Donenfeld
This enables users of get_random_{bytes,u32,u64,int,long} to wait until
the pool is ready before using this function, in case they actually want
to have reliable randomness.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
---
 drivers/char/random.c  | 41 +++++++++++++++++++++++++++++++++----------
 include/linux/random.h |  1 +
 2 files changed, 32 insertions(+), 10 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 2291e6224ed3..36cdb2406610 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -851,11 +851,6 @@ static void crng_reseed(struct crng_state *crng, struct 
entropy_store *r)
spin_unlock_irqrestore(&primary_crng.lock, flags);
 }
 
-static inline void crng_wait_ready(void)
-{
-   wait_event_interruptible(crng_init_wait, crng_ready());
-}
-
 static void _extract_crng(struct crng_state *crng,
  __u8 out[CHACHA20_BLOCK_SIZE])
 {
@@ -1473,7 +1468,10 @@ static ssize_t extract_entropy_user(struct entropy_store 
*r, void __user *buf,
  * number of good random numbers, suitable for key generation, seeding
  * TCP sequence numbers, etc.  It does not rely on the hardware random
  * number generator.  For random bytes direct from the hardware RNG
- * (when available), use get_random_bytes_arch().
+ * (when available), use get_random_bytes_arch(). In order to ensure
+ * that the randomness provided by this function is okay, the function
+ * wait_for_random_bytes() should be called and return 0 at least once
+ * at any point prior.
  */
 void get_random_bytes(void *buf, int nbytes)
 {
@@ -1503,6 +1501,24 @@ void get_random_bytes(void *buf, int nbytes)
 EXPORT_SYMBOL(get_random_bytes);
 
 /*
+ * Wait for the urandom pool to be seeded and thus guaranteed to supply
+ * cryptographically secure random numbers. This applies to: the /dev/urandom
+ * device, the get_random_bytes function, and the get_random_{u32,u64,int,long}
+ * family of functions. Using any of these functions without first calling
+ * this function forfeits the guarantee of security.
+ *
+ * Returns: 0 if the urandom pool has been seeded.
+ *  -ERESTARTSYS if the function was interrupted by a signal.
+ */
+int wait_for_random_bytes(void)
+{
+   if (likely(crng_ready()))
+   return 0;
+   return wait_event_interruptible(crng_init_wait, crng_ready());
+}
+EXPORT_SYMBOL(wait_for_random_bytes);
+
+/*
  * Add a callback function that will be invoked when the nonblocking
  * pool is initialised.
  *
@@ -1856,6 +1872,8 @@ const struct file_operations urandom_fops = {
 SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count,
unsigned int, flags)
 {
+   int ret;
+
if (flags & ~(GRND_NONBLOCK|GRND_RANDOM))
return -EINVAL;
 
@@ -1868,9 +1886,9 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, 
count,
if (!crng_ready()) {
if (flags & GRND_NONBLOCK)
return -EAGAIN;
-   crng_wait_ready();
-   if (signal_pending(current))
-   return -ERESTARTSYS;
+   ret = wait_for_random_bytes();
+   if (unlikely(ret))
+   return ret;
}
return urandom_read(NULL, buf, count, NULL);
 }
@@ -2031,7 +2049,10 @@ static rwlock_t batched_entropy_reset_lock = 
__RW_LOCK_UNLOCKED(batched_entropy_
 /*
  * Get a random word for internal kernel use only. The quality of the random
  * number is either as good as RDRAND or as good as /dev/urandom, with the
- * goal of being quite fast and not depleting entropy.
+ * goal of being quite fast and not depleting entropy. In order to ensure
+ * that the randomness provided by this function is okay, the function
+ * wait_for_random_bytes() should be called and return 0 at least once
+ * at any point prior.
  */
 static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
 u64 get_random_u64(void)
diff --git a/include/linux/random.h b/include/linux/random.h
index ed5c3838780d..e29929347c95 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -34,6 +34,7 @@ extern void add_input_randomness(unsigned int type, unsigned 
int code,
 extern void add_interrupt_randomness(int irq, int irq_flags) __latent_entropy;
 
 extern void get_random_bytes(void *buf, int nbytes);
+extern int wait_for_random_bytes(void);
 extern int add_random_ready_callback(struct random_ready_callback *rdy);
 extern void del_random_ready_callback(struct random_ready_callback *rdy);
 extern void get_random_bytes_arch(void *buf, int nbytes);
-- 
2.13.0



[PATCH v4 00/13] Unseeded In-Kernel Randomness Fixes

2017-06-06 Thread Jason A. Donenfeld
As discussed in [1], there is a problem with get_random_bytes being
used before the RNG has actually been seeded. The solution for fixing
this appears to be multi-pronged. One of those prongs involves adding
a simple blocking API so that modules that use the RNG in process
context can just sleep (in an interruptable manner) until the RNG is
ready to be used. This winds up being a very useful API that covers
a few use cases, several of which are included in this patch set.

[1] http://www.openwall.com/lists/kernel-hardening/2017/06/02/2
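
As a quick illustration of the intended usage pattern (the function
and buffer names below are hypothetical, invented for this example
only):

#include <linux/random.h>

static u8 example_secret[32]; /* hypothetical key buffer */

static int example_gen_secret(void)
{
        /* Process context only: sleeps until the urandom pool is
         * seeded, or returns -ERESTARTSYS if interrupted by a
         * signal. Afterwards, get_random_bytes() output is safe. */
        return get_random_bytes_wait(example_secret,
                                     sizeof(example_secret));
}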

Changes v3->v4:
  - Mark one patch for stable
  - Operation ordering on batched entropy invalidation
  - Separate out big_key into its own patch to the keys mailing list
  - General cleanups

Jason A. Donenfeld (13):
  random: invalidate batched entropy after crng init
  random: add synchronous API for the urandom pool
  random: add get_random_{bytes,u32,u64,int,long,once}_wait family
  security/keys: ensure RNG is seeded before use
  crypto/rng: ensure that the RNG is ready before using
  iscsi: ensure RNG is seeded before use
  ceph: ensure RNG is seeded before using
  cifs: use get_random_u32 for 32-bit lock random
  rhashtable: use get_random_u32 for hash_rnd
  net/neighbor: use get_random_u32 for 32-bit hash random
  net/route: use get_random_int for random counter
  bluetooth/smp: ensure RNG is properly seeded before ECDH use
  random: warn when kernel uses unseeded randomness

 crypto/rng.c  |  6 +-
 drivers/char/random.c | 93 +++
 drivers/target/iscsi/iscsi_target_auth.c  | 14 -
 drivers/target/iscsi/iscsi_target_login.c | 22 +---
 fs/cifs/cifsfs.c  |  2 +-
 include/linux/net.h   |  2 +
 include/linux/once.h  |  2 +
 include/linux/random.h| 26 +
 lib/Kconfig.debug | 16 ++
 lib/rhashtable.c  |  2 +-
 net/bluetooth/hci_request.c   |  6 ++
 net/bluetooth/smp.c   | 18 --
 net/ceph/ceph_common.c|  6 +-
 net/core/neighbour.c  |  3 +-
 net/ipv4/route.c  |  3 +-
 security/keys/encrypted-keys/encrypted.c  |  8 ++-
 security/keys/key.c   | 16 +++---
 17 files changed, 198 insertions(+), 47 deletions(-)

-- 
2.13.0



Re: [kernel-hardening] Re: [PATCH v3 04/13] crypto/rng: ensure that the RNG is ready before using

2017-06-06 Thread Jason A. Donenfeld
On Tue, Jun 6, 2017 at 7:26 PM, Eric Biggers  wrote:
> I agree that the use of ECB mode in big_key is broken, and thanks for trying 
> to
> fix it!  I think using GCM is good, but please leave a very conspicuous 
> comment
> where the nonce is being set to 0, noting that it's safe only because a unique
> key is used to encrypt every big_key *and* the big_keys are not updatable (via
> an .update method in the key_type), resulting in each GCM key being used for
> only a single encryption.

Good idea. I'll make a large comment about this.
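
Roughly along these lines, as a sketch of what that comment might say
(exact wording hypothetical):

/*
 * An all-zero GCM nonce is safe here ONLY because a unique random
 * key is generated to encrypt every big_key, and because big_keys
 * are not updatable (no .update method in the key_type), so each
 * GCM key performs exactly one encryption. Do not copy this
 * pattern anywhere else.
 */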

>
> Also, I think you should send this to the keyrings mailing list and maintainer
> so it can be discussed and merged separately from your RNG changes.

Yea, that seems like a good idea. I'll separate things out right now.


Re: [kernel-hardening] Re: [PATCH v3 04/13] crypto/rng: ensure that the RNG is ready before using

2017-06-06 Thread Jason A. Donenfeld
On Tue, Jun 6, 2017 at 7:03 PM, Theodore Ts'o  wrote:
> So it's not clear what you mean by Stephan's work.

I just meant that there's a guy out there who seems really motivated
to work on this stuff in detail, but hasn't seen too much love, AFAIK.
I'm sure there's an interesting technical discussion to be had about
his contributions.

> The reality though is that Linux is a volunteer effort

Yep! And here I am volunteering, writing some code, in my free time,
just for willies. I hope you'll be a kind, helpful, and supportive
maintainer who welcomes contributions and discussion.

> So it may in
> fact be _rational_ for people who are working on hardening the kernel
> to focus on other areas.
> improve things on all fronts, not just the sexy ones.

I don't want people to use some aspects of a module I'm writing before
get_random_bytes() will return good randomness. You made some point
about what's sexy and what isn't and what is rational for people to
work on, but I think I missed the thrust of it. I'm working on this
particular problem now with get_random_bytes, as something motivated
by simple considerations, not complex ulterior motives.


But anyway, moving on...


> I think this is a soluble problem, but it may be rather tricky.  For
> example, it may be that for a certain class of init calls, even though
> they are in subsystems that are compiled into the kernel, those init
> calls perhaps could be deferred so they are running in parallel with
> the init scripts.  (Or maybe we could just require that certain kernel
> modules can *only* be compiled as modules if they use rng_init ---
> although that may get annoying for those of us who like being able to
> build custom configured monolithic kernels.  So I'd prefer the first
> possibility if at all possible.)

Right, indeed this is tricky, and you bring up good points about
complex interactions on different architectures. Based on your
comments about whack-a-mole, I think we're mostly on the same page
about how arduous this is going to be to fix. My plan is to first get
in this simple blocking API, because while it doesn't solve _every_
use case, there are particular use cases that are helped nearly
perfectly by it. Namely, places where userspace is going to configure
something. So, soon after I finish up this testing I've been doing all
day on my series, I'll post v4, and hopefully we can get that merged,
and then move onto interesting other solutions for the general
problem.

Regards,
Jason


Re: [kernel-hardening] Re: [PATCH v3 04/13] crypto/rng: ensure that the RNG is ready before using

2017-06-06 Thread Jason A. Donenfeld
Hey again Eric,

One thing led to another and I wound up just rewriting all the crypto
in big_keys.c. I'll include this for v4:

https://git.zx2c4.com/linux-dev/commit/?h=jd/rng-blocker=886ff283b9808aecb14aa8e397da8496a9635aed

Not only was the use of crypto/rng inappropriate, but the decision to
go with aes-ecb is shocking. Seeing that this author had no other
commits in the tree, and that all subsequent commits that mentioned
his name were cleaning up his mess, I just went ahead and removed both
the crypto/rng misusage and changed from aes-ecb to aes-gcm.

Anyway, I'll wait for some more reviews on v3, and then this can be
reviewed for v4.

Regards,
Jason


Re: [kernel-hardening] Re: [PATCH v3 04/13] crypto/rng: ensure that the RNG is ready before using

2017-06-06 Thread Jason A. Donenfeld
Hi Eric,

On Tue, Jun 6, 2017 at 6:44 AM, Eric Biggers  wrote:
> I don't think big_key even needs randomness at init time.  The 'big_key_rng'
> could just be removed and big_key_gen_enckey() changed to call
> get_random_bytes().  (Or get_random_bytes_wait(), I guess; it's only reachable
> via the keyring syscalls.)

That sounds good to me. I'll go ahead and make these changes, and will
add you to the Cc for the patch. You'll find the latest version in
here:
https://git.zx2c4.com/linux-dev/log/?h=jd/rng-blocker

> It's going to take a while to go through all 217 users of get_random_bytes()
> like this, though...  It's really a shame there's no way to guarantee good
> randomness at boot time.

Yes, I agree whole-heartedly.  A lot of people have proposals for
fixing the direct idea of entropy gathering, but for whatever reason,
Ted hasn't merged stuff. I think Stephan (CCd) rewrote big critical
sections of the RNG, called LRNG, and published a big paper for peer
review and did a lot of cool engineering, but for some reason this
hasn't been integrated. I look forward to movement on this front in
the future, if it ever happens. Would be great.

However, in lieu of that, I agree that playing whack-a-mole with all
call sites is mega arduous and error-prone.
Ted about this, I proposed instead a more global approach of
introducing an rng_init() to complement things like late_init() and
device_init() and such. The idea here would be two-fold:

- Modules that are built in would only be loaded as a callback to the
initialization of the RNG. An API for that already exists.
- Modules that are external would simply block userspace in
request_module until the RNG is initialized. This patch series adds
that kind of API.
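
To make the first prong concrete, the existing callback API could be
used along these lines (module and function names are made up for
illustration; a sketch, not part of this series):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/random.h>

static void example_real_init(void)
{
        /* hypothetical: from here on, get_random_bytes() is seeded */
}

static void example_rng_ready(struct random_ready_callback *rdy)
{
        example_real_init();
}

static struct random_ready_callback example_ready = {
        .func  = example_rng_ready,
        .owner = THIS_MODULE,
};

static int __init example_init(void)
{
        int ret = add_random_ready_callback(&example_ready);

        if (ret == -EALREADY)
                example_real_init(); /* crng was already seeded */
        else if (ret)
                return ret;
        return 0;
}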

If I understood correctly, Ted was worried that this might introduce
some headaches with module load ordering. However, IMHO, dealing with
the very few use cases of ordering issues is going to be far less
arduous than playing whack-a-mole with every call site. But, given the
fact that we still do need a blocking API (this patch series), I
decided to go with implementing this first, and then second attacking
the more difficult issue of adding rng_init().

So hopefully a combination of this patch series and the next one will
amount to something workable.

Long term, though, I think we need Stephan's work, in one form or
another, to be merged.

Jason


Re: [PATCH RFC v2 0/8] get_random_bytes_wait family of APIs

2017-06-06 Thread Jason A. Donenfeld
On Tue, Jun 6, 2017 at 9:45 AM, Greg Kroah-Hartman wrote:
> If it's needed no matter what, can you make it the first patch in the
> series?  And does it need to go to any older kernels as well?

I believe it does belong in older kernels too. I'll work out precisely
which one those are and note it in the commit, which I'll order as the
first in the series and Cc to stable@. This is, of course, pending
Ted's review.


Re: [PATCH v3 05/13] security/keys: ensure RNG is seeded before use

2017-06-06 Thread Jason A. Donenfeld
On Tue, Jun 6, 2017 at 12:08 PM, David Howells <dhowe...@redhat.com> wrote:
> Jason A. Donenfeld <ja...@zx2c4.com> wrote:
>
>> + key->serial = get_random_u32() >> 1;
>
> If this may sleep, it must be interruptible.

That won't sleep. I could have made it get_random_u32_wait(), but we'd
get into trouble at boottime. So instead, for now, I just use
get_random_u32 rather than get_random_bytes, which can use the
architectural random number generator, when the platform has one,
which is available early on.


Re: [PATCH v3 02/13] random: add get_random_{bytes,u32,u64,int,long,once}_wait family

2017-06-06 Thread Jason A. Donenfeld
On Tue, Jun 6, 2017 at 7:11 AM, Jeffrey Walton <noloa...@gmail.com> wrote:
> On Mon, Jun 5, 2017 at 8:50 PM, Jason A. Donenfeld <ja...@zx2c4.com> wrote:
>> These functions are simple convenience wrappers that call
>> wait_for_random_bytes before calling the respective get_random_*
>> function.
>
> It may be advantageous to add a timeout, too.

This was in v1, but was removed because of a lack of particular use
case in this context.
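
For reference, a timeout variant would only be a small delta inside
drivers/char/random.c on top of patch 01 (a hypothetical sketch, not
part of any posted version):

/* Like wait_for_random_bytes(), but give up after `timeout` jiffies
 * instead of sleeping indefinitely. */
int wait_for_random_bytes_timeout(unsigned long timeout)
{
        long ret;

        if (likely(crng_ready()))
                return 0;
        ret = wait_event_interruptible_timeout(crng_init_wait,
                                               crng_ready(), timeout);
        if (ret == 0)
                return -ETIMEDOUT; /* pool still not seeded */
        return ret < 0 ? ret : 0;  /* -ERESTARTSYS, or seeded in time */
}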


Re: [PATCH v3 04/13] crypto/rng: ensure that the RNG is ready before using

2017-06-05 Thread Jason A. Donenfeld
Hey Ted,

On Tue, Jun 6, 2017 at 5:00 AM, Theodore Ts'o  wrote:
> Note that crypto_rng_reset() is called by big_key_init() in
> security/keys/big_key.c as a late_initcall().  So if we are on a
> system where the crng doesn't get initialized until during the system
> boot scripts, and big_key is compiled directly into the kernel, the
> boot could end up deadlocking.
>
> There may be other instances of where crypto_rng_reset() is called by
> an initcall, so big_key_init() may not be an exhaustive enumeration of
> potential problems.  But this is an example of why the synchronous
> API, although definitely much more convenient, can end up being a trap
> for the unwary

Thanks for pointing this out. I'll look more closely into it and see
if I can figure out a good way of approaching this.

Indeed you're right -- that we have to be really quite careful every
time we use the synchronous API. For this reason, I separated things
out into the wait_for_random_bytes and then the wrapper around
wait_for_random_bytes+get_random_bytes of get_random_bytes_wait. The
idea here would be that drivers could place a single
wait_for_random_bytes at some userspace entry point -- a configuration
ioctl, for example -- and then try to ensure that all calls to
get_random_bytes are ordered _after_ that wait_for_random_bytes call.
While this pattern doesn't fix all cases of unseeded get_random_bytes
calls -- we'll need to do some module loading order cleverness for
that, as we discussed in the other thread -- I think this pattern will
fix an acceptable amount of call sites, as seen here in this patchset,
that it makes it worthwhile. Having it, too, I think would encourage
other new drivers to think about when their calls to get_random_bytes
happens, and if it's possible for them to defer it until after a
userspace-blocking call to wait_for_random_bytes.
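
Concretely, that pattern might look like this (the driver, ioctl
number, and helper are all hypothetical, for illustration only):

#include <linux/fs.h>
#include <linux/random.h>

#define EXAMPLE_IOC_CONFIGURE _IO('E', 0) /* hypothetical ioctl */

static int example_configure(void *priv); /* calls get_random_bytes() */

static long example_ioctl(struct file *f, unsigned int cmd,
                          unsigned long arg)
{
        int ret;

        switch (cmd) {
        case EXAMPLE_IOC_CONFIGURE:
                /* Block (interruptibly) until the pool is seeded... */
                ret = wait_for_random_bytes();
                if (unlikely(ret))
                        return ret;
                /* ...so every get_random_bytes() reached from here
                 * is ordered after a successful wait. */
                return example_configure(f->private_data);
        default:
                return -ENOTTY;
        }
}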

Anyway, I'll look into and fix up the problem you mentioned. Looking
forward to your feedback on the other patches here.

Regards,
Jason


[PATCH v3 01/13] random: add synchronous API for the urandom pool

2017-06-05 Thread Jason A. Donenfeld
This enables users of get_random_{bytes,u32,u64,int,long} to wait until
the pool is ready before using this function, in case they actually want
to have reliable randomness.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
---
 drivers/char/random.c  | 41 +++++++++++++++++++++++++++++++++----------
 include/linux/random.h |  1 +
 2 files changed, 32 insertions(+), 10 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 0ab024918907..035a5d7c06bd 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -844,11 +844,6 @@ static void crng_reseed(struct crng_state *crng, struct 
entropy_store *r)
spin_unlock_irqrestore(&primary_crng.lock, flags);
 }
 
-static inline void crng_wait_ready(void)
-{
-   wait_event_interruptible(crng_init_wait, crng_ready());
-}
-
 static void _extract_crng(struct crng_state *crng,
  __u8 out[CHACHA20_BLOCK_SIZE])
 {
@@ -1466,7 +1461,10 @@ static ssize_t extract_entropy_user(struct entropy_store 
*r, void __user *buf,
  * number of good random numbers, suitable for key generation, seeding
  * TCP sequence numbers, etc.  It does not rely on the hardware random
  * number generator.  For random bytes direct from the hardware RNG
- * (when available), use get_random_bytes_arch().
+ * (when available), use get_random_bytes_arch(). In order to ensure
+ * that the randomness provided by this function is okay, the function
+ * wait_for_random_bytes() should be called and return 0 at least once
+ * at any point prior.
  */
 void get_random_bytes(void *buf, int nbytes)
 {
@@ -1496,6 +1494,24 @@ void get_random_bytes(void *buf, int nbytes)
 EXPORT_SYMBOL(get_random_bytes);
 
 /*
+ * Wait for the urandom pool to be seeded and thus guaranteed to supply
+ * cryptographically secure random numbers. This applies to: the /dev/urandom
+ * device, the get_random_bytes function, and the get_random_{u32,u64,int,long}
+ * family of functions. Using any of these functions without first calling
+ * this function forfeits the guarantee of security.
+ *
+ * Returns: 0 if the urandom pool has been seeded.
+ *  -ERESTARTSYS if the function was interrupted by a signal.
+ */
+int wait_for_random_bytes(void)
+{
+   if (likely(crng_ready()))
+   return 0;
+   return wait_event_interruptible(crng_init_wait, crng_ready());
+}
+EXPORT_SYMBOL(wait_for_random_bytes);
+
+/*
  * Add a callback function that will be invoked when the nonblocking
  * pool is initialised.
  *
@@ -1849,6 +1865,8 @@ const struct file_operations urandom_fops = {
 SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count,
unsigned int, flags)
 {
+   int ret;
+
if (flags & ~(GRND_NONBLOCK|GRND_RANDOM))
return -EINVAL;
 
@@ -1861,9 +1879,9 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, 
count,
if (!crng_ready()) {
if (flags & GRND_NONBLOCK)
return -EAGAIN;
-   crng_wait_ready();
-   if (signal_pending(current))
-   return -ERESTARTSYS;
+   ret = wait_for_random_bytes();
+   if (unlikely(ret))
+   return ret;
}
return urandom_read(NULL, buf, count, NULL);
 }
@@ -2023,7 +2041,10 @@ struct batched_entropy {
 /*
  * Get a random word for internal kernel use only. The quality of the random
  * number is either as good as RDRAND or as good as /dev/urandom, with the
- * goal of being quite fast and not depleting entropy.
+ * goal of being quite fast and not depleting entropy. In order to ensure
+ * that the randomness provided by this function is okay, the function
+ * wait_for_random_bytes() should be called and return 0 at least once
+ * at any point prior.
  */
 static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
 u64 get_random_u64(void)
diff --git a/include/linux/random.h b/include/linux/random.h
index ed5c3838780d..e29929347c95 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -34,6 +34,7 @@ extern void add_input_randomness(unsigned int type, unsigned 
int code,
 extern void add_interrupt_randomness(int irq, int irq_flags) __latent_entropy;
 
 extern void get_random_bytes(void *buf, int nbytes);
+extern int wait_for_random_bytes(void);
 extern int add_random_ready_callback(struct random_ready_callback *rdy);
 extern void del_random_ready_callback(struct random_ready_callback *rdy);
 extern void get_random_bytes_arch(void *buf, int nbytes);
-- 
2.13.0



[PATCH v3 02/13] random: add get_random_{bytes,u32,u64,int,long,once}_wait family

2017-06-05 Thread Jason A. Donenfeld
These functions are simple convenience wrappers that call
wait_for_random_bytes before calling the respective get_random_*
function.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
---
 include/linux/net.h    |  2 ++
 include/linux/once.h   |  2 ++
 include/linux/random.h | 25 +++++++++++++++++++++++++
 3 files changed, 29 insertions(+)

diff --git a/include/linux/net.h b/include/linux/net.h
index abcfa46a2bd9..dda2cc939a53 100644
--- a/include/linux/net.h
+++ b/include/linux/net.h
@@ -274,6 +274,8 @@ do {   \
 
 #define net_get_random_once(buf, nbytes)   \
get_random_once((buf), (nbytes))
+#define net_get_random_once_wait(buf, nbytes)  \
+   get_random_once_wait((buf), (nbytes))
 
 int kernel_sendmsg(struct socket *sock, struct msghdr *msg, struct kvec *vec,
   size_t num, size_t len);
diff --git a/include/linux/once.h b/include/linux/once.h
index 285f12cb40e6..9c98aaa87cbc 100644
--- a/include/linux/once.h
+++ b/include/linux/once.h
@@ -53,5 +53,7 @@ void __do_once_done(bool *done, struct static_key *once_key,
 
 #define get_random_once(buf, nbytes)\
DO_ONCE(get_random_bytes, (buf), (nbytes))
+#define get_random_once_wait(buf, nbytes)\
+   DO_ONCE(get_random_bytes_wait, (buf), (nbytes))
 
 #endif /* _LINUX_ONCE_H */
diff --git a/include/linux/random.h b/include/linux/random.h
index e29929347c95..4aecc339558d 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -58,6 +58,31 @@ static inline unsigned long get_random_long(void)
 #endif
 }
 
+/* Calls wait_for_random_bytes() and then calls get_random_bytes(buf, nbytes).
+ * Returns the result of the call to wait_for_random_bytes. */
+static inline int get_random_bytes_wait(void *buf, int nbytes)
+{
+   int ret = wait_for_random_bytes();
+   if (unlikely(ret))
+   return ret;
+   get_random_bytes(buf, nbytes);
+   return 0;
+}
+
+#define declare_get_random_var_wait(var) \
+   static inline int get_random_ ## var ## _wait(var *out) { \
+   int ret = wait_for_random_bytes(); \
+   if (unlikely(ret)) \
+   return ret; \
+   *out = get_random_ ## var(); \
+   return 0; \
+   }
+declare_get_random_var_wait(u32)
+declare_get_random_var_wait(u64)
+declare_get_random_var_wait(int)
+declare_get_random_var_wait(long)
+#undef declare_get_random_var_wait
+
 unsigned long randomize_page(unsigned long start, unsigned long range);
 
 u32 prandom_u32(void);
-- 
2.13.0



[PATCH v3 03/13] random: invalidate batched entropy after crng init

2017-06-05 Thread Jason A. Donenfeld
It's possible that get_random_{u32,u64} is used before the crng has
initialized, in which case, its output might not be cryptographically
secure. For this problem, directly, this patch set is introducing the
*_wait variety of functions, but even with that, there's a subtle issue:
what happens to our batched entropy that was generated before
initialization. Prior to this commit, it'd stick around, supplying bad
numbers. After this commit, we force the entropy to be re-extracted
after each phase of the crng has initialized.

In order to avoid a race condition with the position counter, we
introduce a simple rwlock for this invalidation. Since it's only during
this awkward transition period, after things are all set up, we stop
using it, so that it doesn't have an impact on performance.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
---
 drivers/char/random.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 035a5d7c06bd..c328e9b11f1f 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -762,6 +762,8 @@ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
 static struct crng_state **crng_node_pool __read_mostly;
 #endif
 
+static void invalidate_batched_entropy(void);
+
 static void crng_initialize(struct crng_state *crng)
 {
int i;
@@ -800,6 +802,7 @@ static int crng_fast_load(const char *cp, size_t len)
}
if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
crng_init = 1;
+   invalidate_batched_entropy();
wake_up_interruptible(&crng_init_wait);
pr_notice("random: fast init done\n");
}
@@ -837,6 +840,7 @@ static void crng_reseed(struct crng_state *crng, struct 
entropy_store *r)
crng->init_time = jiffies;
if (crng == &primary_crng && crng_init < 2) {
crng_init = 2;
+   invalidate_batched_entropy();
process_random_ready_list();
wake_up_interruptible(&crng_init_wait);
pr_notice("random: crng init done\n");
@@ -2037,6 +2041,7 @@ struct batched_entropy {
};
unsigned int position;
 };
+static rwlock_t batched_entropy_reset_lock = 
__RW_LOCK_UNLOCKED(batched_entropy_reset_lock);
 
 /*
  * Get a random word for internal kernel use only. The quality of the random
@@ -2050,6 +2055,8 @@ static DEFINE_PER_CPU(struct batched_entropy, 
batched_entropy_u64);
 u64 get_random_u64(void)
 {
u64 ret;
+   bool use_lock = crng_init < 2;
+   unsigned long flags;
struct batched_entropy *batch;
 
 #if BITS_PER_LONG == 64
@@ -2062,11 +2069,15 @@ u64 get_random_u64(void)
 #endif
 
batch = &get_cpu_var(batched_entropy_u64);
+   if (use_lock)
+   read_lock_irqsave(&batched_entropy_reset_lock, flags);
if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
extract_crng((u8 *)batch->entropy_u64);
batch->position = 0;
}
ret = batch->entropy_u64[batch->position++];
+   if (use_lock)
+   read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
put_cpu_var(batched_entropy_u64);
return ret;
 }
@@ -2076,22 +2087,45 @@ static DEFINE_PER_CPU(struct batched_entropy, 
batched_entropy_u32);
 u32 get_random_u32(void)
 {
u32 ret;
+   bool use_lock = crng_init < 2;
+   unsigned long flags;
struct batched_entropy *batch;
 
if (arch_get_random_int(&ret))
return ret;
 
batch = &get_cpu_var(batched_entropy_u32);
+   if (use_lock)
+   read_lock_irqsave(&batched_entropy_reset_lock, flags);
if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
extract_crng((u8 *)batch->entropy_u32);
batch->position = 0;
}
ret = batch->entropy_u32[batch->position++];
+   if (use_lock)
+   read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
put_cpu_var(batched_entropy_u32);
return ret;
 }
 EXPORT_SYMBOL(get_random_u32);
 
+/* It's important to invalidate all potential batched entropy that might
+ * be stored before the crng is initialized, which we can do lazily by
+ * simply resetting the counter to zero so that it's re-extracted on the
+ * next usage. */
+static void invalidate_batched_entropy(void)
+{
+   int cpu;
+   unsigned long flags;
+
+   write_lock_irqsave(&batched_entropy_reset_lock, flags);
+   for_each_possible_cpu (cpu) {
+   per_cpu_ptr(&batched_entropy_u32, cpu)->position = 0;
+   per_cpu_ptr(&batched_entropy_u64, cpu)->position = 0;
+   }
+   write_unlock_irqrestore(&batched_entropy_reset_lock, flags);
+}
+
 /**
  * randomize_page - Generate a random, page aligned address
  * @start: The smallest acceptable address the caller will take.
-- 
2.13.0



[PATCH v3 04/13] crypto/rng: ensure that the RNG is ready before using

2017-06-05 Thread Jason A. Donenfeld
Otherwise, we might be seeding the RNG using bad randomness, which is
dangerous.

Cc: Herbert Xu <herb...@gondor.apana.org.au>
Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
---
 crypto/rng.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/crypto/rng.c b/crypto/rng.c
index f46dac5288b9..e042437e64b4 100644
--- a/crypto/rng.c
+++ b/crypto/rng.c
@@ -48,12 +48,14 @@ int crypto_rng_reset(struct crypto_rng *tfm, const u8 
*seed, unsigned int slen)
if (!buf)
return -ENOMEM;
 
-   get_random_bytes(buf, slen);
+   err = get_random_bytes_wait(buf, slen);
+   if (err)
+   goto out;
seed = buf;
}
 
err = crypto_rng_alg(tfm)->seed(tfm, seed, slen);
-
+out:
kzfree(buf);
return err;
 }
-- 
2.13.0



[PATCH v3 05/13] security/keys: ensure RNG is seeded before use

2017-06-05 Thread Jason A. Donenfeld
Otherwise, we might use bad random numbers which, particularly in the
case of IV generation, could be quite bad. It makes sense to use the
synchronous API here, because we're always in process context (as the
code is littered with GFP_KERNEL and the like). However, we can't change
to using a blocking function in key serial allocation, because this will
block booting in some configurations, so here we use the more
appropriate get_random_u32, which will use RDRAND if available.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: David Howells <dhowe...@redhat.com>
Cc: Mimi Zohar <zo...@linux.vnet.ibm.com>
Cc: David Safford <saff...@us.ibm.com>
---
 security/keys/encrypted-keys/encrypted.c |  8 +---
 security/keys/key.c  | 16 
 2 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/security/keys/encrypted-keys/encrypted.c 
b/security/keys/encrypted-keys/encrypted.c
index 0010955d7876..d51a28fc5cd5 100644
--- a/security/keys/encrypted-keys/encrypted.c
+++ b/security/keys/encrypted-keys/encrypted.c
@@ -777,10 +777,12 @@ static int encrypted_init(struct encrypted_key_payload 
*epayload,
 
__ekey_init(epayload, format, master_desc, datalen);
if (!hex_encoded_iv) {
-   get_random_bytes(epayload->iv, ivsize);
+   ret = get_random_bytes_wait(epayload->iv, ivsize);
+   if (unlikely(ret))
+   return ret;
 
-   get_random_bytes(epayload->decrypted_data,
-epayload->decrypted_datalen);
+   ret = get_random_bytes_wait(epayload->decrypted_data,
+   epayload->decrypted_datalen);
} else
ret = encrypted_key_decrypt(epayload, format, hex_encoded_iv);
return ret;
diff --git a/security/keys/key.c b/security/keys/key.c
index 455c04d80bbb..b72078e532f2 100644
--- a/security/keys/key.c
+++ b/security/keys/key.c
@@ -134,17 +134,15 @@ void key_user_put(struct key_user *user)
  * Allocate a serial number for a key.  These are assigned randomly to avoid
  * security issues through covert channel problems.
  */
-static inline void key_alloc_serial(struct key *key)
+static inline int key_alloc_serial(struct key *key)
 {
struct rb_node *parent, **p;
struct key *xkey;
 
-   /* propose a random serial number and look for a hole for it in the
-* serial number tree */
+   /* propose a non-negative random serial number and look for a hole for
+* it in the serial number tree */
do {
-   get_random_bytes(&key->serial, sizeof(key->serial));
-
-   key->serial >>= 1; /* negative numbers are not permitted */
+   key->serial = get_random_u32() >> 1;
} while (key->serial < 3);
 
spin_lock(&key_serial_lock);
@@ -170,7 +168,7 @@ static inline void key_alloc_serial(struct key *key)
rb_insert_color(&key->serial_node, &key_serial_tree);
 
spin_unlock(&key_serial_lock);
-   return;
+   return 0;
 
/* we found a key with the proposed serial number - walk the tree from
 * that point looking for the next unused serial number */
@@ -314,7 +312,9 @@ struct key *key_alloc(struct key_type *type, const char 
*desc,
 
/* publish the key by giving it a serial number */
atomic_inc(&user->nkeys);
-   key_alloc_serial(key);
+   ret = key_alloc_serial(key);
+   if (ret < 0)
+   goto security_error;
 
 error:
return key;
-- 
2.13.0



[PATCH v3 06/13] iscsi: ensure RNG is seeded before use

2017-06-05 Thread Jason A. Donenfeld
It's not safe to use weak random data here, especially for the challenge
response randomness. Since we're always in process context, it's safe to
simply wait until we have enough randomness to carry out the
authentication correctly.

While we're at it, we clean up a small memleak during an error
condition.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: "Nicholas A. Bellinger" <n...@linux-iscsi.org>
Cc: Lee Duncan <ldun...@suse.com>
Cc: Chris Leech <cle...@redhat.com>
---
 drivers/target/iscsi/iscsi_target_auth.c  | 14 +++++++++++---
 drivers/target/iscsi/iscsi_target_login.c | 22 ++++++++++++++--------
 2 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/drivers/target/iscsi/iscsi_target_auth.c 
b/drivers/target/iscsi/iscsi_target_auth.c
index 903b667f8e01..f9bc8ec6fb6b 100644
--- a/drivers/target/iscsi/iscsi_target_auth.c
+++ b/drivers/target/iscsi/iscsi_target_auth.c
@@ -47,18 +47,21 @@ static void chap_binaryhex_to_asciihex(char *dst, char 
*src, int src_len)
}
 }
 
-static void chap_gen_challenge(
+static int chap_gen_challenge(
struct iscsi_conn *conn,
int caller,
char *c_str,
unsigned int *c_len)
 {
+   int ret;
unsigned char challenge_asciihex[CHAP_CHALLENGE_LENGTH * 2 + 1];
struct iscsi_chap *chap = conn->auth_protocol;
 
memset(challenge_asciihex, 0, CHAP_CHALLENGE_LENGTH * 2 + 1);
 
-   get_random_bytes(chap->challenge, CHAP_CHALLENGE_LENGTH);
+   ret = get_random_bytes_wait(chap->challenge, CHAP_CHALLENGE_LENGTH);
+   if (unlikely(ret))
+   return ret;
chap_binaryhex_to_asciihex(challenge_asciihex, chap->challenge,
CHAP_CHALLENGE_LENGTH);
/*
@@ -69,6 +72,7 @@ static void chap_gen_challenge(
 
pr_debug("[%s] Sending CHAP_C=0x%s\n\n", (caller) ? "server" : "client",
challenge_asciihex);
+   return 0;
 }
 
 static int chap_check_algorithm(const char *a_str)
@@ -143,6 +147,7 @@ static struct iscsi_chap *chap_server_open(
case CHAP_DIGEST_UNKNOWN:
default:
pr_err("Unsupported CHAP_A value\n");
+   kfree(conn->auth_protocol);
return NULL;
}
 
@@ -156,7 +161,10 @@ static struct iscsi_chap *chap_server_open(
/*
 * Generate Challenge.
 */
-   chap_gen_challenge(conn, 1, aic_str, aic_len);
+   if (chap_gen_challenge(conn, 1, aic_str, aic_len) < 0) {
+   kfree(conn->auth_protocol);
+   return NULL;
+   }
 
return chap;
 }
diff --git a/drivers/target/iscsi/iscsi_target_login.c 
b/drivers/target/iscsi/iscsi_target_login.c
index 66238477137b..5ef028c11738 100644
--- a/drivers/target/iscsi/iscsi_target_login.c
+++ b/drivers/target/iscsi/iscsi_target_login.c
@@ -245,22 +245,26 @@ int iscsi_check_for_session_reinstatement(struct 
iscsi_conn *conn)
return 0;
 }
 
-static void iscsi_login_set_conn_values(
+static int iscsi_login_set_conn_values(
struct iscsi_session *sess,
struct iscsi_conn *conn,
__be16 cid)
 {
+   int ret;
conn->sess  = sess;
conn->cid   = be16_to_cpu(cid);
/*
 * Generate a random Status sequence number (statsn) for the new
 * iSCSI connection.
 */
-   get_random_bytes(&conn->stat_sn, sizeof(u32));
+   ret = get_random_bytes_wait(&conn->stat_sn, sizeof(u32));
+   if (unlikely(ret))
+   return ret;
 
mutex_lock(&auth_id_lock);
conn->auth_id   = iscsit_global->auth_id++;
mutex_unlock(&auth_id_lock);
+   return 0;
 }
 
 __printf(2, 3) int iscsi_change_param_sprintf(
@@ -306,7 +310,11 @@ static int iscsi_login_zero_tsih_s1(
return -ENOMEM;
}
 
-   iscsi_login_set_conn_values(sess, conn, pdu->cid);
+   ret = iscsi_login_set_conn_values(sess, conn, pdu->cid);
+   if (unlikely(ret)) {
+   kfree(sess);
+   return ret;
+   }
sess->init_task_tag = pdu->itt;
memcpy(&sess->isid, pdu->isid, 6);
sess->exp_cmd_sn= be32_to_cpu(pdu->cmdsn);
@@ -497,8 +505,7 @@ static int iscsi_login_non_zero_tsih_s1(
 {
struct iscsi_login_req *pdu = (struct iscsi_login_req *)buf;
 
-   iscsi_login_set_conn_values(NULL, conn, pdu->cid);
-   return 0;
+   return iscsi_login_set_conn_values(NULL, conn, pdu->cid);
 }
 
 /*
@@ -554,9 +561,8 @@ static int iscsi_login_non_zero_tsih_s2(
atomic_set(&sess->session_continuation, 1);
spin_unlock_bh(&sess->conn_lock);
 
-   iscsi_login_set_conn_values(sess, conn, pdu->cid);
-
-   if (iscsi_copy_param_list(&conn->param_list,
+   if (iscsi_login_set_conn_values(sess, conn, pdu->cid) < 0 ||
+   

[PATCH v3 07/13] ceph: ensure RNG is seeded before using

2017-06-05 Thread Jason A. Donenfeld
Ceph uses the RNG for various nonce generations, and it shouldn't accept
using bad randomness. So, we wait for the RNG to be properly seeded. We
do this by calling wait_for_random_bytes() in a function that is
certainly called in process context, early on, so that all subsequent
calls to get_random_bytes are necessarily acceptable.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: Ilya Dryomov <idryo...@gmail.com>
Cc: "Yan, Zheng" <z...@redhat.com>
Cc: Sage Weil <s...@redhat.com>
---
 net/ceph/ceph_common.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/net/ceph/ceph_common.c b/net/ceph/ceph_common.c
index 4fd02831beed..26ab58665f77 100644
--- a/net/ceph/ceph_common.c
+++ b/net/ceph/ceph_common.c
@@ -611,7 +611,11 @@ struct ceph_client *ceph_create_client(struct ceph_options 
*opt, void *private)
 {
struct ceph_client *client;
struct ceph_entity_addr *myaddr = NULL;
-   int err = -ENOMEM;
+   int err;
+
+   err = wait_for_random_bytes();
+   if (err < 0)
+   return ERR_PTR(err);
 
client = kzalloc(sizeof(*client), GFP_KERNEL);
if (client == NULL)
-- 
2.13.0



[PATCH v3 10/13] net/neighbor: use get_random_u32 for 32-bit hash random

2017-06-05 Thread Jason A. Donenfeld
Using get_random_u32 here is faster, more fitting of the use case, and
just as cryptographically secure. It also has the benefit of providing
better randomness at early boot, which is when many of these structures
are assigned.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: David Miller <da...@davemloft.net>
---
 net/core/neighbour.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index d274f81fcc2c..9784133b0cdb 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -312,8 +312,7 @@ static struct neighbour *neigh_alloc(struct neigh_table 
*tbl, struct net_device
 
 static void neigh_get_hash_rnd(u32 *x)
 {
-   get_random_bytes(x, sizeof(*x));
-   *x |= 1;
+   *x = get_random_u32() | 1;
 }
 
 static struct neigh_hash_table *neigh_hash_alloc(unsigned int shift)
-- 
2.13.0



[PATCH v3 08/13] cifs: use get_random_u32 for 32-bit lock random

2017-06-05 Thread Jason A. Donenfeld
Using get_random_u32 here is faster, more fitting of the use case, and
just as cryptographically secure. It also has the benefit of providing
better randomness at early boot, which is sometimes when this is used.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: Steve French <sfre...@samba.org>
---
 fs/cifs/cifsfs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
index 9a1667e0e8d6..fe0c8dcc7dc7 100644
--- a/fs/cifs/cifsfs.c
+++ b/fs/cifs/cifsfs.c
@@ -1359,7 +1359,7 @@ init_cifs(void)
spin_lock_init(&cifs_tcp_ses_lock);
spin_lock_init(&GlobalMid_Lock);
 
-   get_random_bytes(&cifs_lock_secret, sizeof(cifs_lock_secret));
+   cifs_lock_secret = get_random_u32();
 
if (cifs_max_pending < 2) {
cifs_max_pending = 2;
-- 
2.13.0



[PATCH v3 12/13] bluetooth/smp: ensure RNG is properly seeded before ECDH use

2017-06-05 Thread Jason A. Donenfeld
This protocol uses lots of complex cryptography that relies on securely
generated random numbers. Thus, it's important that the RNG is actually
seeded before use. Fortunately, it appears we're always operating in
process context (there are many GFP_KERNEL allocations and other
sleeping operations), and so we can simply demand that the RNG is seeded
before we use it.

We take two strategies in this commit. The first is for the library code
that's called from other modules like hci or mgmt: here we just change
the call to get_random_bytes_wait, and return the result of the wait to
the caller, along with the other error codes of those functions like
usual. Then there's the SMP protocol handler itself, which makes many,
many calls to get_random_bytes during different phases. For this,
rather than have to change all the calls to get_random_bytes_wait and
propagate the error result, it's actually enough to just put a single
call to wait_for_random_bytes() at the beginning of the handler, to
ensure that all the subsequent invocations are safe, without having to
actually change them. Likewise, for the random address changing
function, we'd rather know early on in the function whether the RNG
initialization has been interrupted, rather than later, so we call
wait_for_random_bytes() at the top, so that later on the call to
get_random_bytes() is acceptable.
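
In sketch form, the two strategies look like this (names taken from the
hunks below, with the surrounding code elided):

	/* 1) Library code: propagate the error to the caller. */
	err = get_random_bytes_wait(&rpa->b[3], 3);
	if (unlikely(err))
		return err;

	/* 2) Protocol handler: wait once at the top, after which every
	 * plain get_random_bytes() call in the handler is acceptable. */
	err = wait_for_random_bytes();
	if (unlikely(err))
		return err;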

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: Marcel Holtmann <mar...@holtmann.org>
Cc: Gustavo Padovan <gust...@padovan.org>
Cc: Johan Hedberg <johan.hedb...@gmail.com>
---
 net/bluetooth/hci_request.c |  6 ++++++
 net/bluetooth/smp.c | 18 ++++++++++++++----
 2 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
index b5faff458d8b..4078057c4fd7 100644
--- a/net/bluetooth/hci_request.c
+++ b/net/bluetooth/hci_request.c
@@ -1406,6 +1406,12 @@ int hci_update_random_address(struct hci_request *req, bool require_privacy,
struct hci_dev *hdev = req->hdev;
int err;
 
+   if (require_privacy) {
+   err = wait_for_random_bytes();
+   if (unlikely(err))
+   return err;
+   }
+
/* If privacy is enabled use a resolvable private address. If
 * current RPA has expired or there is something else than
 * the current RPA in use, then generate a new one.
diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
index 14585edc9439..5fef1bc96f42 100644
--- a/net/bluetooth/smp.c
+++ b/net/bluetooth/smp.c
@@ -537,7 +537,9 @@ int smp_generate_rpa(struct hci_dev *hdev, const u8 irk[16], bdaddr_t *rpa)
 
smp = chan->data;
 
-   get_random_bytes(&rpa->b[3], 3);
+   err = get_random_bytes_wait(&rpa->b[3], 3);
+   if (unlikely(err))
+   return err;
 
rpa->b[5] &= 0x3f;  /* Clear two most significant bits */
rpa->b[5] |= 0x40;  /* Set second most significant bit */
@@ -570,7 +572,9 @@ int smp_generate_oob(struct hci_dev *hdev, u8 hash[16], u8 rand[16])
} else {
while (true) {
/* Seed private key with random number */
-   get_random_bytes(smp->local_sk, 32);
+   err = get_random_bytes_wait(smp->local_sk, 32);
+   if (unlikely(err))
+   return err;
 
/* Generate local key pair for Secure Connections */
if (!generate_ecdh_keys(smp->local_pk, smp->local_sk))
@@ -589,7 +593,9 @@ int smp_generate_oob(struct hci_dev *hdev, u8 hash[16], u8 rand[16])
SMP_DBG("OOB Public Key Y: %32phN", smp->local_pk + 32);
SMP_DBG("OOB Private Key:  %32phN", smp->local_sk);
 
-   get_random_bytes(smp->local_rand, 16);
+   err = get_random_bytes_wait(smp->local_rand, 16);
+   if (unlikely(err))
+   return err;
 
err = smp_f4(smp->tfm_cmac, smp->local_pk, smp->local_pk,
 smp->local_rand, 0, hash);
@@ -2831,7 +2837,11 @@ static int smp_sig_channel(struct l2cap_chan *chan, struct sk_buff *skb)
struct hci_conn *hcon = conn->hcon;
struct smp_chan *smp;
__u8 code, reason;
-   int err = 0;
+   int err;
+
+   err = wait_for_random_bytes();
+   if (unlikely(err))
+   return err;
 
if (skb->len < 1)
return -EILSEQ;
-- 
2.13.0



[PATCH v3 11/13] net/route: use get_random_int for random counter

2017-06-05 Thread Jason A. Donenfeld
Using get_random_int here is faster, more fitting of the use case, and
just as cryptographically secure. It also has the benefit of providing
better randomness at early boot, which is when many of these structures
are assigned.

Also, semantically, it was never really proper to initialize an atomic_t
by writing raw bytes over it with get_random_bytes, bypassing the atomic
API, even if in practice it works fine.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: David Miller <da...@davemloft.net>
---
 net/ipv4/route.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index 655d9eebe43e..11e001a42094 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -2936,8 +2936,7 @@ static __net_init int rt_genid_init(struct net *net)
 {
atomic_set(&net->ipv4.rt_genid, 0);
atomic_set(&net->fnhe_genid, 0);
-   get_random_bytes(&net->ipv4.dev_addr_genid,
-        sizeof(net->ipv4.dev_addr_genid));
+   atomic_set(&net->ipv4.dev_addr_genid, get_random_int());
return 0;
 }
 
-- 
2.13.0



[PATCH v3 13/13] random: warn when kernel uses unseeded randomness

2017-06-05 Thread Jason A. Donenfeld
This enables an important dmesg notification about when drivers have
used the crng without it being seeded first. Previously, these errors would
occur silently, and so there hasn't been a great way of diagnosing these
types of bugs for obscure setups. By adding this as a config option, we
can leave it on by default, so that we learn where these issues happen
in the field, while still allowing some people to turn it off, if they
really know what they're doing and do not want the log entries.

However, we don't leave it on _completely_ by default. An earlier version
of this patch simply had `default y`. I'd really love that, but it turns
out, this problem with unseeded randomness being used is really quite
present and is going to take a long time to fix. Thus, as a compromise
between log-messages-for-all and nobody-knows, this is `default y`,
except it is also `depends on DEBUG_KERNEL`. This will ensure that the
curious see the messages while others don't have to.
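
With the option enabled, the notice looks roughly like this (hypothetical
caller and offsets; the format string is in the hunks below):

	random: neigh_get_hash_rnd+0x19/0x40 get_random_u32 called with crng_init = 0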

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
---
 drivers/char/random.c | 15 +++++++++++++--
 lib/Kconfig.debug | 16 ++++++++++++++++
 2 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index c328e9b11f1f..d4698c8bc35f 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -285,7 +285,6 @@
 #define SEC_XFER_SIZE  512
 #define EXTRACT_SIZE   10
 
-#define DEBUG_RANDOM_BOOT 0
 
 #define LONGS(x) (((x) + sizeof(unsigned long) - 1)/sizeof(unsigned long))
 
@@ -1474,7 +1473,7 @@ void get_random_bytes(void *buf, int nbytes)
 {
__u8 tmp[CHACHA20_BLOCK_SIZE];
 
-#if DEBUG_RANDOM_BOOT > 0
+#ifdef CONFIG_WARN_UNSEEDED_RANDOM
if (!crng_ready())
printk(KERN_NOTICE "random: %pF get_random_bytes called "
   "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
@@ -2068,6 +2067,12 @@ u64 get_random_u64(void)
return ret;
 #endif
 
+#ifdef CONFIG_WARN_UNSEEDED_RANDOM
+   if (!crng_ready())
+   printk(KERN_NOTICE "random: %pF get_random_u64 called "
+  "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
+#endif
+
batch = &get_cpu_var(batched_entropy_u64);
if (use_lock)
read_lock_irqsave(&batched_entropy_reset_lock, flags);
@@ -2094,6 +2099,12 @@ u32 get_random_u32(void)
if (arch_get_random_int(&ret))
return ret;
 
+#ifdef CONFIG_WARN_UNSEEDED_RANDOM
+   if (!crng_ready())
+   printk(KERN_NOTICE "random: %pF get_random_u32 called "
+  "with crng_init = %d\n", (void *) _RET_IP_, crng_init);
+#endif
+
batch = &get_cpu_var(batched_entropy_u32);
if (use_lock)
read_lock_irqsave(&batched_entropy_reset_lock, flags);
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index e4587ebe52c7..c4159605bfbf 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1209,6 +1209,22 @@ config STACKTRACE
  It is also used by various kernel debugging features that require
  stack trace generation.
 
+config WARN_UNSEEDED_RANDOM
+   bool "Warn when kernel uses unseeded randomness"
+   default y
+   depends on DEBUG_KERNEL
+   help
+ Some parts of the kernel contain bugs relating to their use of
+ cryptographically secure random numbers before it's actually possible
+ to generate those numbers securely. This setting ensures that these
+ flaws don't go unnoticed, by enabling a message, should this ever
+ occur. This will allow people with obscure setups to know when things
+ are going wrong, so that they might contact developers about fixing
+ it.
+
+ Say Y here, unless you simply do not care about using unseeded
+ randomness and do not want a potential warning message in your logs.
+
 config DEBUG_KOBJECT
bool "kobject debugging"
depends on DEBUG_KERNEL
-- 
2.13.0



[PATCH v3 09/13] rhashtable: use get_random_u32 for hash_rnd

2017-06-05 Thread Jason A. Donenfeld
This is much faster and just as secure. It also has the added benefit of
probably returning better randomness at early-boot on systems with
architectural RNGs.

Signed-off-by: Jason A. Donenfeld <ja...@zx2c4.com>
Cc: Thomas Graf <tg...@suug.ch>
Cc: Herbert Xu <herb...@gondor.apana.org.au>
---
 lib/rhashtable.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index d9e7274a04cd..a1eb7c947f46 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -235,7 +235,7 @@ static struct bucket_table *bucket_table_alloc(struct rhashtable *ht,
 
INIT_LIST_HEAD(&tbl->walkers);
 
-   get_random_bytes(&tbl->hash_rnd, sizeof(tbl->hash_rnd));
+   tbl->hash_rnd = get_random_u32();
 
for (i = 0; i < nbuckets; i++)
INIT_RHT_NULLS_HEAD(tbl->buckets[i], ht, i);
-- 
2.13.0



[PATCH v3 00/13] Unseeded In-Kernel Randomness Fixes

2017-06-05 Thread Jason A. Donenfeld
As discussed in [1], there is a problem with get_random_bytes being
used before the RNG has actually been seeded. The solution for fixing
this appears to be multi-pronged. One of those prongs involves adding
a simple blocking API so that modules that use the RNG in process
context can just sleep (in an interruptible manner) until the RNG is
ready to be used. This winds up being a very useful API that covers
a few use cases, 5 of which are included in this patch set.

[1] http://www.openwall.com/lists/kernel-hardening/2017/06/02/2
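
In sketch form, what the blocking API gives a module running in process
context (buf and len are placeholders):

	err = wait_for_random_bytes();	/* interruptible sleep until seeded */
	if (unlikely(err))
		return err;
	get_random_bytes(buf, len);

	/* or, equivalently, combined into a single call: */
	err = get_random_bytes_wait(buf, len);
	if (unlikely(err))
		return err;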

Changes v2->v3:
  - Since this issue, in general, is going to take a long time to fully
fix, the patch turning on the warning is now dependent on DEBUG_KERNEL
so that the right people see the messages but the others aren't annoyed.
  - Fixed some inappropriate blocking for functions that load during module
insertion. As discussed in [1], module insertion deferral is a topic for
another patch set.
  - An interesting and essential patch has been added for invalidating the
batched entropy pool after the crng initializes.
  - Some places that need randomness at bootup for just small integers would
be better served by get_random_{u32,u64}, so this series makes those
changes in a few places. It's useful here, since on some architectures
that delivers better early randomness.

Jason A. Donenfeld (13):
  random: add synchronous API for the urandom pool
  random: add get_random_{bytes,u32,u64,int,long,once}_wait family
  random: invalidate batched entropy after crng init
  crypto/rng: ensure that the RNG is ready before using
  security/keys: ensure RNG is seeded before use
  iscsi: ensure RNG is seeded before use
  ceph: ensure RNG is seeded before using
  cifs: use get_random_u32 for 32-bit lock random
  rhashtable: use get_random_u32 for hash_rnd
  net/neighbor: use get_random_u32 for 32-bit hash random
  net/route: use get_random_int for random counter
  bluetooth/smp: ensure RNG is properly seeded before ECDH use
  random: warn when kernel uses unseeded randomness

 crypto/rng.c  |  6 ++-
 drivers/char/random.c | 90 ++-
 drivers/target/iscsi/iscsi_target_auth.c  | 14 +++--
 drivers/target/iscsi/iscsi_target_login.c | 22 +---
 fs/cifs/cifsfs.c  |  2 +-
 include/linux/net.h   |  2 +
 include/linux/once.h  |  2 +
 include/linux/random.h| 26 +
 lib/Kconfig.debug | 16 ++
 lib/rhashtable.c  |  2 +-
 net/bluetooth/hci_request.c   |  6 +++
 net/bluetooth/smp.c   | 18 +--
 net/ceph/ceph_common.c|  6 ++-
 net/core/neighbour.c  |  3 +-
 net/ipv4/route.c  |  3 +-
 security/keys/encrypted-keys/encrypted.c  |  8 +--
 security/keys/key.c   | 16 +++---
 17 files changed, 195 insertions(+), 47 deletions(-)

-- 
2.13.0



Re: [PATCH RFC v2 0/8] get_random_bytes_wait family of APIs

2017-06-05 Thread Jason A. Donenfeld
As this RFC series matures, all the changes are in this branch here, to look at:

https://git.zx2c4.com/linux-dev/log/?h=jd/rng-blocker

Ted -- there's one, in particular, that should probably be picked up
regardless of the rest, and that's "random: invalidate batched entropy
after crng init". Hopefully though, we can develop that in relation
with the rest and make this a proper series.

Anyway, awaiting your feedback on these RFC series, if you'd like to
help me help the Linux RNG.

Jason

