[PATCH -v2] random: mix rdrand with entropy sent in from userspace

2018-07-17 Thread Theodore Ts'o
Fedora has integrated the jitter entropy daemon to work around slow
boot problems, especially on VMs that don't support virtio-rng:

https://bugzilla.redhat.com/show_bug.cgi?id=1572944

It's understandable why they did this, but the Jitter entropy daemon
works fundamentally on the principle: "the CPU microarchitecture is
**so** complicated that we can't figure it out, so it *must* be
random".  Yes, it uses statistical tests to "prove" it is secure, but
AES_ENCRYPT(NSA_KEY, COUNTER++) will also pass statistical tests with
flying colors.
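
To see why, consider an illustration (not part of the patch; it
assumes OpenSSL's EVP API and builds with -lcrypto): a deterministic
AES-CTR keystream under an attacker-known key is fully predictable,
yet it passes a crude statistical test exactly as well as real entropy
would.

/*
 * Hypothetical sketch: AES_ENCRYPT(NSA_KEY, COUNTER++) as a "random"
 * stream.  Encrypting zeros in CTR mode yields the raw keystream.
 */
#include <openssl/evp.h>
#include <stdio.h>

int main(void)
{
        unsigned char key[16] = { 0 };          /* stand-in for NSA_KEY */
        unsigned char ctr[16] = { 0 };          /* COUNTER starts at 0 */
        unsigned char zeros[4096] = { 0 }, stream[4096];
        int outlen, ones = 0, i;
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

        if (!ctx)
                return 1;
        EVP_EncryptInit_ex(ctx, EVP_aes_128_ctr(), NULL, key, ctr);
        EVP_EncryptUpdate(ctx, stream, &outlen, zeros, sizeof(zeros));
        EVP_CIPHER_CTX_free(ctx);

        /* Crude monobit test: a good RNG gives ~50% ones; so does this
         * fully attacker-predictable stream. */
        for (i = 0; i < outlen * 8; i++)
                ones += (stream[i / 8] >> (i % 8)) & 1;
        printf("%d of %d bits set\n", ones, outlen * 8);
        return 0;
}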

So if RDRAND is available, mix it into entropy submitted from
userspace.  It can't hurt, and if you believe the NSA has backdoored
RDRAND, then they probably have enough details about the Intel
microarchitecture that they can reverse engineer how the Jitter
entropy daemon affects the microarchitecture, and attack its output
stream.  And if RDRAND is in fact an honest DRNG, it will immeasurably
improve on what the Jitter entropy daemon might produce.
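
For reference, a userspace sketch of the failure mode being handled
(illustration only; x86 with RDRAND, compiled with -mrdrnd): the
RDRAND instruction can transiently fail, which is why the return value
of arch_get_random_int() has to be checked; see the v2 changelog
below.

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
        unsigned int t;

        /* _rdrand32_step() returns 1 on success, 0 if the DRNG had no
         * data ready; the result must not be used unchecked. */
        if (_rdrand32_step(&t))
                printf("rdrand: %08x\n", t);
        else
                printf("rdrand failed; fall back to other sources\n");
        return 0;
}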

This also provides some protection against someone who is able to read
or set the entropy seed file.

Signed-off-by: Theodore Ts'o 
Cc: stable@vger.kernel.org
Cc: Arnd Bergmann 
---

Changes in v2:
 - Fix silly typo that Arnd pointed out in checking the return value
   of arch_get_random_int()
 - Break out of the loop after the first failure reported by
   arch_get_random_int()

 drivers/char/random.c | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index cd888d4ee605..bd449ad52442 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1895,14 +1895,22 @@ static int
 write_pool(struct entropy_store *r, const char __user *buffer, size_t count)
 {
        size_t bytes;
-       __u32 buf[16];
+       __u32 t, buf[16];
        const char __user *p = buffer;
 
        while (count > 0) {
+               int b, i = 0;
+
                bytes = min(count, sizeof(buf));
                if (copy_from_user(&buf, p, bytes))
                        return -EFAULT;
 
+               for (b = bytes ; b > 0 ; b -= sizeof(__u32), i++) {
+                       if (!arch_get_random_int(&t))
+                               break;
+                       buf[i] ^= t;
+               }
+
                count -= bytes;
                p += bytes;
 
-- 
2.18.0.rc0
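
For context on when the patched path runs: write_pool() is reached
when a process writes to /dev/random or /dev/urandom, e.g. a boot
script restoring a saved seed.  A minimal userspace sketch (the seed
file path below is hypothetical; distributions differ):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        char seed[512];
        ssize_t n;
        int in = open("/var/lib/random-seed", O_RDONLY); /* hypothetical path */
        int out = open("/dev/random", O_WRONLY);

        if (in < 0 || out < 0) {
                perror("open");
                return 1;
        }
        n = read(in, seed, sizeof(seed));
        /* The written bytes land in write_pool(); with this patch the
         * kernel XORs RDRAND output into them before mixing.  A plain
         * write mixes data in but credits no entropy. */
        if (n > 0 && write(out, seed, n) != n)
                perror("write");
        close(in);
        close(out);
        return 0;
}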



[PATCH] crypto: arm64/sha256 - increase cra_priority of scalar implementations

2018-07-17 Thread Eric Biggers
From: Eric Biggers 

Commit b73b7ac0a774 ("crypto: sha256_generic - add cra_priority") gave
sha256-generic and sha224-generic a cra_priority of 100, to match the
convention for generic implementations.  But sha256-arm64 and
sha224-arm64 also have priority 100, so their order relative to the
generic implementations became ambiguous.

Therefore, increase their priority to 125 so that they have higher
priority than the generic implementations but lower priority than the
NEON implementations, which have priority 150.
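
To illustrate the effect (a sketch in kernel-module context, not part
of the patch): allocating a transform by the generic name "sha256"
selects the registered implementation with the highest cra_priority.

#include <crypto/hash.h>
#include <linux/err.h>
#include <linux/printk.h>

static int sha256_pick_demo(void)
{
        struct crypto_shash *tfm = crypto_alloc_shash("sha256", 0, 0);

        if (IS_ERR(tfm))
                return PTR_ERR(tfm);
        /* After this patch: sha256-neon (150) when NEON is usable,
         * then sha256-arm64 (125), then sha256-generic (100). */
        pr_info("sha256 resolved to %s\n",
                crypto_tfm_alg_driver_name(crypto_shash_tfm(tfm)));
        crypto_free_shash(tfm);
        return 0;
}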

Signed-off-by: Eric Biggers 
---
 arch/arm64/crypto/sha256-glue.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/crypto/sha256-glue.c b/arch/arm64/crypto/sha256-glue.c
index f1b4f4420ca1..4aedeaefd61f 100644
--- a/arch/arm64/crypto/sha256-glue.c
+++ b/arch/arm64/crypto/sha256-glue.c
@@ -67,7 +67,7 @@ static struct shash_alg algs[] = { {
.descsize   = sizeof(struct sha256_state),
.base.cra_name  = "sha256",
.base.cra_driver_name   = "sha256-arm64",
-   .base.cra_priority  = 100,
+   .base.cra_priority  = 125,
.base.cra_blocksize = SHA256_BLOCK_SIZE,
.base.cra_module= THIS_MODULE,
 }, {
@@ -79,7 +79,7 @@ static struct shash_alg algs[] = { {
.descsize   = sizeof(struct sha256_state),
.base.cra_name  = "sha224",
.base.cra_driver_name   = "sha224-arm64",
-   .base.cra_priority  = 100,
+   .base.cra_priority  = 125,
.base.cra_blocksize = SHA224_BLOCK_SIZE,
.base.cra_module= THIS_MODULE,
 } };
-- 
2.18.0.203.gfac676dfb9-goog