Re: [PATCH, RFC] random: introduce getrandom(2) system call

2014-07-21 Thread Dwayne Litzenberger

On Thu, Jul 17, 2014 at 05:18:15AM -0400, Theodore Ts'o wrote:

SYNOPSIS
#include <linux/random.h>

int getrandom(void *buf, size_t buflen, unsigned int flags);

DESCRIPTION

The system call getrandom() fills the buffer pointed to by buf
with up to buflen random bytes which can be used to seed user
space random number generators (i.e., DRBG's) or for other
cryptographic processes.  It should not be used for Monte Carlo
simulations or for other probabilistic sampling applications.


Aside from poor performance for the offending application, will anything
actually break if an application ignores this warning and makes heavy
use of getrandom(2)?  It would be helpful if the documentation made this
clearer, rather than just saying, don't do that.

As the developer of a userspace crypto library, I can't always prevent
downstream developers from doing silly things, and many developers
simply don't understand different kinds of random numbers, so I prefer
to tell them to just use the kernel CSPRNG by default, and to ask for
help once they run into performance problems.  It's not ideal, but it's
safer than the alternative.[1]


If the GRND_RANDOM flags bit is set, then draw from the
/dev/random pool instead of /dev/urandom pool.  The /dev/random
pool is limited based on the entropy that can be obtained from
environmental noise, so if there is insufficient entropy, the
requested number of bytes may not be returned.  If there is no
entropy available at all, getrandom(2) will either return an
error with errno set to EAGAIN, or block if the GRND_BLOCK flags
bit is set.

If the GRND_RANDOM flags bit is not set, then the /dev/urandom
pool will be used.  Unlike reading from /dev/urandom, if
the urandom pool has not been sufficiently initialized,
getrandom(2) will either return an error with errno set to
EAGAIN, or block if the GRND_BLOCK flags bit is set.

RETURN VALUE
  On success, the number of bytes that was returned is returned.

  On error, -1 is returned, and errno is set appropriately


Hm.  Is it correct that, in blocking mode, the call is guaranteed either
to return -EINVAL immediately, or to block until the buffer is
*completely* populated with buflen bytes?  If so, I think a few small
changes could make this a really nice interface to work with:

* Use blocking mode by default.

* Add a new flag called GRND_PARTIAL (replacing GRND_BLOCK), which
 indicates that the caller is prepared to handle a partial/incomplete
 result.

* On success with GRND_PARTIAL, return the number of bytes that were
 written into buf.  (Or -EAGAIN, as is currently done.)

* If GRND_PARTIAL is *not* set, just return 0 on success.  (This avoids
 all signed-unsigned confusion when buflen > INT_MAX.)

With those changes, it would be trivial for a userspace library to
implement a reliable RNG interface as recommended in [2] or [3]:

/*
* memzap(3)
*
* Fills the buffer pointed to by buf with exactly buflen random bytes
* suitable for cryptographic purposes.  Nothing is returned.
*
* This function is thread-safe, and is safe to call from inside a
* signal handler.
*
* It blocks if the kernel random number generator is not yet fully
* initialized (e.g. early in the boot process), and it may trigger
* abort(3) if invoked on an old kernel version that does not support
* the getrandom(2) system call.
*/
void memzap(void *buf, size_t buflen)
{
    int ret;

    ret = getrandom(buf, buflen, 0);
    if (ret != 0) {
        perror("getrandom(2) failed");
        abort();
    }
}

Best,
- Dwayne

P.S.  If I had my way, I would also drop GRND_RANDOM.  Most software
won't use it, there's no legacy installed base, and no developer who
still wants that behavior can legitimately claim to care about RNG
availability guarantees, IMHO.  Anyone who really wants the old
/dev/random behavior can always just continue to use the existing
character device.  I don't really care enough to put up a fight about
it, though, as long as it doesn't affect the quality of the
non-GRND_RANDOM interface.


[1] On more than one occasion, I've seen developers use Python's
   standard random module to generate IVs.  I mean, why not?  IVs are
   public, right?
[2] http://cr.yp.to/highspeed/coolnacl-20120725.pdf
[3] 
https://groups.google.com/forum/#!msg/randomness-generation/4opmDHA6_3w/__TyKhbnNWsJ

--
Dwayne C. Litzenberger dl...@dlitz.net
OpenPGP: 19E1 1FE8 B3CF F273 ED17  4A24 928C EC13 39C2 5CF7




Re: [PATCH, RFC] random: introduce getrandom(2) system call

2014-07-21 Thread Theodore Ts'o
On Sun, Jul 20, 2014 at 05:25:40PM -0700, Dwayne Litzenberger wrote:
 
 This could still return predictable bytes during early boot, though, right?

Read the suggested man page; especially, the version of the man page
in the commit description of the -v4 version of the patch.

getrandom(2) is a new interface, so I can afford to do things differently
from reading from /dev/urandom.  Specifically, it will block until it
is fully initialized, and in GRND_NONBLOCK mode, it will return
EAGAIN.

There have been some people kvetching and whining that this is less
convenient for seeding srand(3), but for non-crypto uses, using
getpid() and time() to seed random(3) or rand(3) is just fine, and if
they really want, they can open /dev/urandom.
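For that non-crypto case, the seeding is a one-liner; a sketch (not part
of the patch), fine only where predictability doesn't matter:

```c
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

/* Trivial seed for rand(3)/random(3): pid ^ time gives each run a
 * different but guessable sequence -- exactly what non-crypto callers
 * need, with no reason to touch getrandom(2). */
unsigned int trivial_seed(void)
{
	return (unsigned int)getpid() ^ (unsigned int)time(NULL);
}
```

e.g. `srand(trivial_seed());` before the first call to rand(3).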

  The system call getrandom() fills the buffer pointed to by buf
  with up to buflen random bytes which can be used to seed user
  space random number generators (i.e., DRBG's) or for other
  cryptographic processes.  It should not be used for Monte Carlo
  simulations or for other probabilistic sampling applications.
 
 Aside from poor performance for the offending application, will anything
 actually break if an application ignores this warning and makes heavy
 use of getrandom(2)?  

It will be slow, and then the graduate student will whine and complain
and send a bug report.  It will cause urandom to pull more heavily on
entropy, and if that means that you are using some kind of hardware
random generator on a laptop, such as tpm-rng, you will burn more
battery, but no, it will not break.  This is why the man page says
SHOULD not, and not MUST not.   :-)


 As the developer of a userspace crypto library, I can't always prevent
 downstream developers from doing silly things, and many developers
 simply don't understand different kinds of random numbers, so I prefer
 to tell them to just use the kernel CSPRNG by default, and to ask for
 help once they run into performance problems.  It's not ideal, but it's
 safer than the alternative.[1]

Yes, but the point is that Monte Carlo simulations don't need any kind
of crypto guarantees.

 Hm.  Is it correct that, in blocking mode, the call is guaranteed either
 to return -EINVAL immediately, or to block until the buffer is
 *completely* populated with buflen bytes?  If so, I think a few small
 changes could make this a really nice interface to work with:
 
 * Use blocking mode by default.

Read the -v4 version of the patch.  Blocking is now the default.

 * Add a new flag called GRND_PARTIAL (replacing GRND_BLOCK), which
  indicates that the caller is prepared to handle a partial/incomplete
  result.

This is not needed if you are using the preferred use of flags == 0,
and are extracting a sane amount of entropy.  For values of buflen <=
256 bytes, once the urandom pool is initialized, getrandom(buf, buflen, 0)
will not block and will always return the amount of entropy that you
asked for.

But if the user asks for INT_MAX bytes, getrandom(2) must be
interruptible, or else you will end up burning CPU time for a long,
LONG, LONG time.  The choice was to either pick some arbitrary
limit, such as 256, and then return EIO, which is what OpenBSD did.  I
decided to simply allow getrandom(2) to be interruptible if buflen >
256.

Similarly, if you use GRND_RANDOM, you are also implicitly agreeing
for what you are calling GRND_PARTIAL semantics, because otherwise,
you could end up blocking for a very long time, with no way to
interrupt a buggy program.

I'd much rather keep things simple, and not add too many extra flags,
especially when certain combinations of flags result in a really
unfriendly / insane result.

 * If GRND_PARTIAL is *not* set, just return 0 on success.  (This avoids
  all signed-unsigned confusion when buflen > INT_MAX.)

We simply cap any requests for buflen > INT_MAX, and in practice, it
would take so long to generate the requested number of bytes that the
user would want to interrupt the process anyway.

I considered adding a printk to shame any application writer that
tried to ask for more than 256 bytes, but ultimately decided that was
a bit over the top.  But in any case, any request anywhere near
INT_MAX bytes is really not anything I'm concerned about.  If we do
start seeing evidence for that, I might reconsider and add some kind
of "Warning!  The author of [current->comm] is asking for insane
amounts of entropy" printk.

 With those changes, it would be trivial for a userspace library to
 implement a reliable RNG interface as recommended in [2] or [3]:

The userspace library should ideally be integrated into glibc, so it is
fork- and thread-aware, and it should use a proper CRNG, much like
OpenBSD's arc4random(3).  There's been a proposal submitted to the
Austin Group for future standardization in POSIX, and I'm all in favor
of something like that.

 P.S.  If I had my way, I would also drop GRND_RANDOM.  Most software
 won't use it, there's no legacy installed base, and no developer who
 still wants that behavior can 

Re: [PATCH 0/9] crypto: caam - Add RTA descriptor creation library

2014-07-21 Thread Horia Geantă

On 7/19/2014 4:23 AM, Kim Phillips wrote:

On Sat, 19 Jul 2014 02:51:30 +0300
Horia Geantă horia.gea...@freescale.com wrote:


On 7/19/2014 1:13 AM, Kim Phillips wrote:

On Fri, 18 Jul 2014 19:37:17 +0300
Horia Geanta horia.gea...@freescale.com wrote:


This patch set adds Run Time Assembler (RTA) SEC descriptor library.

The main reason for replacing the incumbent inline append is
to have a single code base both for user space and kernel space.


that's orthogonal to what this patch series is doing from the kernel
maintainer's perspective:  it's polluting the driver with a
CodingStyle-violating (see, e.g., Chapter 12) 6000+ lines of code -


Regarding coding style - AFAICT that's basically:
ERROR: Macros with complex values should be enclosed in parenthesis
and I am willing to find a different approach.


There's that, too.


which can only mean it's slower and more susceptible to bugs - and
AFAICT for no superior technical advantage:  NACK from me.


The fact that the code size is bigger isn't necessarily a bad thing:
1-code is better documented - cloc reports ~ 1000 more lines of
comments; patch 09 even adds support for generating a docbook
2-pure code (i.e. no comments or whitespace) - cloc reports ~ 5000 more
lines; this reflects two things, AFAICT:
2.1-more features: options (e.g. new SEC instructions, little endian
env. support), platform support includes Era 7 and Era 8, i.e.
Layerscape LS1 and LS2; this is important to note, since plans are to
run the very same CAAM driver on ARM platforms


um, *those* features should not cost any driver *that many* lines of
code!


You are invited to comment on the code at hand. I am pretty sure it's
not perfect.




2.2-more error-checking - from this perspective, I'd say the driver is
less susceptible to bugs, especially subtle ones in CAAM descriptors
that are hard to identify / debug; RTA will complain when generating
descriptors using features (say a new bit in an instruction opcode) that
are not supported by the SEC on the device


?  The hardware does the error checking.  This just tells me RTA is
slow, inflexible, and requires unnecessary maintenance by design:
all the more reason to keep it out of the kernel :)


This is just like saying a toolchain shouldn't perform any checks and
should let the user generate invalid machine code and deal with HW errors.

Besides this, there are (quite a few) cases when the SEC won't emit any
error, but the results are still different than expected.
The SEC HW is complex enough to deserve descriptor error checking.




RTA currently runs on:
-QorIQ platforms - userspace (USDPAA)
-Layerscape platforms - AIOP accelerator
(obviously, plans are to run also on QorIQ/PowerPC and LS/ARM kernels)


inline append runs elsewhere, too, but I don't see how this is
related to the subject I'm bringing up.


This is relevant, since having a piece of SW running in multiple
environments should lead to better testing, more exposure, and finding
bugs faster.
inline append *could run* elsewhere, but it doesn't AFAICT. Last time
I checked, USDPAA and AIOP use RTA.




Combined with:
-comprehensive unit testing suite
-RTA kernel port is bit-exact with inline append in terms of SEC
descriptor hex dumps; besides this, it was tested with tcrypt and in
IPsec scenarios
I would say that RTA is tested more than inline append. In the end, this
is a side effect of having a single code base.


inline append has been proven stable for years now.  RTA just adds
redundant code and violates CodingStyle AFAICT.


New platform support is not redundant.
Error checking is not redundant, as explained above.
kernel-doc is always helpful.
Coding Style can be fixed.

Thanks,
Horia


--
To unsubscribe from this list: send the line unsubscribe linux-crypto in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH, RFC] random: introduce getrandom(2) system call

2014-07-21 Thread George Spelvin
 I don't like partial reads/writes and think that a lot of people get
 them wrong, because they often only check for negative return values.

The v1 patch, which did it right IMHO, didn't do partial reads in the
case we're talking about:

+   if (count > 256)
+   return -EINVAL;

 In case of urandom extraction, I wouldn't actually limit the number of
 bytes. A lot of applications I have seen already extract more than 128
 out of urandom (not for seeding a prng but just to mess around with some
 memory). I don't see a reason why getrandom shouldn't be used for that.
 It just adds one more thing to look out for if using getrandom() in
 urandom mode, especially during porting an application over to this new
 interface.

Again, I disagree.  If it's just messing around code, use /dev/urandom.
It's more portable and you don't care about the fd exhaustion attacks.

If it's actual code to be used in anger, fix it to not abuse /dev/urandom.

You're right that a quick hack might be broken on purpose, but without
exception, *all* code that I have seen which reads 64 or more bytes from
/dev/*random is broken, and highlighting the brokenness is a highly
desirable thing.

The sole and exclusive reason for this syscall to exist at all is to
solve a security problem.  Supporting broken security code does no favors
to anyone.


[PATCH] crypto: caam - fix DECO RSR polling

2014-07-21 Thread Horia Geanta
RSR (Request Source Register) is not used when
virtualization is disabled, thus don't poll for its Valid bit.

Besides this, if RSR is used, the timeout has to be reinitialized.

Signed-off-by: Horia Geanta horia.gea...@freescale.com
---
Only compile-tested.
Ruchika / Kim, please review / test.

 drivers/crypto/caam/ctrl.c | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
index c6e9d3b2d502..84d4b95c761e 100644
--- a/drivers/crypto/caam/ctrl.c
+++ b/drivers/crypto/caam/ctrl.c
@@ -89,12 +89,15 @@ static inline int run_descriptor_deco0(struct device 
*ctrldev, u32 *desc,
/* Set the bit to request direct access to DECO0 */
 	topregs = (struct caam_full __iomem *)ctrlpriv->ctrl;
 
-	if (ctrlpriv->virt_en == 1)
+	if (ctrlpriv->virt_en == 1) {
 		setbits32(&topregs->ctrl.deco_rsr, DECORSR_JR0);
 
-	while (!(rd_reg32(&topregs->ctrl.deco_rsr) & DECORSR_VALID) &&
-	       --timeout)
-		cpu_relax();
+		while (!(rd_reg32(&topregs->ctrl.deco_rsr) & DECORSR_VALID) &&
+		       --timeout)
+			cpu_relax();
+
+		timeout = 10;
+	}
 
 	setbits32(&topregs->ctrl.deco_rq, DECORR_RQD0ENABLE);
 
-- 
1.8.3.1



Re: [PATCH 0/9] crypto: caam - Add RTA descriptor creation library

2014-07-21 Thread Kim Phillips
On Mon, 21 Jul 2014 10:47:49 +0300
Horia Geantă horia.gea...@freescale.com wrote:

 On 7/19/2014 4:23 AM, Kim Phillips wrote:
  On Sat, 19 Jul 2014 02:51:30 +0300
  Horia Geantă horia.gea...@freescale.com wrote:
 
  On 7/19/2014 1:13 AM, Kim Phillips wrote:
  On Fri, 18 Jul 2014 19:37:17 +0300
  Horia Geanta horia.gea...@freescale.com wrote:
 
  This patch set adds Run Time Assembler (RTA) SEC descriptor library.
 
  which can only mean it's slower and more susceptible to bugs - and
  AFAICT for no superior technical advantage:  NACK from me.
 
  The fact that the code size is bigger doesn't necessarily mean a bad thing:
  1-code is better documented - cloc reports ~ 1000 more lines of
  comments; patch 09 even adds support for generating a docbook
  2-pure code (i.e. no comments, white spaces) - cloc reports ~ 5000 more
  lines; this reflects two things, AFAICT:
  2.1-more features: options (for e.g. new SEC instructions, little endian
  env. support), platform support includes Era 7 and Era 8, i.e.
  Layerscape LS1 and LS2; this is important to note, since plans are to
  run the very same CAAM driver on ARM platforms
 
  um, *those* features should not cost any driver *that many* lines of
  code!
 
 You are invited to comment on the code at hand. I am pretty sure it's
 not perfect.

I can see RTA is responsible for the code size increase, not the
features.  And just because RTA has - or has plans for - those
features doesn't justify the kernel driver adopting RTA over
inline-append.

  2.2-more error-checking - from this perspective, I'd say driver is less
  susceptible to bugs, especially subtle ones in CAAM descriptors that are
  hard to identify / debug; RTA will complain when generating descriptors
  using features (say a new bit in an instruction opcode) that are not
  supported on the SEC on device
 
  ?  The hardware does the error checking.  This just tells me RTA is
  slow, inflexible, and requires unnecessary maintenance by design:
  all the more reason to keep it out of the kernel :)
 
 This is just like saying a toolchain isn't performing any checks and
 lets the user generate invalid machine code and deal with HW errors.
 
 Beside this, there are (quite a few) cases when SEC won't emit any
 error, but still the results are different than expected.
 SEC HW is complex enough to deserve descriptor error checking.

if part of RTA's objective is to cater to new SEC programmers, great,
but that doesn't mean it belongs in the crypto API driver, with its
limited input parameter set and fixed-descriptor operating environment:
it's not the place to host an entire SEC toolchain.

  RTA currently runs on:
  -QorIQ platforms - userspace (USDPAA)
  -Layerscape platforms - AIOP accelerator
  (obviously, plans are to run also on QorIQ/PowerPC and LS/ARM kernels)
 
  inline append runs elsewhere, too, but I don't see how this is
  related to the subject I'm bringing up.
 
 This is relevant, since having a piece of SW running in multiple
 environments should lead to better testing, more exposure, finding bugs
 faster.

that doesn't change the fact that more lines of code to do the same
thing is always going to be a more bug-prone way of doing it.

 inline append *could run* elsewhere, but it doesn't AFAICT. Last time
 I checked, USDPAA and AIOP use RTA.

inline append runs in ASF, and has been available for all upstream
for years.

  Combined with:
  -comprehensive unit testing suite
  -RTA kernel port is bit-exact in terms of SEC descriptors hex dumps with
  inline append; besides this, it was tested with tcrypt and in IPsec
  scenarios
  I would say that RTA is tested more than inline append. In the end, this
  is a side effect of having a single code base.
 
  inline append has been proven stable for years now.  RTA just adds
  redundant code and violates CodingStyle AFAICT.
 
 New platform support is not redundant.

No, RTA is.

 Error checking is not redundant, as explained above.

It is: the kernel has fixed descriptors.

 kernel-doc is always helpful.

it doesn't matter how much you decorate it.

 Coding Style can be fixed.

inline append isn't broken.

Kim


Re: [PATCH, RFC] random: introduce getrandom(2) system call

2014-07-21 Thread Hannes Frederic Sowa
On Mo, 2014-07-21 at 07:21 -0400, George Spelvin wrote:
  I don't like partial reads/writes and think that a lot of people get
  them wrong, because they often only check for negative return values.
 
 The v1 patch, which did it right IMHO, didn't do partial reads in the
 case we're talking about:
 
 + if (count > 256)
 + return -EINVAL;

I checked and unfortunately it does not; 256 bytes is way too large to
guarantee the non-existence of partial reads. On a typical older system
without rdrand/rdseed you would have to limit the number of bytes to
extract to about 32. That's way too low. That said, a 512-byte check
applied only when extracting bytes from the blocking pool would serve
no purpose.

  In case of urandom extraction, I wouldn't actually limit the number of
  bytes. A lot of applications I have seen already extract more than 128
  out of urandom (not for seeding a prng but just to mess around with some
  memory). I don't see a reason why getrandom shouldn't be used for that.
  It just adds one more thing to look out for if using getrandom() in
  urandom mode, especially during porting an application over to this new
  interface.
 
 Again, I disagree.  If it's just messing around code, use /dev/urandom.
 It's more portable and you don't care about the fd exhaustion attacks.
 
 If it's actual code to be used in anger, fix it to not abuse /dev/urandom.
 
 You're right that a quick hack might be broken on purpose, but without
 exception, *all* code that I have seen which reads 64 or more bytes from
 /dev/*random is broken, and highlighting the brokenness is a highly
 desirable thing.
 
 The sole and exclusive reason for this syscall to exist at all is to
 solve a security problem.  Supporting broken security code does no favors
 to anyone.

Then let's agree to disagree. :)

I think it is dangerous if applications get ported to this new
interface without checking the size limitations, and only notice the
problem after they have been rolled out (if at all).
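The porting hazard looks like this; a hypothetical caller, sketched
against the getrandom() wrapper a libc could provide (not something in
this patch):

```c
#include <errno.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/random.h>	/* glibc wrapper for getrandom(2) */

/* Returns 0 only when buf was completely filled.  A port that merely
 * checks for r < 0 -- the common pattern -- silently accepts a short
 * read and leaves the tail of buf unrandomized. */
int fill_or_fail(void *buf, size_t len)
{
	ssize_t r = getrandom(buf, len, 0);

	if (r < 0)
		return -1;	/* real error, errno is set */
	if ((size_t)r != len)
		return -1;	/* the short read most callers miss */
	return 0;
}
```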

Bye,
Hannes




Re: [PATCH -v4] random: introduce getrandom(2) system call

2014-07-21 Thread Till Smejkal
Hi,

On Fri, 18 Jul 2014, Theodore Ts'o wrote:
[...]
   If the GRND_RANDOM bit is not set, then the /dev/urandom pool
   will be used.  Unlike using read(2) to fetch data from
   /dev/urandom, if the urandom pool has not been sufficiently
   initialized, getrandom(2) will block or return -1 with the
   errno set to EGAIN if the GRND_NONBLOCK bit is set in flags.
 ^
Small typo: this should be EAGAIN.

[...]

Regards

Till Smejkal


Re: [PATCH -v4] random: introduce getrandom(2) system call

2014-07-21 Thread Theodore Ts'o
On Mon, Jul 21, 2014 at 10:21:26PM +0200, Till Smejkal wrote:
 Hi,
 
 On Fri, 18 Jul 2014, Theodore Ts'o wrote:
 [...]
  If the GRND_RANDOM bit is not set, then the /dev/urandom pool
  will be used.  Unlike using read(2) to fetch data from
  /dev/urandom, if the urandom pool has not been sufficiently
  initialized, getrandom(2) will block or return -1 with the
  errno set to EGAIN if the GRND_NONBLOCK bit is set in flags.
  ^
 Small typo: this should be EAGAIN.

Thanks for pointing this out!  Will fix.

- Ted


Re: [PATCH, RFC] random: introduce getrandom(2) system call

2014-07-21 Thread Hannes Frederic Sowa
On Mo, 2014-07-21 at 17:27 +0200, Hannes Frederic Sowa wrote:
 On Mo, 2014-07-21 at 07:21 -0400, George Spelvin wrote:
   I don't like partial reads/writes and think that a lot of people get
   them wrong, because they often only check for negative return values.
  
  The v1 patch, which did it right IMHO, didn't do partial reads in the
  case we're talking about:
  
  +   if (count > 256)
  +   return -EINVAL;
 
 I checked and unfortunately it does not; 256 bytes is way too large to
 guarantee the non-existence of partial reads. On a typical older system
 without rdrand/rdseed you would have to limit the number of bytes to
 extract to about 32. That's way too low. That said, a 512-byte check
 applied only when extracting bytes from the blocking pool would serve
 no purpose.

Ted, would it make sense to specify a 512-byte upper bound for
random entropy extraction (I am not yet convinced to do that for
urandom), and in the case where the syscall blocks, make sure that we
extract the number of bytes the user requested?

Having a quick look, maybe something like this? Only compile-tested, and
maybe it can still be simplified. It guarantees we don't do a partial
write to user space for sub-512-byte requests.

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 2721543..c0db6f5 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1345,8 +1345,14 @@ static int arch_random_refill(void)
return n;
 }
 
+enum blocking_mode {
+   NONBLOCKING,
+   SYSCALL_BLOCK,
+   DEV_RANDOM_BLOCK
+};
+
 static ssize_t
-_random_read(int nonblock, char __user *buf, size_t nbytes)
+_random_read(enum blocking_mode mode, char __user *buf, size_t nbytes)
 {
ssize_t n;
 
@@ -1361,7 +1367,7 @@ _random_read(int nonblock, char __user *buf, size_t 
nbytes)
trace_random_read(n*8, (nbytes-n)*8,
  ENTROPY_BITS(blocking_pool),
  ENTROPY_BITS(input_pool));
-	if (n > 0)
+	if (mode != SYSCALL_BLOCK && n > 0)
return n;
 
/* Pool is (near) empty.  Maybe wait and retry. */
@@ -1370,7 +1376,7 @@ _random_read(int nonblock, char __user *buf, size_t 
nbytes)
if (arch_random_refill())
continue;
 
-   if (nonblock)
+   if (mode == NONBLOCKING)
return -EAGAIN;
 
wait_event_interruptible(random_read_wait,
@@ -1384,7 +1390,8 @@ _random_read(int nonblock, char __user *buf, size_t 
nbytes)
 static ssize_t
 random_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
 {
-	return _random_read(file->f_flags & O_NONBLOCK, buf, nbytes);
+	return _random_read(file->f_flags & O_NONBLOCK ? NONBLOCKING :
+			    DEV_RANDOM_BLOCK, buf, nbytes);
 }
 
 static ssize_t
@@ -1534,8 +1541,6 @@ const struct file_operations urandom_fops = {
 SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count,
unsigned int, flags)
 {
-   int r;
-
	if (flags & ~(GRND_NONBLOCK|GRND_RANDOM))
return -EINVAL;
 
@@ -1543,7 +1548,8 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, 
count,
count = INT_MAX;
 
	if (flags & GRND_RANDOM)
-		return _random_read(flags & GRND_NONBLOCK, buf, count);
+		return _random_read(flags & GRND_NONBLOCK ? NONBLOCKING :
+				    SYSCALL_BLOCK, buf, count);
 
if (unlikely(nonblocking_pool.initialized == 0)) {
		if (flags & GRND_NONBLOCK)

Bye,
Hannes




Re: [PATCH, RFC] random: introduce getrandom(2) system call

2014-07-21 Thread Theodore Ts'o
On Tue, Jul 22, 2014 at 03:02:20AM +0200, Hannes Frederic Sowa wrote:
 
 Ted, would it make sense to specify a 512 byte upper bound limit for
 random entropy extraction (I am not yet convinced to do that for
 urandom) and in case the syscall should block we make sure that we
 extract the amount of bytes the user requested?

On some systems, it's still possible that with a large /dev/random
extraction, you could end up blocking for hours.  So either the
getrandom(2) syscall needs to be uninterruptible, which has one set of
problems (and not just the user typing ^C, but also things like being
able to process alarms, which is highly problematic indeed), or you
need to allow it to be interruptible by a signal, in which case
userspace needs to check the error return for things like EINTR
anyway.  And if you have to check the error return, you might as well
check the number of bytes returned.
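In other words, a careful caller ends up with a retry loop anyway; a
sketch, using the getrandom() wrapper glibc later grew (an assumption
here -- the raw syscall(2) form works the same way):

```c
#include <errno.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/random.h>	/* glibc wrapper for getrandom(2) */

/* Fill buf completely, restarting after EINTR; returns 0 on success,
 * -1 on any other error (with errno set). */
int getrandom_full(void *buf, size_t len)
{
	unsigned char *p = buf;

	while (len > 0) {
		ssize_t r = getrandom(p, len, 0);

		if (r < 0) {
			if (errno == EINTR)
				continue;	/* interrupted: retry */
			return -1;
		}
		p += r;
		len -= (size_t)r;
	}
	return 0;
}
```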

Yes, one could in theory set up a new variant of uninterruptible
signals that only exited if the signal caused the process to exit, and
otherwise forced a system call restart even if SA_RESTART was not
set in the sigaction, but that adds *way* more complexity than this
deserves.

Basically, I view /dev/random as an advanced call, and someone who
uses it should know what they are doing.  It's not the default for a
reason.

- Ted