Re: [PATCH] random: add blocking facility to urandom

2011-09-13 Thread Peter Zijlstra
On Mon, 2011-09-12 at 09:56 -0400, Jarod Wilson wrote:
 Thomas Gleixner wrote:

  Well, there is enough proof out there that the hardware you're using
  is a perfect random number generator by itself.
 
  So stop complaining about not having access to TPM chips if you can
  create an entropy source just by (ab)using the inherent randomness of
  modern CPU architectures to refill your entropy pool on the fly when
  the need arises w/o imposing completely unintuitive thresholds and
  user visible API changes.
 
 We started out going down that path:
 
 http://www.mail-archive.com/linux-crypto@vger.kernel.org/msg05778.html
 
 We hit a bit of a roadblock with it though.

Have you guys seen this work:

  http://lwn.net/images/conf/rtlws11/random-hardware.pdf


--
To unsubscribe from this list: send the line unsubscribe linux-crypto in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] random: add blocking facility to urandom

2011-09-13 Thread Jarod Wilson

Peter Zijlstra wrote:

On Mon, 2011-09-12 at 09:56 -0400, Jarod Wilson wrote:

Thomas Gleixner wrote:



Well, there is enough proof out there that the hardware you're using
is a perfect random number generator by itself.

So stop complaining about not having access to TPM chips if you can
create an entropy source just by (ab)using the inherent randomness of
modern CPU architectures to refill your entropy pool on the fly when
the need arises w/o imposing completely unintuitive thresholds and
user visible API changes.

We started out going down that path:

http://www.mail-archive.com/linux-crypto@vger.kernel.org/msg05778.html

We hit a bit of a roadblock with it though.


Have you guys seen this work:

   http://lwn.net/images/conf/rtlws11/random-hardware.pdf


Yeah, that was part of the initial inspiration for the prior approach. 
There were still concerns that clock entropy didn't meet the random 
entropy pool's perfect security design goal. Without a rewrite of the 
entropy accounting system, clock entropy isn't going in, so I think 
looking into said rewrite is up next on my list.


--
Jarod Wilson
ja...@redhat.com




Re: [PATCH] random: add blocking facility to urandom

2011-09-12 Thread Jarod Wilson

valdis.kletni...@vt.edu wrote:

On Fri, 09 Sep 2011 10:21:13 +0800, Sandy Harris said:

Barring a complete failure of SHA-1, an enemy who wants to
infer the state from outputs needs astronomically large amounts
of both data and effort.


So let me get this straight - the movie-plot attack we're defending against is
somebody reading literally gigabytes to terabytes (though I suspect realistic
attacks will require peta/exabytes) of data from /dev/urandom, then performing
all the data reduction needed to infer the state of enough of the entropy pool
to infer all 160 bits of SHA-1 when only 80 bits are output...

*and* doing it all without taking *any* action that adds any entropy to the
pool, and *also* ensuring that no other programs add any entropy via their
actions before the reading and data reduction completes. (Hint - if the
attacker can do this, you're already pwned and have bigger problems)

/me thinks Red Hat needs to start insisting on random drug testing for
their security experts at BSI.  Either that, or force BSI to share the
really good stuff they've been smoking, or they need to learn how huge
a number 2^160 *really* is.


Well, previously, we were looking at simply improving random entropy 
contributions, but quoting Matt Mackall from here:


http://www.mail-archive.com/linux-crypto@vger.kernel.org/msg05799.html

'I recommend you do some Google searches for ssl timing attack and 
aes timing attack to get a feel for the kind of seemingly impossible 
things that can be done and thereby recalibrate your scale of the 
impossible.'


:)

Note: I'm not a crypto person. At all. I'm just the lucky guy who got 
tagged to work on trying to implement various suggestions to satisfy 
various government agencies.


--
Jarod Wilson
ja...@redhat.com




Re: [PATCH] random: add blocking facility to urandom

2011-09-12 Thread Jarod Wilson

Thomas Gleixner wrote:

On Fri, 9 Sep 2011, Steve Grubb wrote:

But what I was trying to say is that we can't depend on these supplemental
hardware devices like TPM because we don't have access to the proprietary
technical details that would be necessary to supplement the analysis. And
when it comes to TPM chips, I bet each chip has different details and entropy
sources and entropy estimations and rates. Those details we can't get at, so
we can't solve the problem by including that hardware. That is the point I
was trying to make. :)


Well, there is enough proof out there that the hardware you're using
is a perfect random number generator by itself.

So stop complaining about not having access to TPM chips if you can
create an entropy source just by (ab)using the inherent randomness of
modern CPU architectures to refill your entropy pool on the fly when
the need arises w/o imposing completely unintuitive thresholds and
user visible API changes.


We started out going down that path:

http://www.mail-archive.com/linux-crypto@vger.kernel.org/msg05778.html

We hit a bit of a roadblock with it though.

--
Jarod Wilson
ja...@redhat.com




Re: [PATCH] random: add blocking facility to urandom

2011-09-12 Thread Mark Brown
On Mon, Sep 12, 2011 at 10:02:43AM -0400, Jarod Wilson wrote:
 Ted Ts'o wrote:

 Yeah, but there are userspace programs that depend on urandom not
 blocking... so your proposed change would break them.

 I'm already resigned to the fact this isn't going to fly, but I'm
 still curious to know examples of programs that are going to break
 here, for my own education. It's already possible for urandom reads
 to fail as the code is now (-ERESTARTSYS and -EFAULT are possible),
 so a sane program ought to already be handling error cases, though
 not -EAGAIN, which this would add.

It's not just a question of error handling existing, it's also about the
expectations the system has for the behaviour of the file - if urandom
is expected to always be able to return data an application is likely to
rely on the fact that it's effectively non-blocking anyway and not bother
setting non-blocking mode at all and so have no graceful handling for
this.
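To make the point concrete, here is a rough sketch (in Python, purely illustrative and not from the patch under discussion) of the defensive read loop an application would need if urandom could start returning -EAGAIN. Today almost nobody writes urandom reads this way, which is exactly the compatibility concern:

```python
import os
import select

def read_urandom(nbytes, path="/dev/urandom"):
    """Read nbytes, tolerating EINTR and the EAGAIN the patch would add.

    Today's /dev/urandom never raises BlockingIOError, so the EAGAIN
    branch below is speculative - existing programs have no reason to
    contain it, and that is why changing the device's semantics breaks
    expectations rather than just error handling.
    """
    fd = os.open(path, os.O_RDONLY | os.O_NONBLOCK)
    try:
        buf = b""
        while len(buf) < nbytes:
            try:
                buf += os.read(fd, nbytes - len(buf))
            except InterruptedError:       # EINTR / restarted syscall
                continue
            except BlockingIOError:        # EAGAIN under the proposed patch
                select.select([fd], [], [])  # wait until readable again
        return buf
    finally:
        os.close(fd)
```

On current kernels this returns immediately; under the proposed behaviour the select() call is where an unprepared application would instead have failed outright.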


Re: [PATCH] random: add blocking facility to urandom

2011-09-12 Thread Jarod Wilson

valdis.kletni...@vt.edu wrote:

On Mon, 12 Sep 2011 09:55:15 EDT, Jarod Wilson said:


Well, previously, we were looking at simply improving random entropy
contributions, but quoting Matt Mackall from here:

http://www.mail-archive.com/linux-crypto@vger.kernel.org/msg05799.html

'I recommend you do some Google searches for ssl timing attack and
aes timing attack to get a feel for the kind of seemingly impossible
things that can be done and thereby recalibrate your scale of the
impossible.'


If you're referring to Dan Bernstein's 2005 paper on AES timing attacks
(http://cr.yp.to/antiforgery/cachetiming-20050414.pdf), note that it took him
on the order of 2^25 packets per byte of AES key - targeting a dummy server
intentionally designed to minimize noise.  Although he correctly notes:
correctly notes:

   Of course, I wrote this server to minimize the amount of noise in the
   timings available to the client. However, adding noise does not stop the
   attack: the client simply averages over a larger number of samples, as in
   [7]. In particular, reducing the precision of the server's timestamps, or
   eliminating them from the server's responses, does not stop the attack:
   the client simply uses round-trip timings based on its local clock, and
   compensates for the increased noise by averaging over a larger number of
   samples.

one has to remember that he's measuring average differences in processing
time on the order of single-digit cycles - if any *real* processing was
happening, it would only take a few cache line misses or an 'if' statement
branching the other way to almost totally drown out the AES computation.
(Personally, I'm amazed that FreeBSD 4.8's kernel is predictable enough to do
these measurements - probably helps a *lot* that the server was otherwise
idle - if somebody else was getting a timeslice in between, it would totally
swamp the numbers.)

Dan's reference [7] mentions specifically that RSA blinding (first implemented
by default all the way back in OpenSSL 0.9.7b) defeats that paper's timing
attack.

If anything, those attacks are the best proof possible that the suggested
fix for /dev/urandom is a fool's errand - why would anybody bother trying to
figure out what the next data out of /dev/urandom is, when they can simply
wait for a few milliseconds and extract it out of whatever program read it? :)
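As a toy illustration of the averaging point quoted above (all numbers here - BASE, LEAK, NOISE_SD - are invented for illustration, not taken from Bernstein's paper): a constant timing difference of one "cycle" buried in noise fifty times larger becomes measurable once enough samples are averaged, because the noise on the mean shrinks as 1/sqrt(n):

```python
import random

random.seed(0)      # deterministic toy run

BASE = 100.0        # hypothetical baseline response time
LEAK = 1.0          # hypothetical key-dependent extra cycle
NOISE_SD = 50.0     # noise dwarfs the leak in any single sample

def one_sample():
    # One observed timing: true signal plus large Gaussian noise.
    return BASE + LEAK + random.gauss(0.0, NOISE_SD)

def averaged(n):
    # Mean of n samples; its noise shrinks as NOISE_SD / sqrt(n).
    return sum(one_sample() for _ in range(n)) / n

# A single sample tells you nothing; a large average pins down the leak.
for n in (1, 100, 100_000):
    print(n, averaged(n))
```

With 100,000 samples the mean sits within a fraction of a cycle of the true value of 101, which is the sense in which "adding noise does not stop the attack".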


I'm not referring to anything in particular; I'm mostly referring to the
irony that one approach was shot down because, while not exactly practical,
it's theoretically not impossible to figure out clock sample entropy
contributions, which might weaken the strength of the entropy pool. Your
argument is more or less directly opposed to the reasoning by which the
clock entropy patches were deemed unacceptable. :)


Something to keep in mind: the whole impetus behind all this is 
*government* crypto certification requirements. They're paranoid. And 
something impractical at the individual level is less so at the 
determined, and willing to spend buckets of cash on resources, hostile 
foreign government level. At least in the minds of some governments.


Note also: I don't necessarily share said governments' sentiments, I'm 
just tasked with trying to satisfy the requirements, and this was looked 
at as a potential minimally-invasive solution. I still think paranoid 
government-types would be fine with applications falling down if urandom 
blocked, because that should *only* happen if the system is being 
abused, but I understand the objections, so that idea is off the table.


I'm likely going to look into Sasha's suggestion to do something via 
CUSE next, followed by taking a long hard look at what's involved in 
rewriting the entropy estimation logic such that clock-based entropy 
would be acceptable.


--
Jarod Wilson
ja...@redhat.com




Re: [PATCH] random: add blocking facility to urandom

2011-09-10 Thread Valdis . Kletnieks
On Fri, 09 Sep 2011 10:21:13 +0800, Sandy Harris said:
 Barring a complete failure of SHA-1, an enemy who wants to
 infer the state from outputs needs astronomically large amounts
 of both data and effort.

So let me get this straight - the movie-plot attack we're defending against is
somebody reading literally gigabytes to terabytes (though I suspect realistic
attacks will require peta/exabytes) of data from /dev/urandom, then performing
all the data reduction needed to infer the state of enough of the entropy pool
to infer all 160 bits of SHA-1 when only 80 bits are output...

*and* doing it all without taking *any* action that adds any entropy to the
pool, and *also* ensuring that no other programs add any entropy via their
actions before the reading and data reduction completes. (Hint - if the
attacker can do this, you're already pwned and have bigger problems)

/me thinks Red Hat needs to start insisting on random drug testing for
their security experts at BSI.  Either that, or force BSI to share the
really good stuff they've been smoking, or they need to learn how huge
a number 2^160 *really* is.
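For a sense of the scale Valdis is gesturing at, a quick back-of-envelope (the 1 GB/s read rate is an arbitrary assumption, chosen only to make the arithmetic concrete):

```python
# 2^160 possible SHA-1 states behind the pool output.
state_space = 2 ** 160
print(len(str(state_space)))    # a 49-digit number

# Even the smaller 2^80 figure (bits claimed visible per output) is
# absurd to collect by reading /dev/urandom: at an assumed 1 GB/s,
# gathering 2^80 bytes of output takes tens of millions of years.
seconds = 2 ** 80 / 1e9
years = seconds / (365 * 24 * 3600)
print(f"{years:.2e} years")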





Re: [PATCH] random: add blocking facility to urandom

2011-09-09 Thread Steve Grubb
On Thursday, September 08, 2011 10:21:13 PM Sandy Harris wrote:
  The system being low on entropy is another problem that should be
  addressed. For our purposes, we cannot say take it from TPM or RDRND or
  any plugin board. We have to have the mathematical analysis that goes
  with it, we need to know where the entropy comes from, and a worst case
  entropy estimation.
 
 Much of that is in the driver code's comments or previous email
 threads. For example, this thread covers many of the issues:
 http://yarchive.net/comp/linux/dev_random.html
 There are plenty of others as well.
 
  It has to be documented in detail.
 
 Yes. But apart from code comments, what documentation
 are we talking about? Googling for /dev/random on tldp.org
 turns up nothing that treats this in any detail.

Thanks for the reply. I see that you are trying to be helpful. But I think
you misunderstand what I was trying to say, or maybe I was not entirely
clear. We have the correct analysis for the kernel and it does indeed pass
FIPS-140, unaided. We know where the entropy comes from, what the minimum
entropy estimation is, and the quality. (The only issue is guaranteeing that
any seed source must also include entropy.)

But what I was trying to say is that we can't depend on these supplemental
hardware devices like TPM because we don't have access to the proprietary
technical details that would be necessary to supplement the analysis. And
when it comes to TPM chips, I bet each chip has different details and entropy
sources and entropy estimations and rates. Those details we can't get at, so
we can't solve the problem by including that hardware. That is the point I
was trying to make. :)

Thanks,
-Steve


Re: [PATCH] random: add blocking facility to urandom

2011-09-09 Thread Ted Ts'o
On Fri, Sep 09, 2011 at 09:04:17AM -0400, Steve Grubb wrote:
 But what I was trying to say is that we can't depend on these supplemental
 hardware devices like TPM because we don't have access to the proprietary
 technical details that would be necessary to supplement the analysis. And
 when it comes to TPM chips, I bet each chip has different details and
 entropy sources and entropy estimations and rates. Those details we can't
 get at, so we can't solve the problem by including that hardware. That is
 the point I was trying to make. :)

Let's be clear: the _we_ which Steve is referring to is Red Hat's
attempt to get a BSI certification so they can make $$$.  It has
nothing to do with security, except indirectly, and in my opinion,
breaking applications by causing network daemons to suddenly lock up
randomly just so that Red Hat can make more $$$ is not a good reason
to push an incompatible behavioural change into /dev/random.

   - Ted


Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Tomas Mraz
On Wed, 2011-09-07 at 19:57 -0400, Neil Horman wrote: 
 On Wed, Sep 07, 2011 at 04:56:49PM -0400, Steve Grubb wrote:
  On Wednesday, September 07, 2011 04:37:57 PM Sasha Levin wrote:
   Anyway, it won't happen fast enough to actually not block.
   
   Writing 1TB of urandom into a disk won't generate 1TB (or anything close
   to that) of randomness to cover for itself.
  
  We don't need a 1:1 mapping of RNG used to entropy acquired. It's more on
  the scale of 8,000,000:1 or higher.
  
 Where are you getting that number from?
 
 You may not need it, but there are other people using this facility as
 well that you're not considering.  If you assume that in the example Sasha
 has given, conservatively, you have a modern disk with 4k sectors, and you
 fill each 4k sector with the value obtained from a 4 byte read from
 /dev/urandom, you will:
 
 1) Generate an interrupt for every page you write, which in turn will add
 at most 12 bits to the entropy pool
 
 2) Extract 32 bits from the entropy pool
 
 That's just a losing proposition.  Barring further entropy generation from
 another source, this is bound to stall with this feature enabled.
Why so? In the case where the blocking limit is set at 8 Mbits of data read
from /dev/urandom per every 1 bit added to the entropy pool (this is not
the exact way the patch behaves, but we can approximate it that way), I do
not see how /dev/urandom can block if the bytes read from it are written
to a disk device - of course, only if the device adds entropy into the
primary pool when there are writes on the device.

Of course you can still easily make /dev/urandom occasionally block with
this patch: just read the data and drop it.

But you have to understand that the value that will be set with the
sysctl added by this patch will be large, on the order of millions of
bits.

-- 
Tomas Mraz
No matter how far down the wrong road you've gone, turn back.
  Turkish proverb



Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Sasha Levin
On Wed, 2011-09-07 at 17:43 -0400, Steve Grubb wrote:
 On Wednesday, September 07, 2011 05:35:18 PM Jarod Wilson wrote:
  Another proposal that has been kicked around: a 3rd random chardev, 
  which implements this functionality, leaving urandom unscathed. Some 
  udev magic or a driver param could move/disable/whatever urandom and put 
  this alternate device in its place. Ultimately, identical behavior, but 
  the true urandom doesn't get altered at all.
 
 Right, and that's what I was trying to say: if we do all that and switch
 out urandom with something new that does what we need, what's the
 difference in just patching the behavior into urandom and calling it a
 day? It's simpler, less fragile, admins won't make mistakes setting up the
 wrong one in a chroot, it already has the FIPS-140 dressing, and it is
 auditable.

What's the difference between changing the behavior of a well defined
interface (/dev/urandom), which may cause userspace applications to fail,
as opposed to a non-intrusive usermode CUSE driver which can do exactly
what you need (and more - if more is required in the future)? None, none
at all...

CUSE supports kernel auditing, admins making mistakes is hardly the
kernel's problem (unless it makes it easy for them to make mistakes), and
code moved into the kernel doesn't suddenly become more stable or simpler.

-- 

Sasha.



Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Christoph Hellwig
On Wed, Sep 07, 2011 at 04:02:24PM -0400, Steve Grubb wrote:
 The only time this kicks in is when a system is under attack. If you have
 set this and the system is running as normal, you will never even notice
 it's there.

So your userspace will break exactly when you least need it to and can't
debug it. Awesome.


Could you security certification folks please get off your crack ASAP?



Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Christoph Hellwig
On Wed, Sep 07, 2011 at 05:18:58PM -0400, Ted Ts'o wrote:
 If this is the basis for the patch, then we should definitely NACK it.
 It sounds like snake oil fear mongering.

You're around long enough to know that Steve and his gang do nothing but
selling snake oil.


Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Christoph Hellwig
On Wed, Sep 07, 2011 at 11:27:12PM +0200, Stephan Mueller wrote:
 And exactly that is the concern from organizations like BSI. Their
 cryptographer's concern is that due to the volume of data that you can
 extract from /dev/urandom, you may find cycles or patterns that increase
 the probability to guess the next random value compared to brute force
 attack. Note, it is all about probabilities.

So don't use /dev/urandom if you don't like the behaviour.  Breaking all
existing applications because of a certification is simply not an option.



Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Steve Grubb
On Thursday, September 08, 2011 04:44:20 AM Christoph Hellwig wrote:
 On Wed, Sep 07, 2011 at 11:27:12PM +0200, Stephan Mueller wrote:
  And exactly that is the concern from organizations like BSI. Their
  cryptographer's concern is that due to the volume of data that you can
  extract from /dev/urandom, you may find cycles or patterns that increase
  the probability to guess the next random value compared to brute force
  attack. Note, it is all about probabilities.
 
 So don't use /dev/urandom if you don't like the behaviour.  Breaking all
 existing applications because of a certification is simply not an option.

This patch does not _break_ all existing applications. If a system were
under attack, they might pause momentarily, but they do not break. Please,
try the patch and use a nice large number like 200 and see for yourself.
Right now, everyone arguing about this breaking things has not tried it to
see if in fact things do break, and how they break if they do.

-Steve


Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Neil Horman
On Thu, Sep 08, 2011 at 08:41:57AM +0200, Tomas Mraz wrote:
 On Wed, 2011-09-07 at 19:57 -0400, Neil Horman wrote: 
  On Wed, Sep 07, 2011 at 04:56:49PM -0400, Steve Grubb wrote:
   On Wednesday, September 07, 2011 04:37:57 PM Sasha Levin wrote:
Anyway, it won't happen fast enough to actually not block.

Writing 1TB of urandom into a disk won't generate 1TB (or anything close
to that) of randomness to cover for itself.
   
   We don't need a 1:1 mapping of RNG used to entropy acquired. Its more on 
   the scale of 
   8,000,000:1 or higher.
   
  Where are you getting that number from?
  
  You may not need it, but there are other people using this facility as well 
  that
  you're not considering.  If you assume that in the example Sasha has given, 
  if
  conservatively, you have a modern disk with 4k sectors, and you fill each 4k
  sector with the value obtained from a 4 byte read from /dev/urandom, You 
  will:
  
  1) Generate an interrupt for every page you write, which in turn will add at
  most 12 bits to the entropy pool
  
  2) Extract 32 bits from the entropy pool
  
  That's just a losing proposition.  Barring further entropy generation from
  another source, this is bound to stall with this feature enabled.
 Why so? In the case the blocking limit is on 8MBits of data read
 from /dev/urandom per every 1 bit added to the entropy pool (this is not
 the exact way how the patch behaves but we can approximate that) I do
 not see the /dev/urandom can block if the bytes read from it are written
Easy, all you have to do is read 8MB of data out of /dev/urandom (plus whatever
other conditions are needed to first drain the entropy pool), prior to that bit
of entropy getting added.

 to disk device - of course only if the device adds entropy into the
 primary pool when there are writes on the device.
Yes, and that's a problem.  We're assuming in the above case that writes to
disk generate interrupts which in turn generate entropy in the pool.  If
that happens, then yes, it can be difficult (though far from impossible) to
block on urandom with this patch and a sufficiently high blocking
threshold.  But interrupt randomness is only added for interrupts flagged
with IRQF_SAMPLE_RANDOM, and if you look, almost no hard irqs add that
flag.  So it's possible (and even likely) that writing to disk will not
generate additional entropy.
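The accounting in Sasha's disk-fill scenario can be sketched with the figures from the example above (12 bits credited per write interrupt at best, 32 bits extracted per 4-byte read). Even granting the optimistic assumption that every write credits entropy, the pool still drains:

```python
# Back-of-envelope for the disk-fill scenario discussed above.
SECTOR_BYTES = 4096            # modern 4k sectors, per the example
BITS_CREDITED = 12             # at most, per write interrupt (best case)
BITS_EXTRACTED = 32            # one 4-byte /dev/urandom read per sector

net_per_sector = BITS_CREDITED - BITS_EXTRACTED    # -20 bits per sector
sectors_per_tb = (1 << 40) // SECTOR_BYTES         # 2^28 sectors in 1 TB

print(net_per_sector)                    # pool loses 20 bits per sector
print(-net_per_sector * sectors_per_tb)  # ~5.4e9-bit deficit over 1 TB
```

And without IRQF_SAMPLE_RANDOM on the disk interrupts, BITS_CREDITED is effectively zero, making the drain strictly worse.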

 
 Of course you can still easily make the /dev/urandom to occasionally
 block with this patch, just read the data and drop it.
 
 But you have to understand that the value that will be set with the
 sysctl added by this patch will be large in the order of millions of
 bits.
 
You can guarantee that?  This sysctl allows for a setting of 2 just as
easily as it allows for a setting of 8,000,000.  And the former is sure to
break or otherwise adversely affect applications that expect urandom to
never block.  That's what Sasha was referring to when he said the patch
makes it easy for admins to make serious mistakes.

Neil

 -- 
 Tomas Mraz
 No matter how far down the wrong road you've gone, turn back.
   Turkish proverb
 


Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Steve Grubb
On Thursday, September 08, 2011 08:52:34 AM Neil Horman wrote:
  to disk device - of course only if the device adds entropy into the
  primary pool when there are writes on the device.
 
 Yes, and thats a problem.  We're assuming in the above case that writes to
 disk generate interrupts which in turn generate entropy in the pool.  If
 that happens, then yes, it can be difficult (though far from impossible)
 to block on urandom with this patch and a sufficiently high blocking
 threshold.  But interrupt randomness is only added for interrupts flagged
 with
 IRQF_SAMPLE_RANDOM, and if you look, almost no hard irqs add that flag.  So
 its possible (and even likely) that writing to disk will not generate
 additional entropy.

The system being low on entropy is another problem that should be
addressed. For our purposes, we cannot say take it from TPM or RDRND or
any plugin board. We have to have the mathematical analysis that goes with
it, we need to know where the entropy comes from, and a worst case entropy
estimation. It has to be documented in detail. The only way we can be
certain is if it's based on system events. Linux systems are constantly low
on entropy and this really needs addressing. But that is a separate issue.
For real world use, I'd recommend everyone use a TPM chip + rngd and you'll
never be short on random numbers. But in the case where we are certifying
the OS, we need the mathematical argument to prove that unaided, things are
correct.

 
  Of course you can still easily make the /dev/urandom to occasionally
  block with this patch, just read the data and drop it.
  
  But you have to understand that the value that will be set with the
  sysctl added by this patch will be large in the order of millions of
  bits.
 
 You can guarantee that?  

One proposal I made to Jarod was to add some minimum threshold that would
prevent people from setting a value of 2, for example. Maybe the threshold
could be set at 64K or higher, depending on what number we get back from
BSI.

 This sysctl allows for a setting of 2 just as easily as it allows for a
 setting of 8,000,000.  And the former is sure to break or otherwise
 adversely affect applications that expect urandom to never block. That's
 what Sasha was referring to, saying that patch makes it easy for admins
 to make serious mistakes.

Would a sufficiently high threshold make this easier to accept?

-Steve



Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Neil Horman
On Thu, Sep 08, 2011 at 09:11:12AM -0400, Steve Grubb wrote:
 On Thursday, September 08, 2011 08:52:34 AM Neil Horman wrote:
   to disk device - of course only if the device adds entropy into the
   primary pool when there are writes on the device.
  
  Yes, and thats a problem.  We're assuming in the above case that writes to
  disk generate interrupts which in turn generate entropy in the pool.  If
  that happens, then yes, it can be difficult (though far from impossible)
  to block on urandom with this patch and a sufficiently high blocking
  threshold.  But interrupt randomness is only added for interrupts flagged
  with
  IRQF_SAMPLE_RANDOM, and if you look, almost no hard irqs add that flag.  So
  its possible (and even likely) that writing to disk will not generate
  additional entropy.
 
 The system being low on entropy is another problem that should be
 addressed. For our purposes, we cannot say take it from TPM or RDRND or
 any plugin board. We have to have the mathematical analysis that goes
 with it, we need to know where the entropy comes from, and a worst case
 entropy estimation. It has to be documented in detail. The only way we
 can be certain is if its based on system events. Linux systems are
 constantly low on entropy and this really needs addressing. But that is a
 separate issue. For real world use, I'd recommend everyone use a TPM chip
 + rngd and you'll never be short on random numbers. But in the case where
 we are certifying the OS, we need the mathematical argument to prove that
 unaided, things are correct.
 
I agree, it would be great if we had more entropy as a rule, but that's not
really what this patch is about.  It's about how we behave in our various
interfaces when we don't have entropy.

  
   Of course you can still easily make the /dev/urandom to occasionally
   block with this patch, just read the data and drop it.
   
   But you have to understand that the value that will be set with the
   sysctl added by this patch will be large in the order of millions of
   bits.
  
  You can guarantee that?  
 
 One proposal I made to Jarod was to add some minimum threshold that would
 prevent people from setting a value of 2, for example. Maybe the
 threshold could be set at 64K or higher depending on what number we get
 back from BSI.
 
  This sysctl allows for a setting of 2 just as easily as it allows for a
  setting of 8,000,000.  And the former is sure to break or otherwise
  adversely affect applications that expect urandom to never block. That's
  what Sasha was referring to, saying that patch makes it easy for admins
  to make serious mistakes.
 
 Would a sufficiently high threshold make this easier to accept?
 

I don't know, but IMO, no.  The problems with this implementation go beyond
just picking the appropriate threshold.  As several others have commented,
there are problems:

1) With having a threshold at all - I still don't think it's clear what a
'good' threshold is and why.  I've seen 8,000,000 bytes beyond zero entropy
tossed about.  I presume that's used because it's been shown that after
8,000,000 bytes read beyond zero entropy, the internal state of the urandom
device can be guessed?  If so, how?  If not, what's the magic number for?

2) With the implementation.  There are still unaddressed concerns about
applications which expect urandom to never block living in conjunction with
applications that can tolerate it.  As you noted above, entropy is in short
supply on Linux systems.  Regardless of what threshold you set, it's possible
that it will not be high enough to prevent urandom blocking for indefinite
periods of time.  Not addressing this is, I think, a complete show-stopper.
The CUSE driver has been proposed as a solution here and I think it's a good
one.  It lets those that are worried about this sort of attack mitigate it
and leaves the rest of the world alone (and ostensibly is auditable).

Neil

 -Steve
 
 
--
To unsubscribe from this list: send the line unsubscribe linux-crypto in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread David Miller
From: Steve Grubb sgr...@redhat.com
Date: Thu, 8 Sep 2011 07:48:27 -0400

 On Thursday, September 08, 2011 04:44:20 AM Christoph Hellwig wrote:
 On Wed, Sep 07, 2011 at 11:27:12PM +0200, Stephan Mueller wrote:
  And exactly that is the concern from organizations like BSI. Their
  cryptographers' concern is that due to the volume of data that you can
  extract from /dev/urandom, you may find cycles or patterns that increase
  the probability of guessing the next random value compared to a brute-force
  attack. Note, it is all about probabilities.
 
 So don't use /dev/urandom if you don't like the behaviour.  Breaking all
 existing applications because of a certification is simply not an option.
 
 This patch does not _break_ all existing applications. If a system were
 under attack, they might pause momentarily, but they do not break. Please,
 try the patch and use a nice large number like 200 and see for yourself.
 Right now, everyone arguing about this breaking things has not tried it to
 see if in fact things do break, and how they break if they do.

If the application holds a critical resource other threads want when it
blocks on /dev/urandom, then your change breaks things.  I can come up
with more examples if you like.

Please get off this idea that you can just change the blocking behavior
for a file descriptor and nothing of consequence will happen.

When this happens in the networking code due to a bug or similar, we know
it does break things.


Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Sandy Harris
On Thu, Sep 8, 2011 at 9:11 PM, Steve Grubb sgr...@redhat.com wrote:

 The system being low on entropy is another problem that should be addressed.
 For our purposes, we cannot say take it from TPM or RDRAND or any plugin
 board. We have to have the mathematical analysis that goes with it, we need
 to know where the entropy comes from, and a worst-case entropy estimation.

Much of that is in the driver code's comments or previous email threads.
For example, this thread covers many of the issues:
http://yarchive.net/comp/linux/dev_random.html
There are plenty of others as well.

 It has to be documented in detail.

Yes. But apart from code comments, what documentation
are we talking about? Googling for /dev/random on tldp.org
turns up nothing that treats this in any detail.


 The only way we can be certain is if it's based on system events. Linux
 systems are constantly low on entropy and this really needs addressing. But
 that is a separate issue. For real-world use, I'd recommend everyone use a
 TPM chip + rngd and you'll never be short on random numbers.

Yes. Here's something I wrote on the Debian Freedombox list:

| No problem on a typical Linux desktop; it does not
| do much crypto and /dev/random gets input from
| keyboard & mouse movement, disk delays, etc.
| However, it might be a major problem for a plug
| server that does more crypto, runs headless, and
| uses solid-state storage.

| Some plug computers may have a hardware RNG,
| which is the best solution, but we cannot count on
| that in the general case.

| Where the plug has a sound card equivalent, and
| it isn't used for sound, there is a good solution
| using circuit noise in the card as the basis for
| a hardware RNG.
| http://www.av8n.com/turbid/paper/turbid.htm

| A good academic paper on the problem is:
| https://db.usenix.org/publications/library/proceedings/sec98/gutmann.html

| However, his software does not turn up in
| the Ubuntu repository. Is it in Debian?
| Could it be?

| Ubuntu, and I assume Debian, does have
| Havege, another researcher's solution
| to the same problem.
| http://www.irisa.fr/caps/projects/hipsor/

Some of that sort of discussion should be in the documentation.
I'm not sure how much currently is.

 But in the case where we are certifying the OS, we need the
 mathematical argument to prove that unaided, things are correct.

No, we cannot prove that "unaided, things are correct" if
by "correct" you mean urandom output is safe against all
conceivable attacks and by "unaided" you mean without
new entropy inputs. It is a PRNG, so without reseeding it
must be breakable in theory; that comes with the territory.

That need not be a problem, though. We cannot /prove/
that any of the ciphers or hashes in widespread use are
correct either. In fact, we can prove the opposite: they
are all provably breakable by an opponent with enough
resources, for extremely large values of "enough".

Consider a block cipher like AES: there are three known
attacks that must break it in theory -- brute force search
for the key, or reduce the cipher to a set of equations
then feed in some known plaintext/ciphertext pairs and
solve for the key, or just collect enough known pairs to
build a codebook that breaks the cipher. We know the
brute force and codebook attacks are astronomically
expensive, and there are good arguments that algebra
is as well, but they all work in theory. Despite that, we
can use AES with reasonable confidence and with
certifications from various government bodies.

There are similar arguments for confidence in urandom.
The simplest are the size of the internal state relative to
the outputs, and the XOR that reduces 160 bits of SHA-1
output to 80 bits of generator output. More detailed discussion
is in the first thread I cited above.

Barring a complete failure of SHA-1, an enemy who wants to
infer the state from outputs needs astronomically large amounts
of both data and effort.


[PATCH] random: add blocking facility to urandom

2011-09-07 Thread Jarod Wilson
Certain security-related certifications and their respective review
bodies have said that they find use of /dev/urandom for certain
functions, such as setting up ssh connections, acceptable, but if and
only if /dev/urandom can block after a certain threshold of bytes have
been read from it with the entropy pool exhausted. Initially, we were
investigating increasing entropy pool contributions, so that we could
simply use /dev/random, but since that hasn't (yet) panned out, and
upwards of five minutes to establish an ssh connection using an
entropy-starved /dev/random is unacceptable, we started looking at the
blocking urandom approach.

At present, urandom never blocks, even after all entropy has been
exhausted from the entropy input pool, while random immediately blocks
when the input pool is exhausted. Some use cases want behavior somewhere
in between these two, where blocking only occurs after some number of
bytes have been read following input pool entropy exhaustion. It's
possible to accomplish this and make it fully user-tunable by adding a
sysctl to set a max-bytes-after-0-entropy read threshold for urandom. In
the out-of-the-box configuration, urandom behaves as it always has, but
with a threshold value set, we'll block when it's been exceeded.

Tested by dd'ing from /dev/urandom in one window, and starting/stopping
a cat of /dev/random in the other, with some debug spew added to the
urandom read function to verify functionality.

CC: Matt Mackall m...@selenic.com
CC: Neil Horman nhor...@redhat.com
CC: Herbert Xu herbert...@redhat.com
CC: Steve Grubb sgr...@redhat.com
CC: Stephan Mueller stephan.muel...@atsec.com
CC: lkml linux-ker...@vger.kernel.org
Signed-off-by: Jarod Wilson ja...@redhat.com
---

Resending, neglected to cc lkml the first time, and this change could
have implications outside just the crypto layer...

 drivers/char/random.c |   82 -
 1 files changed, 81 insertions(+), 1 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index c35a785..cf48b0f 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -289,6 +289,13 @@ static int trickle_thresh __read_mostly = INPUT_POOL_WORDS * 28;
 static DEFINE_PER_CPU(int, trickle_count);
 
 /*
+ * In normal operation, urandom never blocks, but optionally, you can
+ * set urandom to block after urandom_block_thresh bytes are read with
+ * the entropy pool exhausted.
+ */
+static int urandom_block_thresh = 0;
+
+/*
  * A pool of size .poolwords is stirred with a primitive polynomial
  * of degree .poolwords over GF(2).  The taps for various sizes are
  * defined below.  They are chosen to be evenly spaced (minimum RMS
@@ -383,6 +390,7 @@ static struct poolinfo {
  * Static global variables
  */
 static DECLARE_WAIT_QUEUE_HEAD(random_read_wait);
+static DECLARE_WAIT_QUEUE_HEAD(urandom_read_wait);
 static DECLARE_WAIT_QUEUE_HEAD(random_write_wait);
 static struct fasync_struct *fasync;
 
@@ -554,6 +562,7 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 	/* should we wake readers? */
 	if (r == &input_pool && entropy_count >= random_read_wakeup_thresh) {
 		wake_up_interruptible(&random_read_wait);
+		wake_up_interruptible(&urandom_read_wait);
 		kill_fasync(&fasync, SIGIO, POLL_IN);
 	}
 	spin_unlock_irqrestore(&r->lock, flags);
@@ -1060,7 +1069,55 @@ random_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
 static ssize_t
 urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
 {
-	return extract_entropy_user(&nonblocking_pool, buf, nbytes);
+	ssize_t n;
+	static int excess_bytes_read;
+
+	/* this is the default case with no urandom blocking threshold set */
+	if (!urandom_block_thresh)
+		return extract_entropy_user(&nonblocking_pool, buf, nbytes);
+
+	if (nbytes == 0)
+		return 0;
+
+	DEBUG_ENT("reading %d bits\n", nbytes*8);
+
+	/* urandom blocking threshold set, but we have sufficient entropy */
+	if (input_pool.entropy_count >= random_read_wakeup_thresh) {
+		excess_bytes_read = 0;
+		return extract_entropy_user(&nonblocking_pool, buf, nbytes);
+	}
+
+	/* low on entropy, start counting bytes read */
+	if (excess_bytes_read + nbytes < urandom_block_thresh) {
+		n = extract_entropy_user(&nonblocking_pool, buf, nbytes);
+		excess_bytes_read += n;
+		return n;
+	}
+
+	/* low entropy read threshold exceeded, now we have to block */
+	n = nbytes;
+	if (n > SEC_XFER_SIZE)
+		n = SEC_XFER_SIZE;
+
+	n = extract_entropy_user(&nonblocking_pool, buf, n);
+	excess_bytes_read += n;
+
+	if (file->f_flags & O_NONBLOCK)
+		return -EAGAIN;
+
+	DEBUG_ENT("sleeping?\n");
+
+	wait_event_interruptible(urandom_read_wait,
+   

Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Sasha Levin
On Wed, 2011-09-07 at 13:38 -0400, Jarod Wilson wrote:
 Certain security-related certifications and their respective review
 bodies have said that they find use of /dev/urandom for certain
 functions, such as setting up ssh connections, is acceptable, but if and
 only if /dev/urandom can block after a certain threshold of bytes have
 been read from it with the entropy pool exhausted. Initially, we were
 investigating increasing entropy pool contributions, so that we could
 simply use /dev/random, but since that hasn't (yet) panned out, and
 upwards of five minutes to establish an ssh connection using an
 entropy-starved /dev/random is unacceptable, we started looking at the
 blocking urandom approach.

Can't you accomplish this in userspace by trying to read as much as you
can out of /dev/random without blocking, then reading out
of /dev/urandom the minimum between allowed threshold and remaining
bytes, and then blocking on /dev/random?

For example, let's say you need 100 bytes of randomness, and your
threshold is 30 bytes. You try reading out of /dev/random and get 50
bytes; at that point you'll read another 30 (=threshold) bytes out
of /dev/urandom, and then you'll need to block on /dev/random until you
get the remaining 20 bytes.

-- 

Sasha.



Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Jarod Wilson

Sasha Levin wrote:

On Wed, 2011-09-07 at 13:38 -0400, Jarod Wilson wrote:

Certain security-related certifications and their respective review
bodies have said that they find use of /dev/urandom for certain
functions, such as setting up ssh connections, is acceptable, but if and
only if /dev/urandom can block after a certain threshold of bytes have
been read from it with the entropy pool exhausted. Initially, we were
investigating increasing entropy pool contributions, so that we could
simply use /dev/random, but since that hasn't (yet) panned out, and
upwards of five minutes to establish an ssh connection using an
entropy-starved /dev/random is unacceptable, we started looking at the
blocking urandom approach.


Can't you accomplish this in userspace by trying to read as much as you
can out of /dev/random without blocking, then reading out
of /dev/urandom the minimum between allowed threshold and remaining
bytes, and then blocking on /dev/random?

For example, let's say you need 100 bytes of randomness, and your
threshold is 30 bytes. You try reading out of /dev/random and get 50
bytes; at that point you'll read another 30 (=threshold) bytes out
of /dev/urandom, and then you'll need to block on /dev/random until you
get the remaining 20 bytes.


We're looking for a generic solution here that doesn't require 
re-educating every single piece of userspace. And anything done in 
userspace is going to be full of possible holes -- there needs to be 
something in place that actually *enforces* the policy, and centralized 
accounting/tracking, lest you wind up with multiple processes racing to 
grab the entropy.


--
Jarod Wilson
ja...@redhat.com




Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Sasha Levin
On Wed, 2011-09-07 at 14:26 -0400, Jarod Wilson wrote:
 Sasha Levin wrote:
  On Wed, 2011-09-07 at 13:38 -0400, Jarod Wilson wrote:
  Certain security-related certifications and their respective review
  bodies have said that they find use of /dev/urandom for certain
  functions, such as setting up ssh connections, is acceptable, but if and
  only if /dev/urandom can block after a certain threshold of bytes have
  been read from it with the entropy pool exhausted. Initially, we were
  investigating increasing entropy pool contributions, so that we could
  simply use /dev/random, but since that hasn't (yet) panned out, and
  upwards of five minutes to establish an ssh connection using an
  entropy-starved /dev/random is unacceptable, we started looking at the
  blocking urandom approach.
 
  Can't you accomplish this in userspace by trying to read as much as you
  can out of /dev/random without blocking, then reading out
  of /dev/urandom the minimum between allowed threshold and remaining
  bytes, and then blocking on /dev/random?
 
  For example, let's say you need 100 bytes of randomness, and your
  threshold is 30 bytes. You try reading out of /dev/random and get 50
  bytes; at that point you'll read another 30 (=threshold) bytes out
  of /dev/urandom, and then you'll need to block on /dev/random until you
  get the remaining 20 bytes.
 
 We're looking for a generic solution here that doesn't require 
 re-educating every single piece of userspace. [...]

A flip side here is that you're going to break every piece of userspace
which assumed (correctly) that /dev/urandom never blocks. Since this is
a sysctl, you can't fine-tune which processes/threads/file-handles will
block on /dev/urandom and which ones won't.

 [..] And anything done in 
 userspace is going to be full of possible holes [..]

Such as? Is there an example of a case which can't be handled in
userspace?

  [..] there needs to be 
 something in place that actually *enforces* the policy, and centralized 
 accounting/tracking, lest you wind up with multiple processes racing to 
 grab the entropy.

Does the weak entropy you get out of /dev/urandom get weaker the more
you pull out of it? I assumed that this change is done because you want
to limit the amount of weak entropy mixed in with strong entropy.


Btw, is the threshold based on research done on the Linux RNG? Or is
it an arbitrary number that would be set by your local sysadmin?

-- 

Sasha.



Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Ted Ts'o
On Wed, Sep 07, 2011 at 02:26:35PM -0400, Jarod Wilson wrote:
 We're looking for a generic solution here that doesn't require
 re-educating every single piece of userspace. And anything done in
 userspace is going to be full of possible holes -- there needs to be
 something in place that actually *enforces* the policy, and
 centralized accounting/tracking, lest you wind up with multiple
 processes racing to grab the entropy.

Yeah, but there are userspace programs that depend on urandom not
blocking... so your proposed change would break them.

- Ted


Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Jarod Wilson

Sasha Levin wrote:

On Wed, 2011-09-07 at 14:26 -0400, Jarod Wilson wrote:

Sasha Levin wrote:

On Wed, 2011-09-07 at 13:38 -0400, Jarod Wilson wrote:

Certain security-related certifications and their respective review
bodies have said that they find use of /dev/urandom for certain
functions, such as setting up ssh connections, is acceptable, but if and
only if /dev/urandom can block after a certain threshold of bytes have
been read from it with the entropy pool exhausted. Initially, we were
investigating increasing entropy pool contributions, so that we could
simply use /dev/random, but since that hasn't (yet) panned out, and
upwards of five minutes to establish an ssh connection using an
entropy-starved /dev/random is unacceptable, we started looking at the
blocking urandom approach.

Can't you accomplish this in userspace by trying to read as much as you
can out of /dev/random without blocking, then reading out
of /dev/urandom the minimum between allowed threshold and remaining
bytes, and then blocking on /dev/random?

For example, let's say you need 100 bytes of randomness, and your
threshold is 30 bytes. You try reading out of /dev/random and get 50
bytes; at that point you'll read another 30 (=threshold) bytes out
of /dev/urandom, and then you'll need to block on /dev/random until you
get the remaining 20 bytes.

We're looking for a generic solution here that doesn't require
re-educating every single piece of userspace. [...]


A flip-side here is that you're going to break every piece of userspace
which assumed (correctly) that /dev/urandom never blocks.


Out of the box, that continues to be the case. This just adds a knob so 
that it *can* block at a desired threshold.



Since this is
a sysctl you can't fine tune which processes/threads/file-handles will
block on /dev/urandom and which ones won't.


The security requirement is that everything blocks.


[..] And anything done in
userspace is going to be full of possible holes [..]


Such as? Is there an example of a case which can't be handled in
userspace?


How do you mandate preventing reads from urandom when there isn't 
sufficient entropy? You likely wind up needing to restrict access to the 
actual urandom via permissions and SELinux policy or similar, and then 
run a daemon or something that provides a pseudo-urandom that brokers 
access to the real urandom. Get the permissions or policy wrong, and 
havoc ensues. An issue with the initscript or udev rule to hide the real 
urandom, and things can fall down. It's a whole lot more fragile than 
this approach, and a lot more involved in setting it up.



  [..] there needs to be
something in place that actually *enforces* the policy, and centralized
accounting/tracking, lest you wind up with multiple processes racing to
grab the entropy.


Does the weak entropy you get out of /dev/urandom get weaker the more
you pull out of it? I assumed that this change is done because you want
to limit the amount of weak entropy mixed in with strong entropy.


The argument is that once there's no entropy left, an attacker only 
needs X number of samples before they can start accurately determining 
what the next random number will be.



Btw, is the threshold based on research done on the Linux RNG? Or is
it an arbitrary number that would be set by your local sysadmin?


Stephan (cc'd on the thread) is attempting to get some feedback from BSI 
as to what they have in the way of an actual number. The implementation 
has a goal of being flexible enough for whatever a given certification 
or security requirement says that number is.


--
Jarod Wilson
ja...@redhat.com




Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Jarod Wilson

Ted Ts'o wrote:

On Wed, Sep 07, 2011 at 02:26:35PM -0400, Jarod Wilson wrote:

We're looking for a generic solution here that doesn't require
re-educating every single piece of userspace. And anything done in
userspace is going to be full of possible holes -- there needs to be
something in place that actually *enforces* the policy, and
centralized accounting/tracking, lest you wind up with multiple
processes racing to grab the entropy.


Yeah, but there are userspace programs that depend on urandom not
blocking... so your proposed change would break them.


But only if you've set the sysctl to a non-zero value, and even then, 
only if someone is actively draining entropy from /dev/random. 
Otherwise, in practice, it behaves the same as always. Granted, I 
haven't tested with all possible userspace to see how it might fall 
down, but suggestions for progs to try would be welcomed.


But again, I want to stress that out of the box, there's absolutely no 
change to the way urandom behaves, no blocking, this *only* kicks in if 
you twiddle the sysctl because you have some sort of security 
requirement that mandates it.



--
Jarod Wilson
ja...@redhat.com




Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread David Miller
From: Ted Ts'o ty...@mit.edu
Date: Wed, 7 Sep 2011 15:27:37 -0400

 On Wed, Sep 07, 2011 at 02:26:35PM -0400, Jarod Wilson wrote:
 We're looking for a generic solution here that doesn't require
 re-educating every single piece of userspace. And anything done in
 userspace is going to be full of possible holes -- there needs to be
 something in place that actually *enforces* the policy, and
 centralized accounting/tracking, lest you wind up with multiple
 processes racing to grab the entropy.
 
 Yeah, but there are userspace programs that depend on urandom not
 blocking... so your proposed change would break them.

Agreed, and this is a really poor approach to solving the problem.

If you change semantics, you have to create a new facility and then
convert the userland pieces over to it.

Yes, this is harder and requires more work, but it is necessary as
it is the only way to ensure that we won't break something.


Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Sasha Levin
On Wed, 2011-09-07 at 15:30 -0400, Jarod Wilson wrote:
 Sasha Levin wrote:
  On Wed, 2011-09-07 at 14:26 -0400, Jarod Wilson wrote:
  Sasha Levin wrote:
  [..] And anything done in
  userspace is going to be full of possible holes [..]
 
  Such as? Is there an example of a case which can't be handled in
  userspace?
 
 How do you mandate preventing reads from urandom when there isn't 
 sufficient entropy? You likely wind up needing to restrict access to the 
 actual urandom via permissions and selinux policy or similar, and then 
 run a daemon or something that provides a pseudo-urandom that brokers 
 access to the real urandom. Get the permissions or policy wrong, and 
 havoc ensues. An issue with the initscript or udev rule to hide the real 
 urandom, and things can fall down. It's a whole lot more fragile than 
 this approach, and a lot more involved in setting it up.

Replace /dev/urandom with a simple CUSE driver, redirect reads to the
real urandom after applying your threshold.

-- 

Sasha.



Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Steve Grubb
On Wednesday, September 07, 2011 03:27:37 PM Ted Ts'o wrote:
 On Wed, Sep 07, 2011 at 02:26:35PM -0400, Jarod Wilson wrote:
  We're looking for a generic solution here that doesn't require
  re-educating every single piece of userspace. And anything done in
  userspace is going to be full of possible holes -- there needs to be
  something in place that actually *enforces* the policy, and
  centralized accounting/tracking, lest you wind up with multiple
  processes racing to grab the entropy.
 
 Yeah, but there are userspace programs that depend on urandom not
 blocking... so your proposed change would break them.

The only time this kicks in is when a system is under attack. If you have set
this and the system is running as normal, you will never even notice it.
Almost all uses of urandom grab 4 bytes and seed openssl or libgcrypt or nss,
and then use those libraries. There are the odd cases where something uses
urandom to generate a key or otherwise grab a chunk of bytes, but these are
still small reads in the scheme of things. Can you think of any legitimate
use of urandom that grabs 100K or 1M from urandom? Even those numbers still
won't hit the sysctl on a normally functioning system.

When a system is under attack, do you really want to be using a PRNG for
anything like seeding openssl? Because a PRNG is what urandom degrades into
when it's attacked. If enough bytes are read that an attacker can guess the
internal state of the RNG, do you really want it seeding an openssh session?
At that point you really need it to stop momentarily until it gets fresh
entropy, so the internal state is unknown. That's what this is really about.

-Steve


Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Sasha Levin
On Wed, 2011-09-07 at 16:02 -0400, Steve Grubb wrote:
 On Wednesday, September 07, 2011 03:27:37 PM Ted Ts'o wrote:
  On Wed, Sep 07, 2011 at 02:26:35PM -0400, Jarod Wilson wrote:
   We're looking for a generic solution here that doesn't require
   re-educating every single piece of userspace. And anything done in
   userspace is going to be full of possible holes -- there needs to be
   something in place that actually *enforces* the policy, and
   centralized accounting/tracking, lest you wind up with multiple
   processes racing to grab the entropy.
  
  Yeah, but there are userspace programs that depend on urandom not
  blocking... so your proposed change would break them.
 
 The only time this kicks in is when a system is under attack. If you have set 
 this and 
 the system is running as normal, you will never notice it even there. Almost 
 all uses 
 of urandom grab 4 bytes and seed openssl or libgcrypt or nss. It then uses 
 those 
 libraries. There are the odd cases where something uses urandom to generate a 
 key or 
 otherwise grab a chunk of bytes, but these are still small reads in the 
 scheme of 
 things. Can you think of any legitimate use of urandom that grabs 100K or 1M 
 from 
 urandom? Even those numbers still won't hit the sysctl on a normally 
 functioning system.
 

As far as I remember, several wipe utilities are using /dev/urandom to
overwrite disks (possibly several times).

Something similar probably happens for getting junk on disks before
creating an encrypted filesystem on top of them.

-- 

Sasha.



Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Steve Grubb
On Wednesday, September 07, 2011 04:23:13 PM Sasha Levin wrote:
 On Wed, 2011-09-07 at 16:02 -0400, Steve Grubb wrote:
  On Wednesday, September 07, 2011 03:27:37 PM Ted Ts'o wrote:
   On Wed, Sep 07, 2011 at 02:26:35PM -0400, Jarod Wilson wrote:
We're looking for a generic solution here that doesn't require
re-educating every single piece of userspace. And anything done in
userspace is going to be full of possible holes -- there needs to be
something in place that actually *enforces* the policy, and
centralized accounting/tracking, lest you wind up with multiple
processes racing to grab the entropy.
   
   Yeah, but there are userspace programs that depend on urandom not
   blocking... so your proposed change would break them.
  
  The only time this kicks in is when a system is under attack. If you have
  set this and the system is running as normal, you will never notice it
  even there. Almost all uses of urandom grab 4 bytes and seed openssl or
  libgcrypt or nss. It then uses those libraries. There are the odd cases
  where something uses urandom to generate a key or otherwise grab a chunk
  of bytes, but these are still small reads in the scheme of things. Can
  you think of any legitimate use of urandom that grabs 100K or 1M from
  urandom? Even those numbers still won't hit the sysctl on a normally
   functioning system.
 
 As far as I remember, several wipe utilities are using /dev/urandom to
 overwrite disks (possibly several times).

Which should generate disk activity and feed entropy to urandom.
 
 Something similar probably happens for getting junk on disks before
 creating an encrypted filesystem on top of them.

During system install, this sysctl is not likely to be applied.

-Steve
--
To unsubscribe from this list: send the line unsubscribe linux-crypto in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Neil Horman
On Wed, Sep 07, 2011 at 04:02:24PM -0400, Steve Grubb wrote:
 On Wednesday, September 07, 2011 03:27:37 PM Ted Ts'o wrote:
  On Wed, Sep 07, 2011 at 02:26:35PM -0400, Jarod Wilson wrote:
   We're looking for a generic solution here that doesn't require
   re-educating every single piece of userspace. And anything done in
   userspace is going to be full of possible holes -- there needs to be
   something in place that actually *enforces* the policy, and
   centralized accounting/tracking, lest you wind up with multiple
   processes racing to grab the entropy.
  
  Yeah, but there are userspace programs that depend on urandom not
  blocking... so your proposed change would break them.
 
 The only time this kicks in is when a system is under attack. If you have set
 this and the system is running as normal, you will never notice it even there.
 Almost all uses of urandom grab 4 bytes and seed openssl or libgcrypt or nss.
 It then uses those libraries. There are the odd cases where something uses
 urandom to generate a key or otherwise grab a chunk of bytes, but these are
 still small reads in the scheme of
There's no way you can guarantee that.  A quick lsof on my system here shows 27
unique pids that are holding /dev/urandom open, and while they may all be small
reads, taken in aggregate, I can imagine that they could pull a significant
amount of entropy out of /dev/urandom.
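Neil's `lsof` count is easy to approximate without lsof; a rough Python equivalent that walks /proc (Linux-specific; fd links we cannot read are simply skipped):

```python
import glob
import os

def urandom_readers():
    """Count distinct processes holding /dev/urandom open -- roughly what
    `lsof /dev/urandom` reports, modulo permissions."""
    pids = set()
    for fd in glob.glob("/proc/[0-9]*/fd/*"):
        try:
            if os.readlink(fd) == "/dev/urandom":
                pids.add(fd.split("/")[2])  # the PID component of the path
        except OSError:
            continue  # process exited, or we lack permission to look
    return len(pids)

print(urandom_readers())
```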

 things. Can you think of any legitimate use of urandom that grabs 100K or 1M
 from urandom? Even those numbers still won't hit the sysctl on a normally
 functioning system.
 
How can you be sure of that?  This seems to make assumptions about both the rate
at which entropy is drained from /dev/urandom and the limit at which you will
start blocking, neither of which you can be sure of.

 When a system is under attack, do you really want to be using a PRNG for
 anything like
How can you be sure that this only happens when a system is under some sort of
attack?  /dev/urandom is there for user space to use, and we can't make
assumptions as to how it will get drawn from.  What if someone was running some
Monte Carlo based test program?  That could completely exhaust the entropy in
/dev/urandom and would be perfectly legitimate.
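A Monte Carlo job of the kind Neil has in mind is a perfectly legitimate bulk consumer. A small illustrative sketch that estimates pi while drawing every coordinate from the urandom pool:

```python
import os
import struct

def mc_pi(samples=20000):
    """Estimate pi by sampling points in the unit square, with all
    randomness pulled from /dev/urandom via os.urandom()."""
    raw = os.urandom(samples * 8)  # two 32-bit coordinates per sample
    inside = 0
    for i in range(samples):
        x, y = struct.unpack_from("<II", raw, i * 8)
        fx, fy = x / 2**32, y / 2**32  # map to [0, 1)
        if fx * fx + fy * fy < 1.0:
            inside += 1
    return 4.0 * inside / samples

print(mc_pi())
```

Scale the sample count up and such a job pulls megabytes from urandom in a tight loop, with no attacker anywhere in sight.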

 seeding openssl? Because a PRNG is what urandom degrades into when it's
 attacked. If enough bytes are read that an attacker can guess the internal
 state of the RNG, do you really want it seeding an openssh session? At that
 point you really need it to stop momentarily until it gets fresh entropy so
 the internal state is unknown. That's what this is really about.
I never really want my ssh session to be seeded with non-random data.  Of
course, in my mind that's an argument for making ssh use /dev/random rather than
/dev/urandom, but I'm willing to take the tradeoff in speed most of the time.

 
 -Steve


Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Sasha Levin
On Wed, 2011-09-07 at 16:30 -0400, Steve Grubb wrote:
 On Wednesday, September 07, 2011 04:23:13 PM Sasha Levin wrote:
  On Wed, 2011-09-07 at 16:02 -0400, Steve Grubb wrote:
   On Wednesday, September 07, 2011 03:27:37 PM Ted Ts'o wrote:
On Wed, Sep 07, 2011 at 02:26:35PM -0400, Jarod Wilson wrote:
 We're looking for a generic solution here that doesn't require
 re-educating every single piece of userspace. And anything done in
 userspace is going to be full of possible holes -- there needs to be
 something in place that actually *enforces* the policy, and
 centralized accounting/tracking, lest you wind up with multiple
 processes racing to grab the entropy.

Yeah, but there are userspace programs that depend on urandom not
blocking... so your proposed change would break them.
   
   The only time this kicks in is when a system is under attack. If you have
   set this and the system is running as normal, you will never notice it
   even there. Almost all uses of urandom grab 4 bytes and seed openssl or
   libgcrypt or nss. It then uses those libraries. There are the odd cases
   where something uses urandom to generate a key or otherwise grab a chunk
   of bytes, but these are still small reads in the scheme of things. Can
   you think of any legitimate use of urandom that grabs 100K or 1M from
   urandom? Even those numbers still won't hit the sysctl on a normally
   functioning system.
  
  As far as I remember, several wipe utilities are using /dev/urandom to
  overwrite disks (possibly several times).
 
 Which should generate disk activity and feed entropy to urandom.

I thought you need to feed random, not urandom.

Anyway, it won't happen fast enough to actually not block.

Writing 1TB of urandom into a disk won't generate 1TB (or anything close
to that) of randomness to cover for itself.

  Something similar probably happens for getting junk on disks before
  creating an encrypted filesystem on top of them.
 
 During system install, this sysctl is not likely to be applied.

It may happen at any time you need to create a new filesystem, which
won't necessarily happen during system install. 

See for example the instructions on how to set up a LUKS filesystem:
https://wiki.archlinux.org/index.php/System_Encryption_with_LUKS#Preparation_and_mapping

-- 

Sasha.



Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Steve Grubb
On Wednesday, September 07, 2011 04:33:05 PM Neil Horman wrote:
 On Wed, Sep 07, 2011 at 04:02:24PM -0400, Steve Grubb wrote:
  On Wednesday, September 07, 2011 03:27:37 PM Ted Ts'o wrote:
   On Wed, Sep 07, 2011 at 02:26:35PM -0400, Jarod Wilson wrote:
We're looking for a generic solution here that doesn't require
re-educating every single piece of userspace. And anything done in
userspace is going to be full of possible holes -- there needs to be
something in place that actually *enforces* the policy, and
centralized accounting/tracking, lest you wind up with multiple
processes racing to grab the entropy.
   
   Yeah, but there are userspace programs that depend on urandom not
   blocking... so your proposed change would break them.
  
  The only time this kicks in is when a system is under attack. If you have
  set this and the system is running as normal, you will never notice it
  even there. Almost all uses of urandom grab 4 bytes and seed openssl or
  libgcrypt or nss. It then uses those libraries. There are the odd cases
  where something uses urandom to generate a key or otherwise grab a chunk
  of bytes, but these are still small reads in the scheme of
 
 There's no way you can guarantee that.  A quick lsof on my system here shows
 27 unique pids that are holding /dev/urandom open, and while they may all
 be small reads, taken in aggregate, I can imagine that they could pull a
 significant amount of entropy out of /dev/urandom.

These are likely for reseeding purposes. Even openssl/libgcrypt/nss need 
reseeding.


  things. Can you think of any legitimate use of urandom that grabs 100K or
  1M from urandom? Even those numbers still won't hit the sysctl on a
  normally functioning system.
 
 How can you be sure of that?  This seems to make assumptions about both the
 rate at which entropy is drained from /dev/urandom and the limit at which
 you will start blocking, neither of which you can be sure of.

You can try Jarod's patch for a day or two and see if it affects your system.

 
  When a system is under attack, do you really want to be using a PRNG for
  anything like
 
 How can you be sure that this only happens when a system is under some sort
 of attack?  /dev/urandom is there for user space to use, and we can't make
 assumptions as to how it will get drawn from.  What if someone was running
 some Monte Carlo based test program?  That could completely exhaust the
 entropy in /dev/urandom and would be perfectly legitimate.

I doubt a Monte Carlo simulation will be done in a high-security setting where
they also depend entirely on a PRNG.
 
  seeding openssl? Because a PRNG is what urandom degrades into when it's
  attacked. If enough bytes are read that an attacker can guess the
  internal state of the RNG, do you really want it seeding an openssh
  session? At that point you really need it to stop momentarily until it
  gets fresh entropy so the internal state is unknown. That's what this is
  really about.
 
 I never really want my ssh session to be seeded with non-random data. 
 Of course, in my mind that's an argument for making ssh use /dev/random
 rather than /dev/urandom, but I'm willing to take the tradeoff in speed
 most of the time.

Bingo! You hit the problem. In some of our tests, it was shown that it takes 4
minutes to establish a connection when using random. So, if the system is under
attack, the seeding of openssh will be based on the output of an RNG where the
attacker might be able to guess the internal state. This is a problem we have
right now. It's not theoretical. The best solution is Jarod's patch because any
other solution will require teaching all of user space about the new RNG and
dressing it up for FIPS-140. At that point, what's the difference?

-Steve


Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Steve Grubb
On Wednesday, September 07, 2011 04:37:57 PM Sasha Levin wrote:
 On Wed, 2011-09-07 at 16:30 -0400, Steve Grubb wrote:
  On Wednesday, September 07, 2011 04:23:13 PM Sasha Levin wrote:
   On Wed, 2011-09-07 at 16:02 -0400, Steve Grubb wrote:
On Wednesday, September 07, 2011 03:27:37 PM Ted Ts'o wrote:
 On Wed, Sep 07, 2011 at 02:26:35PM -0400, Jarod Wilson wrote:
  We're looking for a generic solution here that doesn't require
  re-educating every single piece of userspace. And anything done
  in userspace is going to be full of possible holes -- there
  needs to be something in place that actually *enforces* the
  policy, and centralized accounting/tracking, lest you wind up
  with multiple processes racing to grab the entropy.
 
 Yeah, but there are userspace programs that depend on urandom not
 blocking... so your proposed change would break them.

The only time this kicks in is when a system is under attack. If you
have set this and the system is running as normal, you will never
notice it even there. Almost all uses of urandom grab 4 bytes and
seed openssl or libgcrypt or nss. It then uses those libraries.
There are the odd cases where something uses urandom to generate a
key or otherwise grab a chunk of bytes, but these are still small
reads in the scheme of things. Can you think of any legitimate use
of urandom that grabs 100K or 1M from urandom? Even those numbers
still won't hit the sysctl on a normally functioning system.
   
   As far as I remember, several wipe utilities are using /dev/urandom to
   overwrite disks (possibly several times).
  
  Which should generate disk activity and feed entropy to urandom.
 
 I thought you need to feed random, not urandom.

I think they draw from the same pool.

 
 Anyway, it won't happen fast enough to actually not block.
 
 Writing 1TB of urandom into a disk won't generate 1TB (or anything close
 to that) of randomness to cover for itself.

We don't need a 1:1 mapping of RNG used to entropy acquired. It's more on the
scale of 8,000,000:1 or higher.


   Something similar probably happens for getting junk on disks before
   creating an encrypted filesystem on top of them.
  
  During system install, this sysctl is not likely to be applied.
 
 It may happen at any time you need to create a new filesystem, which
 won't necessarily happen during system install.
 
 See for example the instructions on how to set up a LUKS filesystem:
 https://wiki.archlinux.org/index.php/System_Encryption_with_LUKS#Preparation_and_mapping

Those instructions might need to be changed. That is one way of many to get
random numbers on the disk. Anyone really needing the security to have the
sysctl on will also probably accept that it's doing its job and keeping the
numbers random. Again, no effect unless you turn it on.

-Steve


Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Sasha Levin
On Wed, 2011-09-07 at 16:56 -0400, Steve Grubb wrote:
 On Wednesday, September 07, 2011 04:37:57 PM Sasha Levin wrote:
  On Wed, 2011-09-07 at 16:30 -0400, Steve Grubb wrote:
   On Wednesday, September 07, 2011 04:23:13 PM Sasha Levin wrote:
On Wed, 2011-09-07 at 16:02 -0400, Steve Grubb wrote:
 On Wednesday, September 07, 2011 03:27:37 PM Ted Ts'o wrote:
  On Wed, Sep 07, 2011 at 02:26:35PM -0400, Jarod Wilson wrote:
   We're looking for a generic solution here that doesn't require
   re-educating every single piece of userspace. And anything done
   in userspace is going to be full of possible holes -- there
   needs to be something in place that actually *enforces* the
   policy, and centralized accounting/tracking, lest you wind up
   with multiple processes racing to grab the entropy.
  
  Yeah, but there are userspace programs that depend on urandom not
  blocking... so your proposed change would break them.
 
 The only time this kicks in is when a system is under attack. If you
 have set this and the system is running as normal, you will never
 notice it even there. Almost all uses of urandom grab 4 bytes and
 seed openssl or libgcrypt or nss. It then uses those libraries.
 There are the odd cases where something uses urandom to generate a
 key or otherwise grab a chunk of bytes, but these are still small
 reads in the scheme of things. Can you think of any legitimate use
 of urandom that grabs 100K or 1M from urandom? Even those numbers
 still won't hit the sysctl on a normally functioning system.

As far as I remember, several wipe utilities are using /dev/urandom to
overwrite disks (possibly several times).
   
   Which should generate disk activity and feed entropy to urandom.
  
  I thought you need to feed random, not urandom.
 
 I think they draw from the same pool.

There is a blocking and a non-blocking pool.

  
  Anyway, it won't happen fast enough to actually not block.
  
  Writing 1TB of urandom into a disk won't generate 1TB (or anything close
  to that) of randomness to cover for itself.
 
 We don't need a 1:1 mapping of RNG used to entropy acquired. It's more on the
 scale of 8,000,000:1 or higher.

I'm just saying that writing 1TB into a disk using urandom will start to
block, it won't generate enough randomness by itself.
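The quantity being argued about is the kernel's entropy estimate, which user space can watch directly. A small helper (returns -1 where /proc is unavailable, e.g. off Linux):

```python
def entropy_avail():
    """Return the kernel's current entropy estimate in bits, as exposed by
    /proc/sys/kernel/random/entropy_avail (sysctl kernel.random.entropy_avail)."""
    try:
        with open("/proc/sys/kernel/random/entropy_avail") as f:
            return int(f.read())
    except OSError:
        return -1  # /proc not available on this platform

print(entropy_avail())
```

Watching this value during a long urandom read is the quickest way to see whether disk activity actually replenishes the pool as fast as the wipe drains it.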

 
Something similar probably happens for getting junk on disks before
creating an encrypted filesystem on top of them.
   
   During system install, this sysctl is not likely to be applied.
  
  It may happen at any time you need to create a new filesystem, which
  won't necessarily happen during system install.
  
  See for example the instructions on how to set up a LUKS filesystem:
  https://wiki.archlinux.org/index.php/System_Encryption_with_LUKS#Preparation_and_mapping
 
 Those instructions might need to be changed. That is one way of many to get
 random numbers on the disk. Anyone really needing the security to have the
 sysctl on will also probably accept that it's doing its job and keeping the
 numbers random. Again, no effect unless you turn it on.

There are a bunch of other places that would need to be changed in that
case :)

Why not implement it as a user mode CUSE driver that would
wrap /dev/urandom and make it behave any way you want to? Why push it
into the kernel?

-- 

Sasha.



Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Ted Ts'o
On Wed, Sep 07, 2011 at 04:02:24PM -0400, Steve Grubb wrote:
 
 When a system is under attack, do you really want to be using a PRNG
 for anything like seeding openssl?  Because a PRNG is what urandom
 degrades into when it's attacked.

This is not technically true.  urandom degrades into a CRNG
(cryptographic random number generator).  In fact what most security
experts recommend is to take a small amount of entropy, and then use
that to seed a CRNG.
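The recommendation Ted sketches -- stretch a small high-entropy seed through a cryptographic core -- can be illustrated with a toy hash-counter generator. Purely illustrative: this is not the kernel's algorithm and not for production use.

```python
import hashlib
import os

class HashDRBG:
    """Toy counter-mode CRNG: a small seed drives a SHA-256 core, which can
    then produce arbitrarily long output streams."""
    def __init__(self, seed: bytes):
        self.key = hashlib.sha256(seed).digest()
        self.counter = 0

    def read(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            block = self.key + self.counter.to_bytes(8, "big")
            out += hashlib.sha256(block).digest()
            self.counter += 1
        return out[:n]

rng = HashDRBG(os.urandom(32))  # a few dozen bytes of real entropy...
bulk = rng.read(1 << 16)        # ...expanded into 64 KiB of output
```

Recovering the seed from the output stream requires breaking the hash core, which is Ted's point about why large reads alone do not expose the internal state.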

 If enough bytes are read that an
 attacker can guess the internal state of the RNG, do you really want
 it seeding an openssh session?

In a cryptographic random number generator, there is either a
cryptographic hash or an encryption algorithm at the core.  So you
would need a huge amount of bytes, and then you would have to carry
out an attack on the cryptographic core.

If this is the basis for the patch, then we should definitely NACK it.
It sounds like snake oil fear mongering.

- Ted




Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Nikos Mavrogiannopoulos

On 09/07/2011 10:02 PM, Steve Grubb wrote:


When a system is under attack, do you really want to be using a PRNG
for anything like seeding openssl? Because a PRNG is what urandom
degrades into when it's attacked.


Using a PRNG is not a problem. Making sure it is well seeded and no
input from the attacker can compromise its state are the difficult
parts. Making predictable estimates and blocking when your estimates are
off makes it a good target for DoS. When your system is under attack,
you want to use your services. If they block then the attack might
have just been successful.

regards,
Nikos


Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Stephan Mueller
On 07.09.2011 23:18:58, +0200, Ted Ts'o ty...@mit.edu wrote:

Hi Ted,

 On Wed, Sep 07, 2011 at 04:02:24PM -0400, Steve Grubb wrote:

 When a system is under attack, do you really want to be using a PRNG
 for anything like seeding openssl?  Because a PRNG is what urandom
 degrades into when it's attacked.
 
 This is not technically true.  urandom degrades into a CRNG
 (cryptographic random number generator).  In fact what most security
 experts recommend is to take a small amount of entropy, and then use
 that to seed a CRNG.

Correct.

However, a CRNG shall be reseeded once in a while - see standard crypto
libraries and their CRNGs (OpenSSL being a notable exception here). And
that is what this entire discussion is all about: to ensure that the
CRNG is reseeded with entropy, eventually.
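The reseeding discipline Stephan is pointing at might look like this in outline (a toy sketch only; the interval and the mixing step are invented for illustration, not taken from any real DRBG):

```python
import hashlib
import os

class ReseedingDRBG:
    """Hash-based generator that folds fresh entropy into its state after
    every `reseed_interval` bytes of output."""
    def __init__(self, reseed_interval=1 << 20):
        self.state = os.urandom(32)
        self.reseed_interval = reseed_interval
        self.produced = 0
        self.counter = 0

    def read(self, n):
        if self.produced >= self.reseed_interval:
            # Mix new entropy into the old state rather than replacing it,
            # so a poor entropy source cannot make the state weaker.
            self.state = hashlib.sha256(self.state + os.urandom(32)).digest()
            self.produced = 0
        out = b""
        while len(out) < n:
            out += hashlib.sha256(self.state + self.counter.to_bytes(8, "big")).digest()
            self.counter += 1
        self.produced += n
        return out[:n]

rng = ReseedingDRBG(reseed_interval=4096)
data = b"".join(rng.read(512) for _ in range(20))  # crosses the reseed threshold
```

The open question in the thread is what should happen at the reseed point when no fresh entropy is available: block, as the patch proposes, or keep generating from the old state.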

 
 If enough bytes are read that an
 attacker can guess the internal state of the RNG, do you really want
 it seeding a openssh session?
 
 In a cryptographic random number generator, there is either a
 cryptographic hash or an encryption algorithm at the core.  So you
 would need a huge amount of bytes, and then you would have to carry
 out an attack on the cryptographic core.

Correct.

And exactly that is the concern from organizations like BSI. Their
cryptographer's concern is that due to the volume of data that you can
extract from /dev/urandom, you may find cycles or patterns that increase
the probability of guessing the next random value compared to a brute force
attack. Note, it is all about probabilities.
 
 If this is the basis for the patch, then we should definitely NACK it.
 It sounds like snake oil fear mongering.
 
   - Ted
 
 
 


-- 
Ciao
Stephan


Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Steve Grubb
On Wednesday, September 07, 2011 05:10:27 PM Sasha Levin wrote:
 Something similar probably happens for getting junk on disks before
 creating an encrypted filesystem on top of them.

During system install, this sysctl is not likely to be applied.
   
   It may happen at any time you need to create a new filesystem, which
   won't necessarily happen during system install.
   
   See for example the instructions on how to set up a LUKS filesystem:
   https://wiki.archlinux.org/index.php/System_Encryption_with_LUKS#Preparation_and_mapping
  
  Those instructions might need to be changed. That is one way of many to
  get random numbers on the disk. Anyone really needing the security to
  have the sysctl on will also probably accept that its doing its job and
  keeping the numbers random. Again, no effect unless you turn it on.
 
 There are bunch of other places that would need to be changed in that
 case :)
 
 Why not implement it as a user mode CUSE driver that would
 wrap /dev/urandom and make it behave any way you want to? why push it
 into the kernel?

For one, auditing does not work for FUSE or things like that. We have to be
able to audit who is using what. Then there are the FIPS-140 requirements and
this will spread it. There are problems sending crypto audit events from user
space, too.

-Steve


Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Jarod Wilson

Sasha Levin wrote:

On Wed, 2011-09-07 at 16:56 -0400, Steve Grubb wrote:

On Wednesday, September 07, 2011 04:37:57 PM Sasha Levin wrote:

On Wed, 2011-09-07 at 16:30 -0400, Steve Grubb wrote:

On Wednesday, September 07, 2011 04:23:13 PM Sasha Levin wrote:

On Wed, 2011-09-07 at 16:02 -0400, Steve Grubb wrote:

On Wednesday, September 07, 2011 03:27:37 PM Ted Ts'o wrote:

On Wed, Sep 07, 2011 at 02:26:35PM -0400, Jarod Wilson wrote:

We're looking for a generic solution here that doesn't require
re-educating every single piece of userspace. And anything done
in userspace is going to be full of possible holes -- there
needs to be something in place that actually *enforces* the
policy, and centralized accounting/tracking, lest you wind up
with multiple processes racing to grab the entropy.

Yeah, but there are userspace programs that depend on urandom not
blocking... so your proposed change would break them.

The only time this kicks in is when a system is under attack. If you
have set this and the system is running as normal, you will never
notice it even there. Almost all uses of urandom grab 4 bytes and
seed openssl or libgcrypt or nss. It then uses those libraries.
There are the odd cases where something uses urandom to generate a
key or otherwise grab a chunk of bytes, but these are still small
reads in the scheme of things. Can you think of any legitimate use
of urandom that grabs 100K or 1M from urandom? Even those numbers
still won't hit the sysctl on a normally functioning system.

As far as I remember, several wipe utilities are using /dev/urandom to
overwrite disks (possibly several times).

Which should generate disk activity and feed entropy to urandom.

I thought you need to feed random, not urandom.

I think they draw from the same pool.


There is a blocking and a non-blocking pool.


There's a single shared input pool that both the blocking and 
non-blocking pools pull from. New entropy data is added to the input 
pool, then transferred to the interface-specific pools as needed.



Anyway, it won't happen fast enough to actually not block.

Writing 1TB of urandom into a disk won't generate 1TB (or anything close
to that) of randomness to cover for itself.

We don't need a 1:1 mapping of RNG used to entropy acquired. It's more on the
scale of 8,000,000:1 or higher.


I'm just saying that writing 1TB into a disk using urandom will start to
block, it won't generate enough randomness by itself.


Writing 1TB of data to a disk using urandom won't block at all if nobody 
is using /dev/random. We seed /dev/urandom with entropy, then it just 
behaves as a cryptographic RNG; it's not pulling out any further entropy 
data until it needs to reseed, and thus the entropy count isn't dropping 
to 0, so we're not blocking. Someone has to actually drain the entropy, 
typically by pulling a fair bit of data from /dev/random, for the 
blocking to actually come into play.
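This is easy to confirm empirically on today's kernels: a multi-megabyte read from /dev/urandom returns promptly regardless of the entropy estimate (assumes a Unix-like system with /dev/urandom present):

```python
import time

# Pull 16 MiB from the non-blocking pool and time it; current urandom
# keeps generating from its seeded state rather than waiting for fresh
# entropy, so this completes quickly.
start = time.monotonic()
with open("/dev/urandom", "rb") as f:
    data = f.read(16 * 1024 * 1024)
elapsed = time.monotonic() - start
print(len(data), round(elapsed, 3))
```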




Why not implement it as a user mode CUSE driver that would
wrap /dev/urandom and make it behave any way you want to? why push it
into the kernel?


Hadn't considered CUSE. But it does have the issues Steve mentioned in 
his earlier reply.


Another proposal that has been kicked around: a 3rd random chardev, 
which implements this functionality, leaving urandom unscathed. Some 
udev magic or a driver param could move/disable/whatever urandom and put 
this alternate device in its place. Ultimately, identical behavior, but 
the true urandom doesn't get altered at all.



--
Jarod Wilson
ja...@redhat.com




Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Ted Ts'o
On Wed, Sep 07, 2011 at 11:27:12PM +0200, Stephan Mueller wrote:
 
 And exactly that is the concern from organizations like BSI. Their
 cryptographer's concern is that due to the volume of data that you can
 extract from /dev/urandom, you may find cycles or patterns that increase
 the probability to guess the next random value compared to brute force
 attack. Note, it is all about probabilities.

The internal state of urandom is huge, and it does automatically
reseed.  If you can find cycles that are significantly smaller than
what would be expected by the size of the internal state, (or any kind
of pattern at all) then there would be significant flaws in the crypto
algorithm used.

If the BSI folks think otherwise, then they're peddling snake oil FUD
(which is not unusual for security companies).

- Ted


Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Sasha Levin
On Wed, 2011-09-07 at 17:28 -0400, Steve Grubb wrote:
 On Wednesday, September 07, 2011 05:10:27 PM Sasha Levin wrote:
  Something similar probably happens for getting junk on disks before
  creating an encrypted filesystem on top of them.
 
 During system install, this sysctl is not likely to be applied.

It may happen at any time you need to create a new filesystem, which
won't necessarily happen during system install.

See for example the instructions on how to set up a LUKS filesystem:
https://wiki.archlinux.org/index.php/System_Encryption_with_LUKS#Preparation_and_mapping
   
   Those instructions might need to be changed. That is one way of many to
   get random numbers on the disk. Anyone really needing the security to
   have the sysctl on will also probably accept that its doing its job and
   keeping the numbers random. Again, no effect unless you turn it on.
  
  There are bunch of other places that would need to be changed in that
  case :)
  
  Why not implement it as a user mode CUSE driver that would
  wrap /dev/urandom and make it behave any way you want to? why push it
  into the kernel?
 
 For one, auditing does not work for FUSE or things like that. We have to be
 able to audit who is using what. Then there are the FIPS-140 requirements and
 this will spread it. There are problems sending crypto audit events from user
 space, too.

auditd doesn't work with FUSE? AFAIK it should; FUSE is a filesystem
like any other.

-- 

Sasha.



Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Steve Grubb
On Wednesday, September 07, 2011 05:35:18 PM Jarod Wilson wrote:
 Another proposal that has been kicked around: a 3rd random chardev, 
 which implements this functionality, leaving urandom unscathed. Some 
 udev magic or a driver param could move/disable/whatever urandom and put 
 this alternate device in its place. Ultimately, identical behavior, but 
 the true urandom doesn't get altered at all.

Right, and that's what I was trying to say: if we do all that and switch out
urandom with something new that does what we need, what's the difference in
just patching the behavior into urandom and calling it a day? It's simpler,
less fragile, admins won't make mistakes setting up the wrong one in a chroot,
already has the FIPS-140 dressing, and is auditable.

-Steve


Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Sven-Haegar Koch
On Wed, 7 Sep 2011, Steve Grubb wrote:

 On Wednesday, September 07, 2011 05:35:18 PM Jarod Wilson wrote:
  Another proposal that has been kicked around: a 3rd random chardev, 
  which implements this functionality, leaving urandom unscathed. Some 
  udev magic or a driver param could move/disable/whatever urandom and put 
  this alternate device in its place. Ultimately, identical behavior, but 
  the true urandom doesn't get altered at all.
 
 Right, and that's what I was trying to say: if we do all that and switch out
 urandom with something new that does what we need, what's the difference in
 just patching the behavior into urandom and calling it a day? It's simpler,
 less fragile, admins won't make mistakes setting up the wrong one in a
 chroot, already has the FIPS-140 dressing, and is auditable.

I as a 0815 admin would never want such a thing by default.

I already replace /dev/random with /dev/urandom to keep stupid sshd from 
dying because there just is no entropy - I care more about all my 
services staying alive than about perfect random.

c'ya
sven-haegar

-- 
Three may keep a secret, if two of them are dead.
- Ben F.


Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Neil Horman
On Wed, Sep 07, 2011 at 04:56:49PM -0400, Steve Grubb wrote:
 On Wednesday, September 07, 2011 04:37:57 PM Sasha Levin wrote:
  On Wed, 2011-09-07 at 16:30 -0400, Steve Grubb wrote:
   On Wednesday, September 07, 2011 04:23:13 PM Sasha Levin wrote:
On Wed, 2011-09-07 at 16:02 -0400, Steve Grubb wrote:
 On Wednesday, September 07, 2011 03:27:37 PM Ted Ts'o wrote:
  On Wed, Sep 07, 2011 at 02:26:35PM -0400, Jarod Wilson wrote:
   We're looking for a generic solution here that doesn't require
   re-educating every single piece of userspace. And anything done
   in userspace is going to be full of possible holes -- there
   needs to be something in place that actually *enforces* the
   policy, and centralized accounting/tracking, lest you wind up
   with multiple processes racing to grab the entropy.
  
  Yeah, but there are userspace programs that depend on urandom not
  blocking... so your proposed change would break them.
 
 The only time this kicks in is when a system is under attack. If you
 have set this and the system is running as normal, you will never
 notice it even there. Almost all uses of urandom grab 4 bytes and
 seed openssl or libgcrypt or nss. It then uses those libraries.
 There are the odd cases where something uses urandom to generate a
 key or otherwise grab a chunk of bytes, but these are still small
 reads in the scheme of things. Can you think of any legitimate use
 of urandom that grabs 100K or 1M from urandom? Even those numbers
 still won't hit the sysctl on a normally functioning system.

As far as I remember, several wipe utilities are using /dev/urandom to
overwrite disks (possibly several times).
   
   Which should generate disk activity and feed entropy to urandom.
  
  I thought you need to feed random, not urandom.
 
 I think they draw from the same pool.
 
They share the primary pool, where timer/interrupt/etc randomness is fed in.
/dev/random and /dev/urandom each have their own secondary pools however.

  
  Anyway, it won't happen fast enough to actually not block.
  
  Writing 1TB of urandom into a disk won't generate 1TB (or anything close
  to that) of randomness to cover for itself.
 
 We don't need a 1:1 mapping of RNG used to entropy acquired. It's more on
 the scale of 8,000,000:1 or higher.
 
Where are you getting that number from?

You may not need it, but there are other people using this facility as well that
you're not considering.  If you assume that in the example Sasha has given, if
conservatively, you have a modern disk with 4k sectors, and you fill each 4k
sector with the value obtained from a 4-byte read from /dev/urandom, you will:

1) Generate an interrupt for every page you write, which in turn will add at
most 12 bits to the entropy pool

2) Extract 32 bits from the entropy pool

That's just a losing proposition. Barring further entropy generation from
another source, this is bound to stall with this feature enabled. 


 
Something similar probably happens for getting junk on disks before
creating an encrypted filesystem on top of them.
   
   During system install, this sysctl is not likely to be applied.
  
  It may happen at any time you need to create a new filesystem, which
  won't necessarily happen during system install.
  
  See for example the instructions on how to set up a LUKS filesystem:
  https://wiki.archlinux.org/index.php/System_Encryption_with_LUKS#Preparation_and_mapping
 
 Those instructions might need to be changed. That is one way of many to get
 random numbers on the disk. Anyone really needing the security to have the
 sysctl on will also probably accept that it's doing its job and keeping the
 numbers random. Again, no effect unless you turn it on.
 

And then it's enforced on everyone, even those applications that don't want
it/can't work with it on.  This has to be done in such a way that its opt-in on
a per-application basis.  The CUSE idea put up previously sounds like a pretty
good way to do this.  The ioctl for per-fd blocking thresholds is another way to
go.

Neil

 -Steve


Re: [PATCH] random: add blocking facility to urandom

2011-09-07 Thread Sandy Harris
Jarod Wilson ja...@redhat.com wrote:

 Ted Ts'o wrote:

 Yeah, but there are userspace programs that depend on urandom not
 blocking... so your proposed change would break them.
 ...

 But only if you've set the sysctl to a non-zero value, ...

 But again, I want to stress that out of the box, there's absolutely no
 change to the way urandom behaves, no blocking, this *only* kicks in if you
 twiddle the sysctl because you have some sort of security requirement that
 mandates it.

So it only breaks things on systems with high security requirements?


Re: [PATCH] random: add blocking facility to urandom

2011-09-06 Thread Stephan Mueller
On 05.09.2011 04:36:29, +0200, Sandy Harris sandyinch...@gmail.com wrote:

Hi Sandy,

 On Fri, Sep 2, 2011 at 10:37 PM, Jarod Wilson ja...@redhat.com wrote:
 
 Certain security-related certifications and their respective review
 bodies have said that they find use of /dev/urandom for certain
 functions, such as setting up ssh connections, is acceptable, but if and
 only if /dev/urandom can block after a certain threshold of bytes have
 been read from it with the entropy pool exhausted. ...

 At present, urandom never blocks, even after all entropy has been
 exhausted from the entropy input pool. random immediately blocks when
 the input pool is exhausted. Some use cases want behavior somewhere in
 between these two, where blocking only occurs after some number of
 bytes have been read following input pool entropy exhaustion. It's
 possible to accomplish this and make it fully user-tunable, by adding a
 sysctl to set a max-bytes-after-0-entropy read threshold for urandom. In
 the out-of-the-box configuration, urandom behaves as it always has, but
 with a threshold value set, we'll block when it's been exceeded.
 
 Is it possible to calculate what that threshold should be? The Yarrow
 paper includes arguments about the frequency of rekeying required to
 keep a block cipher based generator secure. Is there any similar
 analysis for the hash-based pool? (If not, should we switch to a
 block cipher?)

The current /dev/?random implementation is quite unique. It does not
seem to follow a standard design like Yarrow. Therefore, I have not
seen any analysis of how often rekeying is required.

Switching to a standard implementation may be worthwhile, but may take
some effort to do it right. According to the crypto folks at the German
BSI, /dev/urandom is not allowed for generating key material precisely
due to the non-blocking behavior. It would be acceptable for BSI to use
/dev/urandom, if it blocks after some threshold. Therefore, the patch
from Jarod is the low-hanging fruit, which should not upset anybody, as
/dev/urandom behaves as expected by default. Moreover, in more
sensitive environments, we can use /dev/urandom with the
delayed-blocking behavior where using /dev/random is too restrictive.
 
 /dev/urandom should not block unless it has both produced enough
 output since the last rekey to require a rekey and there is not
 enough entropy in the input pool to drive that rekey.

That is exactly what this patch is supposed to do, is it not?
 
 But what is a reasonable value for "enough" in that sentence?

That is a good question. I will enter a discussion with the German BSI
to see what "enough" means to them. Once that discussion concludes, we
will let you know.


Thanks
Stephan


Re: [PATCH] random: add blocking facility to urandom

2011-09-04 Thread Sandy Harris
On Fri, Sep 2, 2011 at 10:37 PM, Jarod Wilson ja...@redhat.com wrote:

 Certain security-related certifications and their respective review
 bodies have said that they find use of /dev/urandom for certain
 functions, such as setting up ssh connections, is acceptable, but if and
 only if /dev/urandom can block after a certain threshold of bytes have
 been read from it with the entropy pool exhausted. ...

 At present, urandom never blocks, even after all entropy has been
 exhausted from the entropy input pool. random immediately blocks when
 the input pool is exhausted. Some use cases want behavior somewhere in
 between these two, where blocking only occurs after some number of
 bytes have been read following input pool entropy exhaustion. It's
 possible to accomplish this and make it fully user-tunable, by adding a
 sysctl to set a max-bytes-after-0-entropy read threshold for urandom. In
 the out-of-the-box configuration, urandom behaves as it always has, but
 with a threshold value set, we'll block when it's been exceeded.

Is it possible to calculate what that threshold should be? The Yarrow
paper includes arguments about the frequency of rekeying required to
keep a block cipher based generator secure. Is there any similar
analysis for the hash-based pool? (If not, should we switch to a
block cipher?)

/dev/urandom should not block unless it has both produced enough
output since the last rekey to require a rekey and there is not
enough entropy in the input pool to drive that rekey.

But what is a reasonable value for "enough" in that sentence?


[PATCH] random: add blocking facility to urandom

2011-09-02 Thread Jarod Wilson
Certain security-related certifications and their respective review
bodies have said that they find use of /dev/urandom for certain
functions, such as setting up ssh connections, is acceptable, but if and
only if /dev/urandom can block after a certain threshold of bytes have
been read from it with the entropy pool exhausted. Initially, we were
investigating increasing entropy pool contributions, so that we could
simply use /dev/random, but since that hasn't (yet) panned out, and
upwards of five minutes to establish an ssh connection using an
entropy-starved /dev/random is unacceptable, we started looking at the
blocking urandom approach.

At present, urandom never blocks, even after all entropy has been
exhausted from the entropy input pool. random immediately blocks when
the input pool is exhausted. Some use cases want behavior somewhere in
between these two, where blocking only occurs after some number of
bytes have been read following input pool entropy exhaustion. It's
possible to accomplish this and make it fully user-tunable, by adding a
sysctl to set a max-bytes-after-0-entropy read threshold for urandom. In
the out-of-the-box configuration, urandom behaves as it always has, but
with a threshold value set, we'll block when it's been exceeded.

Tested by dd'ing from /dev/urandom in one window, and starting/stopping
a cat of /dev/random in the other, with some debug spew added to the
urandom read function to verify functionality.

CC: Matt Mackall m...@selenic.com
CC: Neil Horman nhor...@redhat.com
CC: Herbert Xu herbert...@redhat.com
CC: Steve Grubb sgr...@redhat.com
CC: Stephan Mueller stephan.muel...@atsec.com
Signed-off-by: Jarod Wilson ja...@redhat.com
---
 drivers/char/random.c |   82 -
 1 files changed, 81 insertions(+), 1 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index c35a785..cf48b0f 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -289,6 +289,13 @@ static int trickle_thresh __read_mostly = INPUT_POOL_WORDS * 28;
 static DEFINE_PER_CPU(int, trickle_count);
 
 /*
+ * In normal operation, urandom never blocks, but optionally, you can
+ * set urandom to block after urandom_block_thresh bytes are read with
+ * the entropy pool exhausted.
+ */
+static int urandom_block_thresh = 0;
+
+/*
  * A pool of size .poolwords is stirred with a primitive polynomial
  * of degree .poolwords over GF(2).  The taps for various sizes are
  * defined below.  They are chosen to be evenly spaced (minimum RMS
@@ -383,6 +390,7 @@ static struct poolinfo {
  * Static global variables
  */
 static DECLARE_WAIT_QUEUE_HEAD(random_read_wait);
+static DECLARE_WAIT_QUEUE_HEAD(urandom_read_wait);
 static DECLARE_WAIT_QUEUE_HEAD(random_write_wait);
 static struct fasync_struct *fasync;
 
@@ -554,6 +562,7 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 	/* should we wake readers? */
 	if (r == &input_pool && entropy_count >= random_read_wakeup_thresh) {
 		wake_up_interruptible(&random_read_wait);
+		wake_up_interruptible(&urandom_read_wait);
 		kill_fasync(&fasync, SIGIO, POLL_IN);
 	}
 	spin_unlock_irqrestore(&r->lock, flags);
@@ -1060,7 +1069,55 @@ random_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
 static ssize_t
 urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
 {
-	return extract_entropy_user(&nonblocking_pool, buf, nbytes);
+	ssize_t n;
+	static int excess_bytes_read;
+
+	/* this is the default case with no urandom blocking threshold set */
+	if (!urandom_block_thresh)
+		return extract_entropy_user(&nonblocking_pool, buf, nbytes);
+
+	if (nbytes == 0)
+		return 0;
+
+	DEBUG_ENT("reading %d bits\n", nbytes*8);
+
+	/* urandom blocking threshold set, but we have sufficient entropy */
+	if (input_pool.entropy_count >= random_read_wakeup_thresh) {
+		excess_bytes_read = 0;
+		return extract_entropy_user(&nonblocking_pool, buf, nbytes);
+	}
+
+	/* low on entropy, start counting bytes read */
+	if (excess_bytes_read + nbytes < urandom_block_thresh) {
+		n = extract_entropy_user(&nonblocking_pool, buf, nbytes);
+		excess_bytes_read += n;
+		return n;
+	}
+
+	/* low entropy read threshold exceeded, now we have to block */
+	n = nbytes;
+	if (n > SEC_XFER_SIZE)
+		n = SEC_XFER_SIZE;
+
+	n = extract_entropy_user(&nonblocking_pool, buf, n);
+	excess_bytes_read += n;
+
+	if (file->f_flags & O_NONBLOCK)
+		return -EAGAIN;
+
+	DEBUG_ENT("sleeping?\n");
+
+	wait_event_interruptible(urandom_read_wait,
+		input_pool.entropy_count >= random_read_wakeup_thresh);
+
+	DEBUG_ENT("awake\n");
+
+	if (signal_pending(current))
+		return -ERESTARTSYS;
+
+