Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Tomas Mraz
On Wed, 2011-09-07 at 19:57 -0400, Neil Horman wrote: 
 On Wed, Sep 07, 2011 at 04:56:49PM -0400, Steve Grubb wrote:
  On Wednesday, September 07, 2011 04:37:57 PM Sasha Levin wrote:
   Anyway, it won't happen fast enough to actually not block.
   
   Writing 1TB of urandom into a disk won't generate 1TB (or anything close
   to that) of randomness to cover for itself.
  
  We don't need a 1:1 mapping of RNG used to entropy acquired. It's more on
  the scale of 8,000,000:1 or higher.
  
 Where are you getting that number from?
 
 You may not need it, but there are other people using this facility as well
 that you're not considering.  In the example Sasha has given, if you
 conservatively assume a modern disk with 4k sectors and fill each 4k sector
 with the value obtained from a 4 byte read from /dev/urandom, you will:
 
 1) Generate an interrupt for every page you write, which in turn will add at
 most 12 bits to the entropy pool
 
 2) Extract 32 bits from the entropy pool
 
 That's just a losing proposition.  Barring further entropy generation from
 another source, this is bound to stall with this feature enabled.
Why so? If the blocking limit allows 8 Mbits of data to be read
from /dev/urandom for every 1 bit added to the entropy pool (this is not
exactly how the patch behaves, but it is a fair approximation), I do not
see how /dev/urandom can block if the bytes read from it are written to a
disk device - of course only if the device adds entropy into the primary
pool when there are writes on the device.

Of course you can still easily make /dev/urandom block occasionally with
this patch: just read the data and drop it.

But you have to understand that the value that will be set with the
sysctl added by this patch will be large, on the order of millions of
bits.
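
For concreteness, a back-of-the-envelope check of the numbers in this
exchange - a sketch only, taking Neil's figures at face value (4k sectors,
one 4-byte urandom read per sector, at most 12 bits of interrupt entropy
credited per write):

#include <stdio.h>

int main(void)
{
    unsigned long long disk_bytes = 1ULL << 40;       /* 1 TB to fill */
    unsigned long long sectors = disk_bytes / 4096;   /* one write per sector */
    unsigned long long bits_out = sectors * 32;       /* 4-byte urandom read */
    unsigned long long bits_in = sectors * 12;        /* best-case credit */

    printf("extracted %llu bits, credited at most %llu bits, deficit %llu\n",
           bits_out, bits_in, bits_out - bits_in);
    return 0;
}

Extraction outpaces credit by less than 3:1 here, far inside the
8,000,000:1 approximation above, so this particular workload should not
block; reads that generate no interrupts at all are another matter.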

-- 
Tomas Mraz
No matter how far down the wrong road you've gone, turn back.
  Turkish proverb



Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Sasha Levin
On Wed, 2011-09-07 at 17:43 -0400, Steve Grubb wrote:
 On Wednesday, September 07, 2011 05:35:18 PM Jarod Wilson wrote:
  Another proposal that has been kicked around: a 3rd random chardev, 
  which implements this functionality, leaving urandom unscathed. Some 
  udev magic or a driver param could move/disable/whatever urandom and put 
  this alternate device in its place. Ultimately, identical behavior, but 
  the true urandom doesn't get altered at all.
 
 Right, and what I was trying to say is: if we do all that and switch out
 urandom with something new that does what we need, what's the difference
 from just patching the behavior into urandom and calling it a day? It's
 simpler, less fragile, admins won't make mistakes setting up the wrong one
 in a chroot, it already has the FIPS-140 dressing, and it is auditable.

What's the difference between changing the behavior of a well-defined
interface (/dev/urandom), which may cause userspace applications to fail,
as opposed to a non-intrusive usermode CUSE driver which can do exactly
what you need (and more - if more is required in the future)? None, none
at all...

CUSE supports kernel auditing, admins making mistakes is hardly the
kernel's problem (unless the kernel makes it easy for them to make
mistakes), and code moved into the kernel doesn't suddenly become more
stable and simpler.
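
To make the CUSE suggestion concrete, here is a minimal sketch, assuming
libfuse's cuse_lowlevel API (the device name /dev/blockrandom and the
policy comment are made up for illustration): a userspace char device that
proxies /dev/urandom and could apply any blocking policy without touching
the real device.

#define FUSE_USE_VERSION 29
#include <fuse/cuse_lowlevel.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

static int urandom_fd;

static void br_open(fuse_req_t req, struct fuse_file_info *fi)
{
    fuse_reply_open(req, fi);
}

static void br_read(fuse_req_t req, size_t size, off_t off,
                    struct fuse_file_info *fi)
{
    char buf[4096];
    ssize_t n;

    (void)off; (void)fi;
    if (size > sizeof(buf))
        size = sizeof(buf);
    /* a real driver would consult its entropy-budget policy here and
     * sleep when the budget is exhausted */
    n = read(urandom_fd, buf, size);
    if (n < 0)
        fuse_reply_err(req, EIO);
    else
        fuse_reply_buf(req, buf, n);
}

static const struct cuse_lowlevel_ops br_ops = {
    .open = br_open,
    .read = br_read,
};

int main(int argc, char **argv)
{
    const char *dev_info[] = { "DEVNAME=blockrandom" };
    struct cuse_info ci = {
        .dev_info_argc = 1,
        .dev_info_argv = dev_info,
    };

    urandom_fd = open("/dev/urandom", O_RDONLY);
    if (urandom_fd < 0)
        return 1;
    return cuse_lowlevel_main(argc, argv, &ci, &br_ops, NULL);
}

With the udev magic Jarod mentioned, whatever needs the certified behavior
can be pointed at /dev/blockrandom while everything else keeps the stock
/dev/urandom.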

-- 

Sasha.



Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Christoph Hellwig
On Wed, Sep 07, 2011 at 04:02:24PM -0400, Steve Grubb wrote:
 The only time this kicks in is when a system is under attack. If you have
 set this and the system is running as normal, you will never notice it is
 even there.

So your userspace will break exactly when you can least afford it and can't
debug it, awesome.


Could you security certification folks please get off your crack ASAP?



Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Christoph Hellwig
On Wed, Sep 07, 2011 at 05:18:58PM -0400, Ted Ts'o wrote:
 If this is the basis for the patch, then we should definitely NACK it.
 It sounds like snake oil fear mongering.

You've been around long enough to know that Steve and his gang do nothing
but sell snake oil.


Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Christoph Hellwig
On Wed, Sep 07, 2011 at 11:27:12PM +0200, Stephan Mueller wrote:
 And exactly that is the concern of organizations like BSI. Their
 cryptographers' concern is that, due to the volume of data that you can
 extract from /dev/urandom, you may find cycles or patterns that increase
 the probability of guessing the next random value compared to a brute
 force attack. Note, it is all about probabilities.

So don't use /dev/urandom if you don't like the behaviour.  Breaking all
existing applications because of a certification is simply not an option.



Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Steve Grubb
On Thursday, September 08, 2011 04:44:20 AM Christoph Hellwig wrote:
 On Wed, Sep 07, 2011 at 11:27:12PM +0200, Stephan Mueller wrote:
  And exactly that is the concern of organizations like BSI. Their
  cryptographers' concern is that, due to the volume of data that you can
  extract from /dev/urandom, you may find cycles or patterns that increase
  the probability of guessing the next random value compared to a brute
  force attack. Note, it is all about probabilities.
 
 So don't use /dev/urandom if you don't like the behaviour.  Breaking all
 existing applications because of a certification is simply not an option.

This patch does not _break_ all existing applications. If a system were
under attack, they might pause momentarily, but they do not break. Please,
try the patch and use a nice large number like 200 and see for yourself.
Right now, no one arguing about this breaking things has tried it to see
if in fact things do break, and how they break if they do.

-Steve


Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Neil Horman
On Thu, Sep 08, 2011 at 08:41:57AM +0200, Tomas Mraz wrote:
 On Wed, 2011-09-07 at 19:57 -0400, Neil Horman wrote: 
  On Wed, Sep 07, 2011 at 04:56:49PM -0400, Steve Grubb wrote:
   On Wednesday, September 07, 2011 04:37:57 PM Sasha Levin wrote:
    Anyway, it won't happen fast enough to actually not block.
    
    Writing 1TB of urandom into a disk won't generate 1TB (or anything
    close to that) of randomness to cover for itself.
   
   We don't need a 1:1 mapping of RNG used to entropy acquired. It's more
   on the scale of 8,000,000:1 or higher.
   
  Where are you getting that number from?
  
  You may not need it, but there are other people using this facility as
  well that you're not considering.  In the example Sasha has given, if you
  conservatively assume a modern disk with 4k sectors and fill each 4k
  sector with the value obtained from a 4 byte read from /dev/urandom, you
  will:
  
  1) Generate an interrupt for every page you write, which in turn will add at
  most 12 bits to the entropy pool
  
  2) Extract 32 bits from the entropy pool
  
  That's just a losing proposition.  Barring further entropy generation
  from another source, this is bound to stall with this feature enabled.
 Why so? If the blocking limit allows 8 Mbits of data to be read
 from /dev/urandom for every 1 bit added to the entropy pool (this is not
 exactly how the patch behaves, but it is a fair approximation), I do not
 see how /dev/urandom can block if the bytes read from it are written
Easy, all you have to do is read 8MB of data out of /dev/urandom (plus whatever
other conditions are needed to first drain the entropy pool), prior to that bit
of entropy getting added.
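
The attack needs nothing more exotic than reading and discarding; a
trivial sketch:

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    long long total = 0;
    ssize_t n;
    int fd = open("/dev/urandom", O_RDONLY);

    if (fd < 0)
        return 1;
    /* pull ~8MB out of the pool and throw it away; nothing here
     * generates interrupts that would credit entropy back */
    while (total < 8 * 1024 * 1024 &&
           (n = read(fd, buf, sizeof(buf))) > 0)
        total += n;
    close(fd);
    return 0;
}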

 to a disk device - of course only if the device adds entropy into the
 primary pool when there are writes on the device.
Yes, and that's a problem.  We're assuming in the above case that writes to
disk generate interrupts, which in turn generate entropy in the pool.  If
that happens, then yes, it can be difficult (though far from impossible) to
block on urandom with this patch and a sufficiently high blocking
threshold.  But interrupt randomness is only added for interrupts flagged
with IRQF_SAMPLE_RANDOM, and if you look, almost no hard irqs add that
flag.  So it's possible (and even likely) that writing to disk will not
generate additional entropy.
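
For reference, whether a device's interrupts feed the pool at all hinges
on the driver passing that flag when it registers its handler; a sketch
against the 2011-era API (my_dev and MY_DEV_IRQ are made-up names):

#include <linux/interrupt.h>

#define MY_DEV_IRQ 10    /* hypothetical IRQ line */

static irqreturn_t my_dev_interrupt(int irq, void *dev_id)
{
    /* ... service the hardware ... */
    return IRQ_HANDLED;
}

static int my_dev_setup(void)
{
    /* without IRQF_SAMPLE_RANDOM here, this device's interrupts
     * contribute nothing to the entropy pool */
    return request_irq(MY_DEV_IRQ, my_dev_interrupt,
                       IRQF_SAMPLE_RANDOM, "my_dev", NULL);
}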

 
 Of course you can still easily make /dev/urandom block occasionally with
 this patch: just read the data and drop it.
 
 But you have to understand that the value that will be set with the
 sysctl added by this patch will be large, on the order of millions of
 bits.
 
You can guarantee that?  This sysctl allows a setting of 2 just as easily
as it allows a setting of 8,000,000.  And the former is sure to break or
otherwise adversely affect applications that expect urandom to never block.
That's what Sasha was referring to when he said this patch makes it easy
for admins to make serious mistakes.

Neil

 -- 
 Tomas Mraz
 No matter how far down the wrong road you've gone, turn back.
   Turkish proverb
 


Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Steve Grubb
On Thursday, September 08, 2011 08:52:34 AM Neil Horman wrote:
  to a disk device - of course only if the device adds entropy into the
  primary pool when there are writes on the device.
 
 Yes, and that's a problem.  We're assuming in the above case that writes
 to disk generate interrupts, which in turn generate entropy in the pool.
 If that happens, then yes, it can be difficult (though far from
 impossible) to block on urandom with this patch and a sufficiently high
 blocking threshold.  But interrupt randomness is only added for interrupts
 flagged with IRQF_SAMPLE_RANDOM, and if you look, almost no hard irqs add
 that flag.  So it's possible (and even likely) that writing to disk will
 not generate additional entropy.

The system being low on entropy is another problem that should be
addressed. For our purposes, we cannot just say "take it from the TPM or
RDRAND or any plugin board". We have to have the mathematical analysis
that goes with it; we need to know where the entropy comes from, and a
worst case entropy estimation. It has to be documented in detail. The only
way we can be certain is if it's based on system events. Linux systems are
constantly low on entropy, and this really needs addressing. But that is a
separate issue. For real world use, I'd recommend everyone use a TPM chip
+ rngd and you'll never be short on random numbers. But in the case where
we are certifying the OS, we need the mathematical argument to prove that,
unaided, things are correct.

 
  Of course you can still easily make /dev/urandom block occasionally with
  this patch: just read the data and drop it.
  
  But you have to understand that the value that will be set with the
  sysctl added by this patch will be large, on the order of millions of
  bits.
 
 You can guarantee that?  

One proposal I made to Jarod was to add some minimum threshold that would
prevent people from setting a value of 2, for example. Maybe the threshold
could be set at 64K or higher, depending on what number we get back from
BSI.
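
Something like proc_dointvec_minmax would enforce that floor; a sketch
(the sysctl name, default, and minimum are placeholders, not what the
patch actually uses):

#include <linux/sysctl.h>

static int urandom_block_min = 65536;       /* hypothetical 64K floor */
static int urandom_block_thresh = 8000000;  /* hypothetical default */

static struct ctl_table urandom_ctl_table[] = {
    {
        .procname     = "urandom_blocking_threshold",
        .data         = &urandom_block_thresh,
        .maxlen       = sizeof(int),
        .mode         = 0644,
        /* writes below .extra1 are rejected, so an admin cannot
         * set the threshold to 2 */
        .proc_handler = proc_dointvec_minmax,
        .extra1       = &urandom_block_min,
    },
    { }
};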

 This sysctl allows a setting of 2 just as easily as it allows a setting
 of 8,000,000.  And the former is sure to break or otherwise adversely
 affect applications that expect urandom to never block. That's what Sasha
 was referring to when he said this patch makes it easy for admins to make
 serious mistakes.

Would a sufficiently high threshold make this easier to accept?

-Steve



Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Neil Horman
On Thu, Sep 08, 2011 at 09:11:12AM -0400, Steve Grubb wrote:
 On Thursday, September 08, 2011 08:52:34 AM Neil Horman wrote:
   to a disk device - of course only if the device adds entropy into the
   primary pool when there are writes on the device.
  
  Yes, and that's a problem.  We're assuming in the above case that writes
  to disk generate interrupts, which in turn generate entropy in the pool.
  If that happens, then yes, it can be difficult (though far from
  impossible) to block on urandom with this patch and a sufficiently high
  blocking threshold.  But interrupt randomness is only added for
  interrupts flagged with IRQF_SAMPLE_RANDOM, and if you look, almost no
  hard irqs add that flag.  So it's possible (and even likely) that writing
  to disk will not generate additional entropy.
 
 The system being low on entropy is another problem that should be
 addressed. For our purposes, we cannot just say "take it from the TPM or
 RDRAND or any plugin board". We have to have the mathematical analysis
 that goes with it; we need to know where the entropy comes from, and a
 worst case entropy estimation. It has to be documented in detail. The
 only way we can be certain is if it's based on system events. Linux
 systems are constantly low on entropy, and this really needs addressing.
 But that is a separate issue. For real world use, I'd recommend everyone
 use a TPM chip + rngd and you'll never be short on random numbers. But in
 the case where we are certifying the OS, we need the mathematical
 argument to prove that, unaided, things are correct.
 
I agree, it would be great if we had more entropy as a rule, but that's not
really what this patch is about.  It's about how we behave in our various
interfaces when we don't have entropy.

  
   Of course you can still easily make /dev/urandom block occasionally
   with this patch: just read the data and drop it.
   
   But you have to understand that the value that will be set with the
   sysctl added by this patch will be large, on the order of millions of
   bits.
  
  You can guarantee that?  
 
 One proposal I made to Jarod was to add some minimum threshold that would
 prevent people from setting a value of 2, for example. Maybe the
 threshold could be set at 64K or higher, depending on what number we get
 back from BSI.
 
  This sysctl allows a setting of 2 just as easily as it allows a setting
  of 8,000,000.  And the former is sure to break or otherwise adversely
  affect applications that expect urandom to never block. That's what
  Sasha was referring to when he said this patch makes it easy for admins
  to make serious mistakes.
 
 Would a sufficiently high threshold make this easier to accept?
 

I don't know, but IMO, no.  The problems with this implementation go beyond
just picking the appropriate threshold.  As several others have commented,
there are problems:

1) With having a threshold at all - I still don't think it's clear what a
'good' threshold is and why.  I've seen 8,000,000 bytes beyond zero entropy
tossed about.  I presume that's used because it's been shown that after
8,000,000 bytes read beyond zero entropy, the internal state of the urandom
device can be guessed?  If so, how?  If not, what is the magic number?

2) With the implementation.  There are still unaddressed concerns about
applications which expect urandom to never block living in conjunction with
applications that can tolerate blocking.  As you noted above, entropy is in
short supply on Linux systems.  Regardless of what threshold you set, it is
possible that it will not be high enough to prevent urandom from blocking
for indefinite periods of time.  Not addressing this is, I think, a
complete show stopper.  The CUSE driver has been proposed as a solution
here and I think it's a good one.  It lets those who are worried about this
sort of attack mitigate it, and leaves the rest of the world alone (and
ostensibly is auditable).

Neil

 -Steve
 


Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread David Miller
From: Steve Grubb <sgr...@redhat.com>
Date: Thu, 8 Sep 2011 07:48:27 -0400

 On Thursday, September 08, 2011 04:44:20 AM Christoph Hellwig wrote:
 On Wed, Sep 07, 2011 at 11:27:12PM +0200, Stephan Mueller wrote:
  And exactly that is the concern of organizations like BSI. Their
  cryptographers' concern is that, due to the volume of data that you can
  extract from /dev/urandom, you may find cycles or patterns that increase
  the probability of guessing the next random value compared to a brute
  force attack. Note, it is all about probabilities.
 
 So don't use /dev/urandom if you don't like the behaviour.  Breaking all
 existing applications because of a certification is simply not an option.
 
 This patch does not _break_ all existing applications. If a system were
 under attack, they might pause momentarily, but they do not break.
 Please, try the patch and use a nice large number like 200 and see for
 yourself. Right now, no one arguing about this breaking things has tried
 it to see if in fact things do break, and how they break if they do.

If the application holds a critical resource other threads want when it
blocks on /dev/urandom, then your change breaks things.  I can come up
with more examples if you like.
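
A toy sketch of that scenario in plain pthreads: if the thread holding a
lock blocks inside read(2) on a now-blocking /dev/urandom, every thread
waiting on that lock stalls with it, even though it never touches
randomness itself.

#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker_a(void *arg)
{
    char key[16];
    int fd = open("/dev/urandom", O_RDONLY);

    (void)arg;
    pthread_mutex_lock(&lock);
    /* with the patch, this read may sleep indefinitely while the
     * lock is held */
    if (fd >= 0)
        (void)read(fd, key, sizeof(key));
    pthread_mutex_unlock(&lock);
    if (fd >= 0)
        close(fd);
    return NULL;
}

static void *worker_b(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);   /* stalls for as long as A sleeps */
    /* ... critical work that has nothing to do with randomness ... */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, worker_a, NULL);
    pthread_create(&b, NULL, worker_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}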

Please get off this idea that you can just change the blocking behavior
for a file descriptor and nothing of consequence will happen.

When this happens in networking code due to a bug or similar, we know
it does break things.


Re: [PATCH] random: add blocking facility to urandom

2011-09-08 Thread Sandy Harris
On Thu, Sep 8, 2011 at 9:11 PM, Steve Grubb <sgr...@redhat.com> wrote:

 The system being low on entropy is another problem that should be
 addressed. For our purposes, we cannot just say "take it from the TPM or
 RDRAND or any plugin board". We have to have the mathematical analysis
 that goes with it; we need to know where the entropy comes from, and a
 worst case entropy estimation.

Much of that is in the driver code's comments or in previous email
threads. For example, this thread covers many of the issues:
http://yarchive.net/comp/linux/dev_random.html
There are plenty of others as well.

 It has to be documented in detail.

Yes. But apart from code comments, what documentation
are we talking about? Googling for /dev/random on tldp.org
turns up nothing that treats this in any detail.


 The only way we can be certain is if it's based on system events. Linux
 systems are constantly low on entropy, and this really needs addressing.
 But that is a separate issue. For real world use, I'd recommend everyone
 use a TPM chip + rngd and you'll never be short on random numbers.

Yes. Here's something I wrote on the Debian Freedombox list:

| No problem on a typical Linux desktop; it does not
| do much crypto and /dev/random gets input from
| keyboard & mouse movement, disk delays, etc.
| However, it might be a major problem for a plug
| server that does more crypto, runs headless, and
| uses solid state storage.

| Some plug computers may have a hardware RNG,
| which is the best solution, but we cannot count on
| that in the general case.

| Where the plug has a sound card equivalent, and
| it isn't used for sound, there is a good solution
| using circuit noise in the card as the basis for
| a hardware RNG.
| http://www.av8n.com/turbid/paper/turbid.htm

| A good academic paper on the problem is:
| https://db.usenix.org/publications/library/proceedings/sec98/gutmann.html

| However, his software does not turn up in
| the Ubuntu repository. Is it in Debian?
| Could it be?

| Ubuntu, and I assume Debian, does have
| Havege, another researcher's solution
| to the same problem.
| http://www.irisa.fr/caps/projects/hipsor/

Some of that sort of discussion should be in the documentation.
I'm not sure how much currently is.

 But in the case where we are certifying the OS, we need the
 mathematical argument to prove that unaided, things are correct.

No, we cannot prove that "unaided, things are correct" if
by "correct" you mean urandom output is safe against all
conceivable attacks and by "unaided" you mean without
new entropy inputs. It is a PRNG, so without reseeding it
must be breakable in theory; that comes with the territory.

That need not be a problem, though. We cannot /prove/
that any of the ciphers or hashes in widespread use are
correct either. In fact, we can prove the opposite; they
are all provably breakable by an opponent with enough
resources, for extremely large values of "enough".

Consider a block cipher like AES: there are three known
attacks that must break it in theory -- brute force search
for the key, or reduce the cipher to a set of equations
then feed in some known plaintext/ciphertext pairs and
solve for the key, or just collect enough known pairs to
build a codebook that breaks the cipher. We know the
brute force and codebook attacks are astronomically
expensive, and there are good arguments that algebra
is as well, but they all work in theory. Despite that, we
can use AES with reasonable confidence and with
certifications from various government bodies.

There are similar arguments for confidence in urandom.
The simplest are the size of the state relative to the
outputs, and the XOR that reduces 160 bits of SHA-1
output to 80 bits of generator output. More detailed
discussion is in the first thread I cited above.
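
A simplified illustration of that folding step (a sketch, not the
kernel's exact code): the extractor computes a 160-bit SHA-1 digest but
emits only 80 bits, XORing one half of the digest into the other so raw
hash output is never exposed:

#include <stdint.h>

/* fold a 160-bit digest into 80 bits of output */
void fold_digest(const uint8_t sha1[20], uint8_t out[10])
{
    int i;

    for (i = 0; i < 10; i++)
        out[i] = sha1[i] ^ sha1[10 + i];
}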

Barring a complete failure of SHA-1, an enemy who wants to
infer the state from outputs needs astronomically large amounts
of both data and effort.