Re: Worry about entropy?

2014-12-03 Thread Sven Hoexter
On Mon, Dec 01, 2014 at 04:48:36PM -0400, francis picabia wrote:

Hi,

> Has anyone seen a significant
> performance boost, or at least avoided timeouts
> under load, by keeping entropy fed
> somehow?  I've already read the articles discussing
> use of /dev/random etc., but I'm talking about things
> I implement, not things I code.  I can imagine
> an encrypted file system or ownCloud and that
> sort of thing being aided, but could it also be
> important for SSL?

I've seen applications that block due to missing
entropy, but those were not DNSSEC related.
I'd recommend trying haveged to see if
the situation improves. If you really need a lot
of entropy, you can look at the Simtec Entropy Key
together with the ekeyd daemon. I bought one a few years back
just to try it and can confirm that it's easy to
integrate.

Sven





Re: Worry about entropy?

2014-12-03 Thread Aaron Toponce
On Mon, Dec 01, 2014 at 04:48:36PM -0400, francis picabia wrote:
> I'm looking at a DNSSEC implementation.  One guide
> points out haveged as a way to speed up
> dnssec-keygen.  It certainly did.  I'm wondering if
> anyone has noticed a performance improvement by running
> haveged on systems with certain applications.

Instead of trying to rely on /dev/random, use /dev/urandom. Haveged is
interesting, but I think it might be a bit liberal in its entropy estimates.
In any event, it feeds data into the same CSPRNG that both /dev/random and
/dev/urandom read from, so it's no more secure than just relying on
/dev/urandom directly.
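
For illustration, a minimal Python sketch of that recommendation; on Linux,
os.urandom() is backed by /dev/urandom (or the equivalent getrandom() call),
so it returns immediately regardless of the kernel's entropy estimate:

    import os

    # os.urandom() reads /dev/urandom on Linux: it does not block once the
    # pool has been initialised, and the output is suitable for keys,
    # salts, session tokens, and similar.
    key_material = os.urandom(32)   # 256 bits
    print(key_material.hex())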

> Commonly found advice on the net
> is to look at /proc/sys/kernel/random/entropy_avail,
> which should be around 2000 or better.
> Another comment said that value is
> merely an estimate.  Checking some Red Hat
> server systems I handle, I'm seeing values between
> 100 and 200 most often.  One Debian KVM system wildly
> varies from 2000 down to 150 within a few seconds,
> but it isn't under any noticeable load.

Entropy is _always_ an estimate. It's an approximate measurement of the
unpredictability of the state of the system. In physics, it's an approximate
measurement of the unpredictability of the state of gas particles in a closed
system. Entropy isn't something you use.

> Has anyone seen a significant
> performance boost, or at least avoided timeouts
> under load, by keeping entropy fed
> somehow?  I've already read the articles discussing
> use of /dev/random etc., but I'm talking about things
> I implement, not things I code.  I can imagine
> an encrypted file system or ownCloud and that
> sort of thing being aided, but could it also be
> important for SSL?

OpenSSL, OpenSSH (which uses OpenSSL for random number generation), OpenVPN
(which also uses OpenSSL), Kerberos (ditto), and even GnuPG (except for key
generation) all use /dev/urandom.

You should too.

The only thing you'll get out of /dev/random is frustration due to blocking
whenever the system's entropy estimate is low. Use /dev/urandom, and be
happy. And secure.
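
As a rough illustration of the difference (a sketch, not production code):
/dev/random on kernels of this era refuses to hand out bytes when the entropy
estimate is low, while /dev/urandom always answers.

    import os

    # Non-blocking read from /dev/random: on kernels of this era it raises
    # BlockingIOError whenever the entropy estimate is too low.
    fd = os.open("/dev/random", os.O_RDONLY | os.O_NONBLOCK)
    try:
        print("/dev/random gave", len(os.read(fd, 64)), "bytes")
    except BlockingIOError:
        print("/dev/random would block right now")
    finally:
        os.close(fd)

    # /dev/urandom never blocks once the pool is initialised.
    print("/dev/urandom gave", len(os.urandom(64)), "bytes")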





Re: Worry about entropy?

2014-12-03 Thread francis picabia
On Wed, Dec 3, 2014 at 2:38 PM, Aaron Toponce aaron.topo...@gmail.com wrote:

> On Mon, Dec 01, 2014 at 04:48:36PM -0400, francis picabia wrote:
> > I'm looking at a DNSSEC implementation.  One guide
> > points out haveged as a way to speed up
> > dnssec-keygen.  It certainly did.  I'm wondering if
> > anyone has noticed a performance improvement by running
> > haveged on systems with certain applications.
>
> Instead of trying to rely on /dev/random, use /dev/urandom. Haveged is
> interesting, but I think it might be a bit liberal in its entropy estimates.
> In any event, it feeds data into the same CSPRNG that both /dev/random and
> /dev/urandom read from, so it's no more secure than just relying on
> /dev/urandom directly.
>
> > Commonly found advice on the net
> > is to look at /proc/sys/kernel/random/entropy_avail,
> > which should be around 2000 or better.
> > Another comment said that value is
> > merely an estimate.  Checking some Red Hat
> > server systems I handle, I'm seeing values between
> > 100 and 200 most often.  One Debian KVM system wildly
> > varies from 2000 down to 150 within a few seconds,
> > but it isn't under any noticeable load.
>
> Entropy is _always_ an estimate. It's an approximate measurement of the
> unpredictability of the state of the system. In physics, it's an approximate
> measurement of the unpredictability of the state of gas particles in a
> closed system. Entropy isn't something you use.
>
> > Has anyone seen a significant
> > performance boost, or at least avoided timeouts
> > under load, by keeping entropy fed
> > somehow?  I've already read the articles discussing
> > use of /dev/random etc., but I'm talking about things
> > I implement, not things I code.  I can imagine
> > an encrypted file system or ownCloud and that
> > sort of thing being aided, but could it also be
> > important for SSL?
>
> OpenSSL, OpenSSH (which uses OpenSSL for random number generation), OpenVPN
> (which also uses OpenSSL), Kerberos (ditto), and even GnuPG (except for key
> generation) all use /dev/urandom.
>
> You should too.
>
> The only thing you'll get out of /dev/random is frustration due to blocking
> whenever the system's entropy estimate is low. Use /dev/urandom, and be
> happy. And secure.


So it seems it is mainly the *-keygen type applications that rely
on /dev/random, while the rest use /dev/urandom.  In that case,
there would be little benefit to running haveged all the time
if few day-to-day processes read /dev/random.
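
One rough way to check that assumption is to see which processes currently
hold either device open. A Python sketch (it only catches processes that keep
the file descriptor open, and needs root to inspect other users' processes):

    import glob, os

    # Walk /proc/<pid>/fd and report processes with /dev/random or
    # /dev/urandom open.
    for link in glob.glob("/proc/[0-9]*/fd/*"):
        try:
            target = os.readlink(link)
        except OSError:
            continue
        if target in ("/dev/random", "/dev/urandom"):
            pid = link.split("/")[2]
            with open("/proc/%s/comm" % pid) as f:
                print(pid, f.read().strip(), target)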


Worry about entropy?

2014-12-01 Thread francis picabia
I'm looking at a DNSSEC implementation.  One guide
points out haveged as a way to speed up
dnssec-keygen.  It certainly did.  I'm wondering if
anyone has noticed a performance improvement by running
haveged on systems with certain applications.
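
For anyone who wants to reproduce that observation, a rough timing sketch
(it assumes BIND's dnssec-keygen is on the PATH; the algorithm, key size, and
zone name are just examples):

    import subprocess, time

    # Time one key generation.  dnssec-keygen of this era reads /dev/random
    # by default, so this stalls when the kernel's entropy estimate is low;
    # with haveged running it should finish quickly.
    start = time.time()
    subprocess.check_call(["dnssec-keygen", "-a", "RSASHA256",
                           "-b", "2048", "example.com"])
    print("dnssec-keygen took %.1f seconds" % (time.time() - start))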

Commonly found advice on the net
is to look at /proc/sys/kernel/random/entropy_avail,
which should be around 2000 or better.
Another comment said that value is
merely an estimate.  Checking some Red Hat
server systems I handle, I'm seeing values between
100 and 200 most often.  One Debian KVM system wildly
varies from 2000 down to 150 within a few seconds,
but it isn't under any noticeable load.
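
A quick way to watch that estimate fluctuate is to poll the same proc file;
a minimal sketch:

    import time

    # Print the kernel's current entropy estimate once a second.
    while True:
        with open("/proc/sys/kernel/random/entropy_avail") as f:
            print(f.read().strip())
        time.sleep(1)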

Has anyone seen a significant
performance boost, or at least avoided timeouts
under load, by keeping entropy fed
somehow?  I've already read the articles discussing
use of /dev/random etc., but I'm talking about things
I implement, not things I code.  I can imagine
an encrypted file system or ownCloud and that
sort of thing being aided, but could it also be
important for SSL?