[EMAIL PROTECTED] writes:

> Simon Josefsson <[EMAIL PROTECTED]> wrote:
>> [EMAIL PROTECTED] writes:
>>> That paper deserves a longer reply, but even granting every claim it
>>> makes, the only things it complains about are forward secrecy (is it
>>> feasible to reproduce earlier /dev/*random outputs after capturing the
>>> internal state of the pool from kernel memory?) and entropy estimation
>>> (is there really as much seed entropy as /dev/random estimates?).
>>>
>>> The latter is only relevant to /dev/random.
>>
>> Why's that?  If the entropy estimation is wrong, you may have too
>> little or no entropy.  /dev/urandom can't give you more entropy than
>> /dev/random in that case.
>
> The quality or lack thereof of the kernel's entropy estimation is relevant
> only to /dev/random because /dev/urandom's operation doesn't depend on
> the entropy estimates.  If you're using /dev/urandom, it doesn't matter
> if the kernel's entropy estimation is right, wrong, or commented out of
> the source code.

My point is that I believe the quality of /dev/urandom depends on the
quality of /dev/random: if you have found a problem in /dev/random, it
would affect /dev/urandom as well.

>> However, my main concern with Linux's /dev/urandom is that it is too
>> slow, not that the entropy estimate may be wrong.  I don't see why it
>> couldn't be a fast PRNG with good properties (forward secrecy) seeded by
>> a continuously refreshed strong seed, so that reading GBs of data from
>> /dev/urandom would not deplete the /dev/random entropy pool.  This would
>> help 'dd if=/dev/urandom of=/dev/hda' as well.
>
> Being slow is/was one of Ted Ts'o's original design goals.  He wanted
> to do in kernel space ONLY what is not feasible to do in user space.
> The fast PRNG you propose is trivial to do in user space, where it
> can be seeded from /dev/urandom.

I don't think a PRNG seeded from /dev/urandom would be a good idea.  You
should seed a PRNG from /dev/random to make sure you get sufficient
entropy into the PRNG seed.
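
To make that concrete, here is a minimal sketch of the seeding step I
have in mind; prng_init() is a placeholder for whatever generator the
application actually uses, not a real API:

  #include <stdio.h>

  int main(void)
  {
      unsigned char seed[32];    /* 256 bits of seed material */
      FILE *f = fopen("/dev/random", "rb");

      /* Reading /dev/random may block until the kernel has gathered
       * enough entropy; for a one-time seed that is the behaviour we
       * want. */
      if (f == NULL || fread(seed, 1, sizeof seed, f) != sizeof seed) {
          perror("/dev/random");
          return 1;
      }
      fclose(f);

      /* prng_init() is hypothetical: hand the seed to whatever
       * user-space PRNG the application uses, and never touch the
       * kernel pool again. */
      /* prng_init(seed, sizeof seed); */

      return 0;
  }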

> An important design goal of /dev/random is that it is always available;
> note there is no CONFIG_RANDOM option, even with CONFIG_EMBEDDED set.
> This requires that the code be kept small.  Additional features
> conflict with this goal.
>
> Some people have suggested a /dev/frandom (fast random) in the kernel
> for the application you're talking about, but AFAIK the question "why
> should it be in the kernel" has never been adequately answered.

So why does the kernel implement /dev/urandom?  Removing that code
would simplify the kernel further.  Evidently the outcome of the
original design goals no longer meets today's needs.

>>> So libgcrypt's seeding is "ugly and stupid" and is in desperate need of
>>> fixing.  Reading more bits and distilling them down only works on physical
>>> entropy sources, and /dev/urandom has already done that.  Doing it again
>>> is a complete and total waste of time.  If there are only 64 bits of
>>> entropy in /dev/urandom, then it doesn't matter whether you read 8 bytes,
>>> 8 kilobytes, or 8 gigabytes; there are only 2^64 possible outputs.
>>>
>>> Like openssl, it should read 256 bits from /dev/urandom and stop.
>>> There is zero benefit to asking for more.
>>
>
>> I'm concerned that the approach could be weak -- the quality of data
>> from /dev/urandom can be low if your system was just rebooted, and no
>> entropy has been gathered yet.  This is especially true for embedded
>> systems.  As it happens, GnuTLS could be involved in sending email early
>> in the boot process, so this is a practical scenario.
>
> Again, we appear to be talking past each other.  What part of this is
> weak?  libgcrypt already seeds itself from /dev/urandom.

Actually, Libgcrypt reads data from both /dev/random and /dev/urandom.
The former is used when GCRY_VERY_STRONG_RANDOM is requested.
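
For illustration, callers select the source through the level argument
to gcry_randomize(); something like this, with initialization reduced
to the bare minimum:

  #include <gcrypt.h>

  int main(void)
  {
      unsigned char key[32];

      gcry_check_version(NULL);   /* required libgcrypt initialization */

      /* As described above, this level makes libgcrypt draw on
       * /dev/random; lesser levels are satisfied from /dev/urandom
       * and the internal pool. */
      gcry_randomize(key, sizeof key, GCRY_VERY_STRONG_RANDOM);

      return 0;
  }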

> At any given
> time (such as at boot time), one of two things is true:
> 1) The kernel contains sufficient entropy (again, I'm not talking about
>    its fallible estimates, but the unknowable truth) to satisfy the
>    desired K-bit security level, or
> 2) It does not. 
>
> As long as you are not willing to wait, and thus are using /dev/urandom,
> reading more than K bits is pointless.  In case 1, you will get the K bits
> you want.  In case 2, you will get as much entropy as there is to be had.
> Reading more bytes won't get you the tiniest shred of additional entropy.
>
> If you're going to open and read from /dev/urandom, you should stop after
> reading 32 bytes.  There is NEVER a good reason to read more when seeding
> a cryptographic PRNG.
>
> Reading more bytes from /dev/urandom is just loudly advertising one's
> cluelessness; it is exactly as stupid as attaching a huge spoiler and
> racing stripes to a Honda Civic.

I think you should discuss this with the libgcrypt maintainer; I can't
change the libgcrypt code even if you convince me.

If you read the libgcrypt code (cipher/rnd*.c, cipher/random.c), you
will probably understand the motivation for reading more data (even if
you may not agree with it).

>> A seeds file would help here, and has been the suggestion from Werner.
>> If certtool used a libgcrypt seeds file, I believe it would have solved
>> your problem as well.
>
> My problem is that certtool reads a totally unreasonable amount of data
> from /dev/urandom.  It appears that specifying a seed file will change
> libgcrypt's behavior, but I haven't figured my way through the code well
> enough to be able to predict how.

It will reduce the amount of data read, but I'm not convinced that
forcing every application to use a seed file is the best general
solution, so I'm hesitant to make certtool use one.  There are hundreds
of applications using GnuTLS; requiring every one of them to manage a
seeds file is impractical.
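
To illustrate the per-application burden, this is roughly the code each
program would need; the seed file path here is made up for the example:

  #include <gcrypt.h>

  int main(void)
  {
      gcry_check_version(NULL);

      /* Must be configured before the first request for random data;
       * the path is only an example. */
      gcry_control(GCRYCTL_SET_RANDOM_SEED_FILE, "/var/lib/myapp/seed");

      /* ... the application draws random data as usual ... */

      /* Write the updated seed back before exiting, so the next start
       * does not begin from an empty pool. */
      gcry_control(GCRYCTL_UPDATE_RANDOM_SEED_FILE);

      return 0;
  }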

>>> (If you want confirmation, please ask someone you trust.  David Wagner
>>> at Berkeley and Ian Goldberg at Waterloo are both pretty approachable.)
>>
>> I've been in a few discussions with David about /dev/random, for example
>> <http://thread.gmane.org/gmane.comp.encryption.general/11397/focus=11456>,
>> and I haven't noticed that we had any different opinions about this.
>
> Well, he calls /dev/random "blindingly fast" in that thread, which appears
> to differ from your opinion. :-)

It was /dev/urandom, but well, you are right.  On my machine, I get
about 3.9MB/s sustained from /dev/urandom, whereas /dev/zero yields
around 1.4GB/s.  That is slow.  I'm not sure David was aware of this.
The real problem is that reading a lot of data from /dev/urandom
depletes the /dev/random entropy pool, so any process that reads a lot
of data from /dev/urandom will draw complaints from applications that
read from /dev/random.  I would instead consider this a design problem
in the kernel.
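
For anyone who wants to check the numbers on their own machine, a rough
measurement can be done along these lines (the 64MB total and the
buffer size are arbitrary choices):

  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      unsigned char buf[65536];
      size_t total = 0;
      struct timespec t0, t1;
      double secs;
      FILE *f = fopen("/dev/urandom", "rb");

      if (f == NULL)
          return 1;

      clock_gettime(CLOCK_MONOTONIC, &t0);
      while (total < 64u * 1024 * 1024) {   /* read 64MB in all */
          size_t n = fread(buf, 1, sizeof buf, f);
          if (n == 0)
              break;                        /* read error; give up */
          total += n;
      }
      clock_gettime(CLOCK_MONOTONIC, &t1);
      fclose(f);

      secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
      printf("%zu bytes in %.1fs: %.1f MB/s\n",
             total, secs, total / 1048576.0 / secs);
      return 0;
  }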

>>> Fair enough.  Are you saying that you prefer a patch to gnutls rather than
>>> one to libgcrypt?
>>
>> Yes, that could be discussed.  This problem is really in libgcrypt, so
>> the best would be if you were successful in fixing this problem at the
>> root.  Alternatively, work on improving /dev/urandom in Linux so that
>> GnuTLS can read from it directly and use it as the PRNG instead of
>> libgcrypt.  Until any of that materializes, I would certainly review and
>> consider patches to gnutls.
>
> Um, GnuTLS can already read from /dev/urandom directly.  What enhancements
> are required?  The hard thing is maintaining compatibility with
> systems without /dev/random.  For that, you need a whole pile of fragile
> user-space entropy harvesting code (which is what motivated Ted to write
> /dev/random in the first place), and a cryptographic compression function
> to distill the entropy.  Of course, once you have that code written,
> you might as well just seed it once from /dev/urandom and generate all
> the random bytes you need in user-space.

Libgcrypt contains user-space entropy harvesting code.  GnuTLS doesn't
read directly from /dev/urandom; libgcrypt does.  So I think your
efforts in this matter are better directed at the libgcrypt maintainers.

/Simon


