On Sat, 26 Aug 2000, Mark Murray wrote:

> My approach to this (and this is the first time I am stating this in
> such a wide forum) is to provide another device (say /dev/srandom) for
> folk who want to do their own randomness processing. This would provide
> a structure of data including the entropy, the system nanotime and the
> source, so the user can do his own hard work when he needs to, and the
> folk simply needing a stream of "good" random numbers can do so from
> /dev/random.

I think this is insufficient - the OS needs to provide this functionality,
otherwise it won't be used (because no-one else is going to do it). I
think having raw access to the entropy is very useful for determination of
sample source quality so the administrator can tune the in-kernel entropy
weight of the source, but I wouldn't expect such a device to be used in
practice to drive a userland PRNG.
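To make that tuning idea concrete, here is a rough sketch of what an administrator could do with raw records from such a device. The record layout (64-bit nanotime, 8-bit source id, 8-bit sample) is purely hypothetical -- no /dev/srandom exists yet -- and the min-entropy estimator is just one conservative choice:

```python
# Hypothetical /dev/srandom record: nanotime, source id, raw sample.
# This layout is an assumption for illustration, not a real interface.
import struct
from collections import Counter
from math import log2

RECORD = struct.Struct("<QBB")  # 8-byte nanotime, source id, sample

def min_entropy(samples):
    """Conservative min-entropy estimate: -log2(p_max) bits per sample."""
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -log2(p_max)

def estimate_sources(raw):
    """Group raw records by source id and estimate each source's quality,
    so the admin can tune the in-kernel entropy weight per source."""
    per_source = {}
    for off in range(0, len(raw) - RECORD.size + 1, RECORD.size):
        _nanotime, src, sample = RECORD.unpack_from(raw, off)
        per_source.setdefault(src, []).append(sample)
    return {src: min_entropy(s) for src, s in per_source.items()}
```

A stuck source (always the same sample) would score 0 bits/sample; a source uniform over all byte values would score the full 8.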

It basically sounds like a cop-out to throw up your hands and say "well,
if you want such a good PRNG, here's the entropy, you make one" :-)

> > However it is not fair to impose that view on someone.  People can
> > have legitimate reasons to need more entropy.  Another very concrete
> > example is: say someone is using a yarrow-160 (3DES and SHA1)
> > implementation and they want to use an AES cipher with a 256 bit key
> > -- without the /dev/random API, you can't get 256 bit security, with
> > it you can.
> Sooner or later someone is going to come up with a requirement for
> M-bit randomness on Yarrow-N, where M > N. What then?

What then indeed :-)
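The AES-256 point above reduces to a one-line bit count. This toy model (my numbers, not anyone's implementation) just says the effective strength of a key is capped by the state of the generator behind it:

```python
# Illustrative only: an attacker brute-forces the smaller of the two
# spaces -- the key itself, or the PRNG state that produced it.
def effective_strength(key_bits, prng_state_bits):
    """Effective security in bits of a key drawn from a bounded PRNG."""
    return min(key_bits, prng_state_bits)

# AES-256 keyed from Yarrow-160 output: capped at 160 bits.
# AES-256 keyed from a blocking /dev/random that has gathered
# 256 bits of entropy: the full 256.
```

So Yarrow-N caps every consumer at N bits, however large M gets.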

> > | Even if I have a mechanism to wait for a reseed after each output and
> > | reserve that output for me, I get at best R*2^160 bits for R reseeds,
> > | rather than the 2^{R*160} bits I wanted.
> > | 
> > | Note the yarrow-160 API and design doesn't allow me to wait for and
> > | reserve the output of a reseed in a multi-tasking OS -- /dev/random
> > | does.
> Hmm. Most convincing argument I have heard so far. How much of a
> practical difference does that make, though, with ultra-conservative
> entropy estimation (e.g. I am stirring in nanotime(9) but not making
> any randomness estimates from it, so the device is getting some "free"
> entropy)?

I still maintain that as OS developers it's not our place to guess the
limits of the needs our users may have for the device. Since it's not
mathematically impossible to produce a PRNG with no artificial limits on
output strength or "bit concentration", we should try and do it :-)
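The quoted R-reseed argument is easy to put in numbers. If an attacker can verify each 160-bit reseed epoch independently, the work factor is roughly R * 2^160 guesses, not the 2^(160*R) you would get from one monolithic 160*R-bit secret (illustrative arithmetic only, assuming independently attackable epochs):

```python
# Work factors from the quoted argument, for R reseeds of a generator
# with 160 bits of internal state.
def work_per_epoch(reseeds, state_bits=160):
    """Attack each reseed epoch separately: additive cost."""
    return reseeds * 2**state_bits

def work_monolithic(reseeds, state_bits=160):
    """One secret of reseeds * state_bits bits: multiplicative cost."""
    return 2**(state_bits * reseeds)
```

Already at R = 2 the gap is 2 * 2^160 versus 2^320 -- which is why waiting for reseeds cannot substitute for a larger pool.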

> PC's are pretty low-entropy devices; users who need lots of random
> bits (as opposed to a steady supply of random numbers) are arguably
> going to need to go to extraordinary lengths to get them; their
> own statistical analysis is almost certainly going to be required.

I claim this to be untrue: my tests show an ordinary sound card (with no
recording source, at maximum input gain) will provide far more
(high-quality) entropy than Yarrow can make use of under even the most
punishing loads.
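For anyone who wants to try the soundcard trick, the usual recipe is: take the least-significant bit of each raw sample and strip the bias with von Neumann debiasing. The sketch below simulates the card with a deliberately biased source, since an actual /dev/audio read is system-specific -- the sample values and bias here are made up for illustration:

```python
# Soundcard-noise-as-entropy sketch.  Real use would read raw samples
# from the audio device; here a seeded, biased RNG stands in for it.
import random

def lsb_stream(samples):
    """Keep only the least-significant (noisiest) bit of each sample."""
    return [s & 1 for s in samples]

def von_neumann(bits):
    """Debias bit pairs: 01 -> 0, 10 -> 1, discard 00 and 11."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# Simulated noisy 8-bit samples whose LSB is heavily biased toward 1:
rng = random.Random(1)
samples = [rng.randrange(256) | (rng.random() < 0.7) for _ in range(10000)]
whitened = von_neumann(lsb_stream(samples))
```

The output rate drops (biased pairs are discarded), but the surviving bits come out balanced, which is exactly the trade a conservative entropy harvester wants.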


In God we Trust -- all others must submit an X.509 certificate.
    -- Charles Forsythe <[EMAIL PROTECTED]>

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-current" in the body of the message
