On Tue, 11 Sep 2012 13:28:51 +0200
Dag-Erling Smørgrav wrote:

> Doug Barton <[email protected]> writes:
> > 1. Pseudo-randomize the order in which we utilize the files in
> > /var/db/entropy
> 
> There's no need for randomization if we make sure that *all* the data
> written to /dev/random is used, rather than just the first 4096 bytes;
> or that we reduce the amount of data to 4096 bytes before we write it
> so none of it is discarded.  My gut feeling is that compression is
> better than hashing for that purpose,

It's analogous to a passphrase; have you ever heard of a
passphrase being compressed rather than hashed?

The only good reason for compression would be if compressing and then
hashing the result were faster than hashing alone, and that sounds
unlikely.

You all seem to be making very heavy weather of this - all that's needed
is to pass the low-grade stuff through a hash of your choice and then
follow that with the entropy file to fill up the remaining 4k.
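A minimal sketch of that suggestion, assuming a 4096-byte limit on what
/dev/random consumes and using SHA-256 as the "hash of your choice" --
the file contents, function name, and hash choice here are illustrative
assumptions, not the actual rc.d code:

```python
import hashlib

SEED_LIMIT = 4096  # assumption: only the first 4096 bytes written are used


def build_seed(low_grade: bytes, entropy_file: bytes) -> bytes:
    """Hash the low-grade material, then append saved entropy, capped at 4k."""
    digest = hashlib.sha256(low_grade).digest()  # whiten the low-grade input
    seed = digest + entropy_file                 # follow with the entropy file
    return seed[:SEED_LIMIT]                     # anything past 4k is discarded


# Hypothetical example: weak boot-time data plus an oversized entropy file.
seed = build_seed(b"dmesg output, clock samples, ...", b"\x00" * 8192)
```

This way none of the low-grade input is silently dropped: it is all
folded into the digest, and the remaining space up to the 4k limit is
filled from the stored entropy file.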
_______________________________________________
[email protected] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-rc
To unsubscribe, send any mail to "[email protected]"