Doug Barton <[email protected]> writes:
> 1. Pseudo-randomize the order in which we utilize the files in
> /var/db/entropy
There's no need for randomization if we make sure that *all* the data written to /dev/random is used, rather than just the first 4096 bytes; or that we reduce the amount of data to 4096 bytes before we write it, so none of it is discarded. My gut feeling is that compression is better than hashing for that purpose, but at this point I'd be more comfortable if someone with an academic background in either cryptography or statistics (cperciva@?) weighed in.

DES
-- 
Dag-Erling Smørgrav - [email protected]

_______________________________________________
[email protected] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-rc
To unsubscribe, send any mail to "[email protected]"
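[Editor's note: the size-reduction idea discussed above can be sketched roughly as follows. This illustrates the *hashing* alternative rather than the compression DES leans toward; the choice of SHAKE-256 (an extendable-output function that can emit a digest of any requested length), the 4096-byte pool size, and the `condense` helper name are all assumptions for illustration, not anything from the thread.]

```python
# Sketch: condense an arbitrary amount of saved entropy down to the
# 4096 bytes that (per the discussion above) /dev/random actually
# consumes, instead of letting everything past 4096 bytes be discarded.
# SHAKE-256 and the 4096-byte figure are illustrative assumptions.
import glob
import hashlib

POOL_SIZE = 4096  # bytes of input consumed per write, per the thread


def condense(data: bytes, size: int = POOL_SIZE) -> bytes:
    """Hash arbitrary-length input down to exactly `size` bytes with
    SHAKE-256, so every input byte influences the output."""
    return hashlib.shake_256(data).digest(size)


if __name__ == "__main__":
    # Concatenate every saved entropy file, condense the whole lot,
    # and feed the result to /dev/random in a single write.
    blobs = b"".join(
        open(path, "rb").read()
        for path in sorted(glob.glob("/var/db/entropy/*"))
    )
    if blobs:
        with open("/dev/random", "wb") as dev:
            dev.write(condense(blobs))
```

One design note: unlike compression, whose output length depends on the input, an extendable-output hash always yields exactly the requested number of bytes, which is why it is a natural fit when the kernel's read size is fixed.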
