On 2012-Sep-05 02:12:48 +0100, RW <[email protected]> wrote:
>All of the low-grade entropy should go through sha256.

Overall, I like the idea of feeding the high-volume, mixed-quality
"entropy" through SHA-256 or similar, along the lines of the sketch
below.
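As a rough user-space illustration only (OpenSSL's SHA-256 is used
purely for convenience, lowgrade_absorb()/lowgrade_emit() are made-up
names, and in the kernel this would use whatever SHA-256
implementation is available there):

#include <stddef.h>
#include <stdint.h>
#include <openssl/sha.h>

static SHA256_CTX lowgrade_ctx;		/* SHA256_Init() once at startup */

/*
 * Absorb one low-grade sample into the running hash.  Attacker-
 * supplied input (e.g. NULs written to /dev/random) costs a hash
 * update but cannot reduce the entropy already absorbed.
 */
static void
lowgrade_absorb(const void *buf, size_t len)
{
	SHA256_Update(&lowgrade_ctx, buf, len);
}

/*
 * Periodically extract a 32-byte digest to feed into the pool, then
 * chain the digest back in so state carries across extractions.
 */
static void
lowgrade_emit(uint8_t digest[SHA256_DIGEST_LENGTH])
{
	SHA256_Final(digest, &lowgrade_ctx);
	SHA256_Init(&lowgrade_ctx);
	SHA256_Update(&lowgrade_ctx, digest, SHA256_DIGEST_LENGTH);
}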
>Anything written into /dev/random is passed by random_yarrow_write() 16
>Bytes at time into random_harvest_internal() which copies it into a
>buffer and queues it up. If there are 256 buffers queued
>random_harvest_internal() simply returns without doing anything.

This would seem to open up a denial-of-entropy attack on random(4):
all entropy sources feed into Yarrow via random_harvest_internal(),
which queues the input into a single queue, harvestfifo.  When this
queue is full, further input is discarded.  If I run
"dd if=/dev/zero of=/dev/random" then harvestfifo will be kept full
of NULs, and other entropy events (particularly those from within
the kernel) will be discarded.  There would still be a small amount
of entropy from the get_cyclecount() calls, but this is minimal.
The first sketch below shows the shape of the problem.

Is it worth splitting harvestfifo into multiple queues to prevent
this?  At least a separate queue for RANDOM_WRITE, and potentially
separate queues for each entropy source; the second sketch below
shows what I have in mind.
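First, roughly how I read the current behaviour (a simplified
user-space sketch, not the actual sys/dev/random code; harvest_sketch
is a made-up name, and the 256/16 sizes follow the description above):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define HARVEST_MAX_QUEUED	256	/* the 256-buffer cap above */
#define HARVEST_ENTRY_SIZE	16	/* writes arrive 16 bytes at a time */

struct harvest_entry {
	uint8_t	data[HARVEST_ENTRY_SIZE];
	size_t	len;
	int	source;			/* which entropy source queued this */
};

static struct harvest_entry harvestfifo[HARVEST_MAX_QUEUED];
static size_t nqueued;

/*
 * Once the shared queue is full, every later event is dropped,
 * regardless of which source it came from.
 */
static void
harvest_sketch(const void *buf, size_t len, int source)
{
	struct harvest_entry *e;

	if (nqueued >= HARVEST_MAX_QUEUED)
		return;			/* discarded: this is the attack window */

	e = &harvestfifo[nqueued++];
	e->len = (len < HARVEST_ENTRY_SIZE) ? len : HARVEST_ENTRY_SIZE;
	memcpy(e->data, buf, e->len);
	e->source = source;
}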
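And the split I am suggesting (again illustrative, reusing struct
harvest_entry and HARVEST_ENTRY_SIZE from the previous sketch; the
source names and the per-source cap are made up):

enum harvest_source {
	HS_WRITE,			/* RANDOM_WRITE: userland writes */
	HS_INTERRUPT,			/* in-kernel interrupt timing etc. */
	HS_NET,
	HS_NSOURCES
};

#define PER_SOURCE_MAX	64		/* arbitrary; total memory stays bounded */

struct source_fifo {
	struct harvest_entry	entries[PER_SOURCE_MAX];
	size_t			nqueued;
};

static struct source_fifo fifos[HS_NSOURCES];

/*
 * A flood on one source can now only fill that source's own queue;
 * events from the other sources are still accepted.
 */
static void
harvest_split_sketch(const void *buf, size_t len, enum harvest_source src)
{
	struct source_fifo *f = &fifos[src];
	struct harvest_entry *e;

	if (f->nqueued >= PER_SOURCE_MAX)
		return;			/* only this source's events dropped */

	e = &f->entries[f->nqueued++];
	e->len = (len < HARVEST_ENTRY_SIZE) ? len : HARVEST_ENTRY_SIZE;
	memcpy(e->data, buf, e->len);
	e->source = (int)src;
}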
--
Peter Jeremy
