I'm working on a daemon that collects timer randomness, distills it
somewhat, and pushes the results into /dev/random.

My code produces the random material in 32-bit chunks. The current
version sends it to /dev/random 32 bits at a time, doing a write() and
an entropy-update ioctl() for each chunk. Obviously I could add some
buffering and write fewer and larger chunks. My questions are whether
that is worth doing and, if so, what the optimum write() size is
likely to be.
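
For what it's worth, below is a rough sketch of the batched version I
have in mind, assuming the entropy-update ioctl in question is
RNDADDENTROPY (which mixes the data in and credits the entropy in one
call, so no separate write() is needed). The batch size, the entropy
estimate, and collect_chunk() are placeholders, not my real daemon
code:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/random.h>       /* struct rand_pool_info, RNDADDENTROPY */

#define CHUNKS 64               /* 64 x 32 bits = 256 bytes per batch */

/* Placeholder for the real timer sampler: just the low bits of
 * CLOCK_MONOTONIC nanoseconds, so the example links and runs. */
static uint32_t collect_chunk(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint32_t)ts.tv_nsec;
}

int main(void)
{
    int fd = open("/dev/random", O_WRONLY);
    if (fd < 0) {
        perror("open /dev/random");
        return 1;
    }

    /* rand_pool_info ends in a flexible array member, so allocate the
     * header and the payload buffer together. */
    struct rand_pool_info *rpi =
        malloc(sizeof(*rpi) + CHUNKS * sizeof(uint32_t));
    if (!rpi)
        return 1;

    for (int batch = 0; batch < 4; batch++) {   /* a few batches for the demo */
        for (int i = 0; i < CHUNKS; i++)
            rpi->buf[i] = collect_chunk();

        rpi->buf_size = CHUNKS * sizeof(uint32_t);
        /* entropy_count is in bits; claim a conservative half bit of
         * entropy per bit of payload -- tune this to the real source. */
        rpi->entropy_count = rpi->buf_size * 8 / 2;

        /* One ioctl mixes in 256 bytes and credits the entropy,
         * instead of 64 separate write()+ioctl() pairs. */
        if (ioctl(fd, RNDADDENTROPY, rpi) < 0) {
            perror("RNDADDENTROPY");
            break;
        }
    }

    free(rpi);
    close(fd);
    return 0;
}

(RNDADDENTROPY needs CAP_SYS_ADMIN, which the daemon has anyway, and
the half-bit-per-bit credit above is deliberately pessimistic.)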

I am not overly concerned about overheads on my side of the interface,
unless they are quite large. My concern is whether doing many small
writes wastes kernel resources.