> I developed a different approach, which I call Linux Random Number Generator
> (LRNG) to collect entropy within the Linux kernel. The main improvements
> compared to the legacy /dev/random is to provide sufficient entropy during
> boot time as well as in virtual environments and when using SSDs.

After reading the paper, it is not clear to me how this goal is
achieved. As far as I can see, no new sources of entropy are
introduced; in fact, a point is made of using only interrupt timings,
with the argument that these effectively subsume the other events.
Why does this design make more entropy available during boot and with
solid-state storage?
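
To make the SSD part of the question concrete, this is the kind of quick
userspace check I have in mind (entirely my own sketch, not from the
paper): time a few thousand O_DIRECT reads from an SSD and look at the
low bits of the completion-time deltas, which is roughly the per-event
jitter an interrupt-timing collector would see. What I cannot tell from
the paper is whether that jitter carries more usable entropy than what
the legacy driver already extracts from block-device events.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

int main(void)
{
    /* O_DIRECT so each read actually goes to the device. Adjust the
     * device node; reading it needs the right permissions. */
    int fd = open("/dev/sda", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, 4096, 4096)) return 1;

    unsigned counts[8] = { 0 };
    uint64_t prev = now_ns();
    for (int i = 0; i < 4096; i++) {
        if (pread(fd, buf, 4096, (off_t)(random() % 1024) * 4096) < 0)
            break;
        uint64_t t = now_ns();
        counts[(t - prev) & 7]++;   /* low 3 bits of the delta */
        prev = t;
    }
    for (int b = 0; b < 8; b++)
        printf("delta & 7 == %d: %u\n", b, counts[b]);

    free(buf);
    close(fd);
    return 0;
}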

I'm also having trouble telling at a glance the exact blocking behavior
of the proposed interfaces. It seems that /dev/random and getrandom()
will block when the estimated entropy in the "seed buffer" is below some
threshold. But the numbers mentioned are 32, 112, and 256 bits; which
threshold applies, and under what circumstances? Regarding /dev/urandom,
the paper says that periodic reseeds are required. Are those reseeds
subject to blocking?
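
Put in terms of the syscall interface (glibc's <sys/random.h> wrapper),
this is what I am trying to pin down; the mapping of the flags to the
proposed behavior in the comments is my guess, which is exactly what I
would like confirmed:

#include <errno.h>
#include <stdio.h>
#include <sys/random.h>

int main(void)
{
    unsigned char buf[32];
    ssize_t n;

    /* Default flags: my reading is that this blocks until the seed
     * buffer's estimated entropy reaches the threshold -- but is that
     * threshold 32, 112, or 256 bits, and does it apply only once at
     * boot or on every call? */
    n = getrandom(buf, sizeof buf, 0);
    if (n < 0)
        perror("getrandom");

    /* GRND_RANDOM: presumably the proposed /dev/random semantics.
     * With GRND_NONBLOCK we get EAGAIN instead of blocking, which at
     * least makes the "would block" condition observable. */
    n = getrandom(buf, sizeof buf, GRND_RANDOM | GRND_NONBLOCK);
    if (n < 0 && errno == EAGAIN)
        printf("below threshold: /dev/random would block here\n");
    else if (n >= 0)
        printf("got %zd bytes without blocking\n", n);

    return 0;
}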


-SMH
_______________________________________________
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography
