Hi Stephan, thanks for your reply!

Stephan Mueller <smuel...@chronox.de> on Fri, Apr 22 2016:

>> > The main improvements compared to the legacy /dev/random is to
>> > provide sufficient entropy during boot time as well as in virtual
>> > environments and when using SSDs.
>>
>> After reading the paper, it is not clear to me how this goal is
>> achieved.
>
> May I ask you to direct your attention to section 1.1. The legacy
> /dev/random has three noise sources: block devices, HID and interrupts.
> With those noise sources, only the block device and HID noise sources
> are interpreted to collect entropy between zero to 11 bits per event.
> The interrupt noise source is credited one bit of entropy per 64
> received interrupts (or the expiry of 1 second, whatever comes later).
I see, thanks. But then it seems that your main contribution is to change
the weights in the entropy estimation...

> I interpret each interrupt to have about 0.9 bits of entropy. And I
> do not specifically look for block device and HID events.
>
> As during boot, hundreds of interrupts are generated, and I have the
> valuation of about 0.9 bits of entropy per interrupt event, the LRNG
> will collect entropy much faster.

No, as you explain it, the LRNG will not collect entropy faster; it will
increase its counter faster. Your estimation bounds are (0.9, 0, 0)
compared to Linux with (0.015, 11, 11) or something. A quick
back-of-the-envelope comparison of the two interrupt crediting rules is
at the end of this mail.

My criticism now would be to ask what this has to do with the rest of the
design. Why not just argue for an adjustment of the current kernel's
estimator? In addition, I would be interested in a more fine-grained
analysis of the few hundred interrupts that you mention happen during
early boot. Which sources typically produce these?

> /dev/random gets its data from the primary DRBG. The primary DRBG is
> designed to only release as many bytes as it was seeded with entropy.
> Thus, if your noise sources can only deliver, say, 16 bytes of entropy,
> a read request will receive those 16 bytes. Then, the caller is blocked.
> If new entropy comes in, the caller is woken up when reaching the wakeup
> threshold. This has the same logic as the legacy /dev/random.
>
> getrandom(NONBLOCK) will block until the secondary DRBG is fully seeded
> during initialization (i.e. 256 bits when using the suggested DRBG types
> which have 256 bits of security strength). Afterwards, it operates like
> /dev/urandom.
> [...]
>
> /dev/urandom will not block as it is the case with the legacy
> /dev/urandom.

Thanks for the clarification.

-SMH
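
P.S. Here is the back-of-the-envelope comparison mentioned above, just to
make the crediting difference concrete. The two rates (1/64 bit per
interrupt for the legacy code, 0.9 bits per interrupt for the LRNG) are
taken from this thread; everything else, including the interrupt counts,
is made up for illustration and is not code from either implementation:

/*
 * Illustrative only: compares how many bits of entropy the same stream
 * of interrupts is credited under the legacy rule (1 bit per 64
 * interrupts) and under a flat 0.9 bits per interrupt.
 */
#include <stdio.h>

#define LEGACY_BITS_PER_IRQ	(1.0 / 64.0)	/* legacy /dev/random */
#define LRNG_BITS_PER_IRQ	0.9		/* figure from your mail */

int main(void)
{
	const unsigned long irqs[] = { 64, 100, 256, 500, 1000 };
	size_t i;

	printf("%10s %15s %15s\n",
	       "interrupts", "legacy credit", "per-IRQ credit");
	for (i = 0; i < sizeof(irqs) / sizeof(irqs[0]); i++)
		printf("%10lu %15.2f %15.2f\n", irqs[i],
		       irqs[i] * LEGACY_BITS_PER_IRQ,
		       irqs[i] * LRNG_BITS_PER_IRQ);

	/*
	 * The ratio is a constant ~57x: the same physical events are just
	 * credited differently, the events themselves do not carry more
	 * entropy.
	 */
	return 0;
}

With 500 early-boot interrupts, the legacy rule credits about 8 bits while
a 0.9-bits-per-interrupt rule credits 450, which is exactly the counter
effect I am questioning above.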
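
And, mostly for my own understanding of the /dev/random semantics you
describe, here is a similarly rough single-threaded model of the
accounting as I read it: hand out at most as many bytes as entropy was
credited, block the caller, wake it once the wakeup threshold is reached
again. The struct, the function names and the 8-byte threshold are all
invented for this sketch; this is not the LRNG code:

#include <stdio.h>

#define WAKEUP_THRESHOLD_BYTES	8	/* invented value */

struct primary_drbg_model {
	size_t entropy_bytes;	/* entropy credited to the primary DRBG */
};

/*
 * Model of a blocking /dev/random read: hand out at most as many bytes
 * as entropy has been credited, debit that amount, and report whether
 * the caller would now block.
 */
static size_t model_random_read(struct primary_drbg_model *p, size_t want,
				int *would_block)
{
	size_t give = want < p->entropy_bytes ? want : p->entropy_bytes;

	p->entropy_bytes -= give;
	*would_block = (give < want);
	return give;
}

/*
 * New entropy arriving from the noise sources; a blocked reader is woken
 * once the credited amount reaches the wakeup threshold again.
 */
static int model_add_entropy(struct primary_drbg_model *p, size_t bytes)
{
	p->entropy_bytes += bytes;
	return p->entropy_bytes >= WAKEUP_THRESHOLD_BYTES;
}

int main(void)
{
	struct primary_drbg_model p = { .entropy_bytes = 16 };
	int would_block;
	size_t got;

	/* Your example: 16 bytes of entropy available, a larger read. */
	got = model_random_read(&p, 32, &would_block);
	printf("read 32: got %zu bytes, caller %s\n", got,
	       would_block ? "blocks" : "returns");

	/* Entropy trickles in until the wakeup threshold is reached. */
	if (model_add_entropy(&p, WAKEUP_THRESHOLD_BYTES))
		printf("wakeup threshold reached, blocked reader is woken\n");

	return 0;
}

If that matches your design, then my remaining question is only how fast
the credited entropy grows, which brings it back to the crediting
discussion above.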