On Monday, 25 April 2016 at 15:44:04, Sven M. Hallberg wrote:

Hi Sven,

> Hi Stephan, thanks for your reply!
> 
> Stephan Mueller <smuel...@chronox.de> on Fri, Apr 22 2016:
> >> > The main improvement compared to the legacy /dev/random is to provide
> >> > sufficient entropy during boot time as well as in virtual environments
> >> > and when using SSDs.
> >> 
> >> After reading the paper, it is not clear to me how this goal is
> >> achieved.
> > 
> > May I ask you to direct your attention to section 1.1. The legacy
> > /dev/random has three noise sources: block devices, HID and interrupts.
> > Of those, only the block device and HID noise sources are credited with
> > between zero and 11 bits of entropy per event. The interrupt noise
> > source is credited one bit of entropy per 64 received interrupts (or
> > the expiry of 1 second, whichever comes later).
> 
> I see, thanks. But so it seems your main contribution is to change the
> weights on the entropy estimation...

This goes in the right direction, but does not quite hit the nail on the head.
> 
> > I interpret each interrupt to have about 0.9 bits of entropy, and I do
> > not specifically look for block device and HID events.
> > 
> > As hundreds of interrupts are generated during boot, and each interrupt
> > event is valued at about 0.9 bits of entropy, the LRNG will collect
> > entropy much faster.
> 
> No, as you explain, it will not collect it faster; it will increase its
> counter faster. Your estimation bounds are (0.9, 0, 0) compared to Linux

This is not correct. I only credit about 0.9 bits per interrupt; there are no 
separate block device or HID noise sources, so the two trailing zeros do not 
apply.
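
Just to put rough numbers on it (taking, say, 256 early-boot interrupts as an 
illustrative figure): at 0.9 bits per interrupt the LRNG credits about 230 bits 
of entropy for those events, whereas the legacy crediting of 1/64th bit per 
interrupt yields only 4 bits.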

> with (0.015, 11, 11) or something.
> 
> My criticism now would be the question what this has to do with the rest
> of the design. Why not just argue for an adjustment of the current
> kernel's estimator?

You cannot change it without a design change to the current code; that is the 
crux: there is a correlation between the time stamp processed in 
add_interrupt_randomness and the one processed in add_disk/input_randomness. To 
wash that correlation away, the conservative estimate of 1/64th bit per 
interrupt is applied to the interrupt timings.
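
To illustrate the double counting this avoids, here is a toy model in 
user-space C (my own illustration, not the actual drivers/char/random.c code; 
all helper names and the 5-cycle delay are made up for the example):

/*
 * Toy model: one physical event (e.g. a disk completion) delivers its
 * time stamp to both the interrupt path and the disk/HID path.  If the
 * disk/HID path may credit up to 11 bits while the interrupt path also
 * credited full entropy, the same timing information would be counted
 * twice -- hence the legacy 1/64th bit per interrupt.
 */
#include <stdio.h>

static double pool_entropy_bits;

static void credit_entropy_bits(double bits)
{
	pool_entropy_bits += bits;
}

/* interrupt path: time stamp mixed in, credited very conservatively */
static void add_interrupt_randomness_sketch(unsigned long cycles)
{
	(void)cycles;                  /* stamp would be mixed into the pool */
	credit_entropy_bits(1.0 / 64); /* 1 bit per 64 interrupts */
}

/* disk/HID path: the *same* event is stamped again a few cycles later */
static void add_disk_randomness_sketch(unsigned long cycles)
{
	(void)cycles;
	credit_entropy_bits(11.0);     /* upper bound of the legacy estimator */
}

int main(void)
{
	unsigned long t = 1000;

	add_interrupt_randomness_sketch(t);
	add_disk_randomness_sketch(t + 5); /* handler ran ~5 cycles later */

	printf("credited for one event: %.3f bits\n", pool_entropy_bits);
	printf("yet the second stamp differs from the first by a handful of\n"
	       "cycles, so it cannot contribute that many independent bits\n");
	return 0;
}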

Thus, I changed the approach of collecting entropy right from the start.

Furthermore, I replaced the SHA-1 based RNG (a plain C implementation) with an 
SP800-90A DRBG, which can use hardware acceleration -- see the performance 
measurements in section 3.4.7.
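
For illustration, obtaining such a DRBG through the kernel crypto API looks 
roughly like this (a minimal sketch, not the LRNG patch itself; 
drbg_nopr_ctr_aes256 is just one of the registered DRBG variants, and the seed 
handling is simplified):

#include <crypto/rng.h>
#include <linux/err.h>

/* Allocate an SP800-90A AES-256 CTR DRBG, seed it and pull some output.
 * The underlying AES cipher is resolved through the crypto API and can
 * therefore be backed by an accelerated implementation such as AES-NI.
 */
static int drbg_sketch(u8 *out, unsigned int outlen,
		       const u8 *seed, unsigned int seedlen)
{
	struct crypto_rng *drbg;
	int ret;

	drbg = crypto_alloc_rng("drbg_nopr_ctr_aes256", 0, 0);
	if (IS_ERR(drbg))
		return PTR_ERR(drbg);

	/* seed material would come from the entropy pool */
	ret = crypto_rng_reset(drbg, seed, seedlen);
	if (!ret)
		ret = crypto_rng_get_bytes(drbg, out, outlen);

	crypto_free_rng(drbg);
	return ret;
}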
> 
> In addition I would be interested in a more fine-grained analysis of the
> few hundred interrupts that you mention happen during early boot. Which
> sources typically produce these?

I performed a worst-case analysis as outlined in section 3.3 of my 
documentation (a single interrupt, controlled externally, without too much 
interference from other entities).

I provided the test tools so that you can re-test it with whatever interrupt 
load you like.
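
If you want to run your own numbers, the usual most-common-value min-entropy 
estimate is easy to apply to such a trace. The following stand-alone sketch is 
not my actual test tool, just an illustration of the idea: it reads one 
time-stamp delta per line on stdin, folds it to the low 8 bits and prints 
H_min = -log2(p_max) (compile with -lm):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* histogram over the low byte of each time-stamp delta */
	unsigned long count[256] = { 0 }, total = 0, max = 0;
	unsigned long delta;

	while (scanf("%lu", &delta) == 1) {
		count[delta & 0xff]++;
		total++;
	}

	for (int i = 0; i < 256; i++)
		if (count[i] > max)
			max = count[i];

	if (!total)
		return EXIT_FAILURE;

	/* min-entropy of the folded deltas: -log2 of the most likely value */
	printf("events: %lu, min-entropy: %.3f bits per event\n",
	       total, -log2((double)max / (double)total));
	return EXIT_SUCCESS;
}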

Ciao
Stephan
_______________________________________________
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography
