On Tue, 21 May 2013 12:04:13 +0200 Andy Polyakov <[email protected]> wrote:
Hi Andy,

> Hi,
>
> > [1] patch at
> > http://www.chronox.de/jent/jitterentropy-20130516.tar.bz2
> >
> > To overcome the insufficient amount of entropy present (at least)
> > on a Linux box, I implemented the CPU Jitter random number generator
> > available at http://www.chronox.de/ . The heart of the RNG is about
> > 30 lines of easy to read code. The readme in the main directory
> > explains the different code files. The new version now implements
> > the RNG as an OpenSSL engine as well as provides a patch for
> > RAND_poll.
> >
> > The documentation of the CPU Jitter random number generator
> > (http://www.chronox.de/jent/doc/index.html and PDF at
> > http://www.chronox.de/jent/doc/CPU-Jitter-NPTRNG.pdf -- the graphs
> > and pictures are better in PDF) offers a full analysis of:
> >
> > - the root cause of entropy
>
> Consider following snippet:
>
> static inline unsigned int rdtsc()
> { int eax;
>   asm volatile("rdtsc":"=a"(eax)::"edx");
>   return eax;

Unfortunately I am not an assembler wizard, but I guess you only
return part of the rdtsc result?

> }
> main()
> { int i;
>   unsigned int diff1, diff0 = rdtsc() - rdtsc();
>
>   for (i=0;i<10000000;i++)
>     if ((diff1 = rdtsc() - rdtsc()) != diff0)
>       printf("%u\n",-diff0), diff0=diff1;
> }
>
> How many lines do you think it would print? If I compile it with
> optimization on, my Sandy Bridge system prints ... ~100 lines.

Without optimization:

$ gcc -o test test.c
$ ./test > test.out
$ cat test.out | wc
 128886  128886  386814

Test with optimizations:

$ gcc -O2 -o test test.c
$ ./test > test.out
$ cat test.out | wc
 270876  270876  812741

So, where is the problem? This is performed on:

$ cat /proc/cpuinfo | grep CPU
model name	: Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz

> Hundred out of 10 million tries. Of the hundred half of values is 28
> and half is thousands and up, obviously timer interrupts. Thousands
> and up is something you suggest to disregard, so all we have is
> single value of 28.
> What "miniscule variations" of which "instruction" are we talking
> about?

All values I see fluctuate according to the values in the graph in
chapter 2. By any chance, did you disable your TSC (you can do that
on a per-process basis)?

Bottom line, with the code you suggest, I still see the same
fluctuations I used to draw the graphs in chapter 2. Note, this is
just a visual inspection of the values I see in test.out.

> What I'm trying to say is that I can't see that you managed to
> actually formulate what is "the root cause of entropy". "CPU execution
> time jitter" does not describe it. I'd argue that variations originate
> from interactions with memory subsystem.

Very interesting that you see different behavior on your system. All
tests I have done so far on different CPUs show the expected results.
Can you tell me more about your system? Can you please execute
jent_entropy_init() all by itself?

> You use call to function and
> on x86 it involves writing return address to stack and fetching it at
> return [as well as reading system timer value from memory in user-land
> case involves memory reference]. What I'm also trying to say is that
> the phenomena has to be platform-specific [as there are platforms
> where call to subroutine does not involve reference to memory]. Even
> with [plain] references to memory there is no guarantee that samples
> won't form regular pattern on some platform or under some
> circumstances such as really idle system and everything is in cache.
> On the contrary, it was observed to form regular pattern. The fact
> that it allegedly didn't on *your* system is not sufficient. One
> should also keep in mind that on some platforms timer counter is fed
> with alternative frequency and any variations that are less than
> former simply can't be measured. Example of such platform is PPC,
> where counter runs at frequencies several times lower than processor
> operating frequency. One of extreme cases is factor of 81 on
> PowerBook G4.
> Equivalent of above program for PPC prints sequence of
> 0 and 1 (and occasional timer interrupts), where 1s indicate
> transition to next counter value.

What your system shows implies that the root cause is not present
there. Hence, the code requires callers to execute jent_entropy_init()
first and to continue only when that function returns without an
error. That function checks whether the underlying system exhibits
the expected behavior.

> If you ought to capture variations in interaction with memory
> subsystem, you ought to ensure that there are memory references
> between timer readings. References preferably should be non-trivial,
> as slow as possible and involve as many components as possible, yet
> have as little OS dependencies as possible. This is what
> http://www.openssl.org/~appro/OPENSSL_instrument_bus/ is about. See
> even http://marc.info/?t=132655907800004&r=1&w=2.

Observing the memory accesses is not the intention.

Thanks
Stephan
--
| Cui bono? |
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       [email protected]
Automated List Manager                           [email protected]
