Hi,

> [1] patch at http://www.chronox.de/jent/jitterentropy-20130516.tar.bz2
> 
> To overcome the insufficient amount of entropy present (at least) on a
> Linux box, I implemented the CPU Jitter random number generator
> available at http://www.chronox.de/ . The heart of the RNG is about 30
> lines of easy to read code. The readme in the main directory explains
> the different code files. The new version now implements the RNG as an
> OpenSSL engine as well as provides a patch for RAND_poll.
> 
> The documentation of the CPU Jitter random number generator
> (http://www.chronox.de/jent/doc/index.html and PDF at
> http://www.chronox.de/jent/doc/CPU-Jitter-NPTRNG.pdf -- the graphs and
> pictures are better in PDF) offers a full analysis of:
> 
> - the root cause of entropy

Consider the following snippet:

#include <stdio.h>

static inline unsigned int rdtsc(void)
{ unsigned int eax;
        /* rdtsc writes edx:eax; we keep the low word and clobber edx */
        asm volatile("rdtsc":"=a"(eax)::"edx");
  return eax;
}
int main(void)
{ int i;
  unsigned int diff1, diff0 = rdtsc() - rdtsc();

     for (i=0;i<10000000;i++)
        if ((diff1 = rdtsc() - rdtsc()) != diff0)
                printf("%u\n",-diff0), diff0=diff1;
  return 0;
}

How many lines do you think it would print? If I compile it with
optimization on, my Sandy Bridge system prints ... ~100 lines. A hundred
out of 10 million tries. Of that hundred, half of the values are 28 and
half are in the thousands and up, obviously timer interrupts. Thousands
and up is something you suggest we disregard, so all we have is the
single value of 28. What "miniscule variations" of which "instruction"
are we talking about? What I'm trying to say is that I can't see that
you managed to actually formulate what "the root cause of entropy" is.
"CPU execution time jitter" does not describe it. I'd argue that the
variations originate from interactions with the memory subsystem. You
use a call to a function, and on x86 that involves writing the return
address to the stack and fetching it back at return [and in the
user-land case, reading the system timer value also involves a memory
reference]. What I'm also trying to say is that the phenomenon has to
be platform-specific [as there are platforms where a call to a
subroutine does not involve a reference to memory]. Even with [plain]
references to memory there is no guarantee that samples won't form a
regular pattern on some platform or under some circumstances, such as a
really idle system with everything in cache. On the contrary, it has
been observed to form a regular pattern. The fact that it allegedly
didn't on *your* system is not sufficient.

One should also keep in mind that on some platforms the timer counter
is fed with an alternative frequency, and any variations shorter than
one counter tick simply can't be measured. An example of such a
platform is PPC, where the counter runs at a frequency several times
lower than the processor operating frequency. One of the extreme cases
is a factor of 81 on a PowerBook G4. The equivalent of the above
program for PPC prints a sequence of 0s and 1s (and occasional timer
interrupts), where the 1s indicate a transition to the next counter
value.

If you want to capture variations in interaction with the memory
subsystem, you have to ensure that there are memory references between
the timer readings. The references should preferably be non-trivial, as
slow as possible, and involve as many components as possible, yet have
as few OS dependencies as possible. This is what
http://www.openssl.org/~appro/OPENSSL_instrument_bus/ is about. See
also http://marc.info/?t=132655907800004&r=1&w=2.
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       [email protected]
Automated List Manager                           [email protected]
