- the root cause of entropy
Consider the following snippet:

#include <stdio.h>

static inline unsigned int rdtsc()
{       unsigned int eax;
        asm volatile("rdtsc" : "=a"(eax) :: "edx");
        return eax;
}

Unfortunately I am not an assembler wizard, but I guess you only try to
return part of the rdtsc result?
int main()
{       int i;
        unsigned int diff1, diff0 = rdtsc() - rdtsc();

        for (i = 0; i < 10000000; i++)
                if ((diff1 = rdtsc() - rdtsc()) != diff0)
                        printf("%u\n", -diff0), diff0 = diff1;
        return 0;
}

How many lines do you think it would print? If I compile it with
optimization on, my Sandy Bridge system prints ... ~100 lines. A hundred

Without optimization:

$ gcc -o test test.c
$ ./test > test.out
$ cat test.out | wc
 128886  128886  386814

Test with optimizations:

$ gcc -O2 -o test test.c
$ ./test > test.out
$ cat test.out | wc
 270876  270876  812741

So, where is the problem?

Once a thread is scheduled for execution, it effectively gets the execution core for exclusive use; on an idle system, for a whole timer quantum. How many instructions would it execute then? On a 3 GHz processor with a 250 Hz system timer (as on my system) it has 12,000,000, twelve million, clock cycles to spend. Whatever variations are caused by TLB and cache misses or branch misprediction are all settled within the first several hundred cycles, i.e. in less than 0.001% of the time. After that, execution of pure computational code is very deterministic. Differences between rdtsc readings don't have to be constant; depending on how instructions interact with the pipeline they can form a complex yet periodic structure, which bears no entropy.

I could have emphasized "pure computational", but it doesn't seem to concern you. The assertion seems to be that *any* instruction sequence is subject to timing variations. Even if normal memory references are involved, one should remember that the cache's purpose is to amortize and effectively mask variations in references to external memory.

Yes, you can talk about hardware interrupts and shared caches, but then it would mean that you're *dependent* on the system being busy with other tasks. And then you can't put a value on the entropy without knowing the momentary load. If you don't know the load, then you have to assume the worst, i.e. that the system is completely idle and the code executes for a whole timer quantum without interruption (go to the beginning of the paragraph and reiterate on how [un]predictable the scheduler behaviour and the return from interrupt would be).

out of 10 million tries. Of that hundred, half of the values are 28 and
half are thousands and up, obviously timer interrupts. Thousands and up
is something you suggest to disregard, so all we have is a single value
of 28. What "minuscule variations" of which "instruction" are we talking

All the values I see fluctuate in line with the graph in chapter 2.

By any chance, did you disable your TSC (you could do that on a
per-process basis)?

Bottom line: with the code you suggest, I still see the same
fluctuations I used to draw the graphs in chapter 2. Note that this is
just a visual inspection of the values in test.out.

about? What I'm trying to say is that I can't see that you have managed
to actually formulate what "the root cause of entropy" is. "CPU
execution time jitter" does not describe it. I'd argue that variations originate

Very interesting that you have a different reaction on your system.

You can't judge *all* systems by yours, so why does it come as a surprise? What's likely to be "special" [and relevant in this context] about my system is that I disable TurboBoost and HyperThreading. If the TSC were disabled, the executable wouldn't work at all, or the most common difference between readings wouldn't be 28.

All tests I did so far on different CPUs show the expected results.

Can you tell me more about your system? Can you please execute
jent_entropy_init() all by itself?

jent_entropy_init() returns success on my system. But does it manage to detect my system's specifics? No, it simply assumes that it's running on a system like yours. This is the problem.

Everything your system shows implies that the root cause is not
present. Hence, the code requires you to execute jent_entropy_init()
and to continue only when this function returns without an error.

Once again, I fail to see that you have managed to explain what the root cause is. And without understanding it, you can't put a value on it or make such broad statements.
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       [email protected]
Automated List Manager                           [email protected]
