In the last thread about time stamps and time representations, I took
the position that kernel timestamps had to be as simple and as
time-efficient as possible so as not to impact overall system
performance.  I was asked for data about timekeeping overhead causing
problems to back up my claims.  I've spent a little time looking
around, and I cannot point to a paper on the topic.  However, I have
found raw data showing that small changes in timekeeping can have
measurable effects on macro benchmarks:

Comparing apples to apples, changing only the timecounter from TSC to
ACPI-Fast, we see the following results for one element of the
testing matrix (the others are similar):

    select_index    20000   0       0       14097.47
    select_index    20000   0       0       13741.43
    select_index    20000   1       0       13704.01
    select_index    20000   0       0       13626.05
    select_index    20000   0       0       13769.32


    select_index    20000   1       0       13638.09
    select_index    20000   1       0       15204.89
    select_index    20000   0       0       15126.16
    select_index    20000   1       0       15199.22
    select_index    20000   1       0       15111.09

The far right column is transactions per second.  Even the 'eyeball'
test suggests that there's a significant difference.  Eliminating the
one outlier from each dataset shows that the difference is large:

    N           Min           Max        Median           Avg         Stddev
x   4      13626.05      13769.32      13722.72     13710.203      62.155788
+   4      15111.09      15204.89      15162.69      15160.34      48.614784
Difference at 95.0% confidence
        1450.14 +/- 96.546
        10.5771% +/- 0.704191%
        (Student's t, pooled s = 55.7976)
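The summary above can be reproduced from the four non-outlier samples
of each run.  Here's a sketch using only the Python standard library;
the t critical value for 6 degrees of freedom at 95% confidence is
hardcoded, since that's the only distribution lookup needed:

```python
from statistics import mean, stdev
from math import sqrt

# the four non-outlier samples from each run above
tsc  = [13741.43, 13704.01, 13626.05, 13769.32]   # first run, outlier dropped
acpi = [15204.89, 15126.16, 15199.22, 15111.09]   # second run, outlier dropped

n1, n2 = len(tsc), len(acpi)
diff = mean(acpi) - mean(tsc)

# pooled standard deviation for two independent samples
sp = sqrt(((n1 - 1) * stdev(tsc) ** 2
         + (n2 - 1) * stdev(acpi) ** 2) / (n1 + n2 - 2))

t_crit = 2.447   # Student's t, two-sided 95%, 6 degrees of freedom
half = t_crit * sp * sqrt(1 / n1 + 1 / n2)

print(f"{diff:.2f} +/- {half:.2f}")
print(f"{100 * diff / mean(tsc):.4f}% +/- {100 * half / mean(tsc):.4f}%")
```

This matches the 1450.14 +/- ~96.5 interval reported above (small
differences in the last digit come from rounding the t value).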

This pattern repeats again and again across the different data sets,
although I won't bore the list with the repetition (I just quickly
eyeballed them).  The only difference between the two runs above is
that reading the 'TSC' timecounter takes on the order of 2ns, while
ACPI-Fast is closer to 60ns (plus the overhead of the timekeeping
system, which is the same for each timecounter).  This small
difference results in a 10% tps difference in these tests.
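For anyone who wants to see per-read clock cost from userspace, here
is a rough sketch assuming a POSIX system with Python 3.7+'s
time.clock_gettime_ns.  Interpreter and call overhead dominate, so
the absolute numbers are far larger than the raw 2ns/60ns timecounter
reads discussed above; the point is the method: the same loop run
under different timecounter settings would expose the difference.

```python
import time

def per_read_cost_ns(clock=time.CLOCK_MONOTONIC, n=100_000):
    # amortize the cost of n clock reads over the loop
    t0 = time.perf_counter_ns()
    for _ in range(n):
        time.clock_gettime_ns(clock)
    t1 = time.perf_counter_ns()
    return (t1 - t0) / n

cost = per_read_cost_ns()
print(f"~{cost:.0f} ns per clock_gettime call (includes interpreter overhead)")
```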

While this is not a scientific study, and hasn't been put through the
rigors of peer review, it is a pattern that has repeated itself in
many of the other benchmarks that were done at the same time.  It is
certainly suggestive of a correlation between the time it takes to do
timekeeping and overall system performance.

It certainly would be worth testing other timestamp/timekeeping
operations to see if they also produce a similar effect, but I've not
had the time to implement something to test that hypothesis.  It is a
reasonable inference that overhead in timestamp math would have
similar effects to the timekeeping overhead, perhaps more so, since
timestamp math is done more often than raw generation of timestamps.
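As a purely illustrative sketch of why representation matters for
timestamp math (the two representations here are hypothetical, not
any particular kernel's): a flat 64-bit nanosecond count needs one
add per operation, while a (sec, nsec) pair needs an add plus carry
normalization, and that extra work accumulates with every operation.

```python
import timeit

NS = 1_000_000_000  # nanoseconds per second

def add_ns(a, b):
    # flat nanosecond counters: one integer add
    return a + b

def add_pair(a, b):
    # (sec, nsec) pairs: add componentwise, then carry the overflow
    sec, nsec = a[0] + b[0], a[1] + b[1]
    if nsec >= NS:
        sec += 1
        nsec -= NS
    return sec, nsec

# the two forms agree on the same instant
x, y = (1000, 999_999_999), (2000, 2)
assert add_pair(x, y) == (3001, 1)
assert add_ns(x[0] * NS + x[1], y[0] * NS + y[1]) == 3001 * NS + 1

flat = timeit.timeit(lambda: add_ns(5, 7), number=100_000)
pair = timeit.timeit(lambda: add_pair(x, y), number=100_000)
print(f"flat: {flat:.4f}s  pair: {pair:.4f}s")
```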

