On 10/11/2012 04:21 PM, Paul E. McKenney wrote:
> On Thu, Oct 11, 2012 at 12:51:44PM -0700, John Stultz wrote:
>> On 10/11/2012 11:52 AM, Prarit Bhargava wrote:
>>> I've been tracking an odd bug that may involve the RCU NOHZ code and
>>> just want to know if you have any ideas on debugging and/or what might be
>>> wrong.  Note the bug happens on *BOTH* upstream and the current RHEL6 tree.
>>> The data in this email is from running on RHEL6 because that's what I happen
>>> to be running ATM.  The result, however, is _identical_ to that of linux.git
>>> latest.
>>>
>>> The attached program compares userspace TSC reads to the time returned from
>>> the REALTIME_CLOCK[1].  The test does the following
>>>
>>> read tsc1
>>> get REALTIME_CLOCK value
>>> read tsc2
>>>
>>> and then does a comparison between the tsc read and the REALTIME_CLOCK value
>>> to see if they are in sync with each other.
>>>
>>> [I'm leaving out the guts of the analysis here.  It is sufficient to show
>>> examples of "good" data and "bad" data IMO.]
>>>
>>> On a good run, we see little variance in between the values:
>>>
>>>     0   144       0.1
>>>     1   138       1.8
>>>     2   147      -2.9
>> [snip]
>>>    29   144      -0.6
>>> n: 30, slope: 0.50 (1.99 GHz), dev: 1.1 ns, max: 2.9 ns
>>>
>>>
>>> On a bad run, there is a lot of variance between the values:
>>>
>>>     0   144    -346.0
>>>     1   138    1410.8
>>>     2   138    -806.9
>>>     3   141    4006.6
>>>     4   147   -3996.1
>> [snip]
>>>    29   141     -50.3
>>> n: 30, slope: 0.50 (1.99 GHz), dev: 1231.4 ns, max: 4006.6 ns
>>
>>
>> Do you see the same noisy variance when instead of doing:
>>     rdtsc()
>>     clock_gettime()
>>     rdtsc()
>>
>> you do:
>>     clock_gettime()
>>     clock_gettime()
>>
>> And calculate the delta of the timestamp results?

I do not see the noisy variance when comparing clock_gettime() to 
clock_gettime().
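
For reference, the read pattern in the two cases is roughly the following
(a simplified sketch only, not the attached program; the real test also fits
a slope to the TSC/CLOCK_REALTIME pairs and reports the residuals, which is
where the numbers above come from):

/* Simplified sketch of the two read patterns; link with -lrt on older glibc. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static inline uint64_t rdtsc(void)
{
        uint32_t lo, hi;

        /* lfence keeps the TSC read from drifting across the clock call */
        __asm__ volatile("lfence; rdtsc" : "=a" (lo), "=d" (hi));
        return ((uint64_t)hi << 32) | lo;
}

static uint64_t ts_ns(const struct timespec *ts)
{
        return (uint64_t)ts->tv_sec * 1000000000ULL + (uint64_t)ts->tv_nsec;
}

int main(void)
{
        struct timespec t1, t2;
        uint64_t tsc1, tsc2;
        int i;

        for (i = 0; i < 30; i++) {
                /* pattern 1: rdtsc / clock_gettime / rdtsc (the noisy case) */
                tsc1 = rdtsc();
                clock_gettime(CLOCK_REALTIME, &t1);
                tsc2 = rdtsc();

                /* pattern 2: back-to-back clock_gettime (no large variance) */
                clock_gettime(CLOCK_REALTIME, &t1);
                clock_gettime(CLOCK_REALTIME, &t2);

                printf("%2d  tsc window %llu cycles, gettime delta %llu ns\n",
                       i, (unsigned long long)(tsc2 - tsc1),
                       (unsigned long long)(ts_ns(&t2) - ts_ns(&t1)));
        }
        return 0;
}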

>>
>> Also, does this behavior change if you select different clocksources
>> on the system?

No.  If the clocksource is the hpet I still see the large variance, i.e., the
behaviour does not change.
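
For completeness, switching clocksources for these runs is just a sysfs write;
an illustrative sketch, assuming "hpet" is listed in available_clocksource on
the box (needs root):

#include <stdio.h>

#define CS_DIR "/sys/devices/system/clocksource/clocksource0/"

int main(void)
{
        char buf[256];
        FILE *f;

        /* show what the box offers before switching */
        f = fopen(CS_DIR "available_clocksource", "r");
        if (f) {
                if (fgets(buf, sizeof(buf), f))
                        printf("available: %s", buf);
                fclose(f);
        }

        f = fopen(CS_DIR "current_clocksource", "w");
        if (!f) {
                perror("current_clocksource");
                return 1;
        }
        fprintf(f, "hpet\n");
        return fclose(f) ? 1 : 0;
}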

>>
>>> It was noted by the bug reporter that specifying "nohz=off" resolved the
>>> problem.  I tested with "nohz=off" and AFAICT it fixes the issue.  I started
>>> out debugging by assuming that delays in the c-state transitions were not
>>> being properly accounted for in the timing calculations.
>>>
>>> I ran a baseline test on an unmodified kernel (with no extra boot options)
>>> and confirmed that powertop shows the CPUs entering deep c-states while the
>>> test was running for 300 runs.
>>>
>>> I then instrumented the PM QoS and the power management code (specifically
>>> cpuidle).  I put in a large # of printk's to monitor the CPU transitions,
>>> and monitored the power states via powertop in order to verify that the
>>> system was behaving correctly wrt PM QoS.
>>>
>>> If you modify the tstsc script to run 300 times with this modified kernel,
>>> and run powertop in the middle of the script, you will see that the
>>> processors do NOT enter deep c-states.  **This means that PM QoS is doing
>>> its job correctly**.
>>
>> So it's not clear here: do you see the same noisier latencies when
>> using PM QoS to limit deep c-states?

I see the same noisier latencies when using PM QoS to limit the deep c-state
transitions.
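
For reference, one standard way to hold such a PM QoS constraint from
userspace for the length of a test run is /dev/cpu_dma_latency.  A minimal
sketch (not necessarily the exact mechanism used here; the kernel drops the
constraint again when the fd is closed):

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
        int32_t max_latency_us = 0;     /* 0 = shallowest c-states only */
        int fd = open("/dev/cpu_dma_latency", O_WRONLY);

        if (fd < 0) {
                perror("open /dev/cpu_dma_latency");
                return 1;
        }
        /* the constraint is a binary s32, in microseconds */
        if (write(fd, &max_latency_us, sizeof(max_latency_us)) !=
            sizeof(max_latency_us)) {
                perror("write");
                return 1;
        }
        pause();        /* hold the constraint until this process is killed */
        close(fd);
        return 0;
}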

>>
>> Finally, how many cpus are on the machine you see this with?  Does
>> the effect go away with maxcpus=1?


24 physical cores / 48 logical CPUs, 2G RAM per core

The large variance is still there if maxcpus=1.

> 
> Also, what is the value of NR_CPUS?  And exactly which kernel.org kernel
> are you using?

NR_CPUS=4096

I'm using the "main" kernel.org tree,

git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

as of commit 4f1cd91497774488ed16119ec3f54b3daf1561de

(I recompiled this morning and re-ran the tests; they still show a large
variance.)

> 
> The effect of removing the two functions you noted (on 3.6 and earlier)
> is to prevent RCU from checking for dyntick-idle CPUs, likely incurring
> a cache miss for each CPU with interrupts disabled.  If you have a lot
> of CPUs (or even if NR_CPUS is large and you have a smaller number of
> CPUs), this can result in user-space-visible delays.
> 

Interesting.  I'll lower NR_CPUS and see if that shows an improvement.

>                                                       Thanx, Paul
> 
>> It looks like the TSC clocksource is being used, just because the
>> latencies are so low, but is the same clocksource being used in all
>> of these cases?


Yes, the TSC is always being used.

>>
>> One possibility is that if the cpu we're doing our timekeeping
>> accumulation on is different than the one running the test, we might
>> go into deeper idle for longer periods of time. Then when we
>> accumulate time, we have more than a single tick to accumulate and
>> that might require holding the timekeeper/xtime lock for longer
>> times.
>>
>> And the max 2.9 ns variance seems particularly low, given that we do
>> call update_vsyscall every so often, and that should block
>> clock_gettime() callers while we update the vsyscall data.  Could it
>> be that the test is too short to see the locking effect, so you're
>> just getting lucky, and that adding nohz is jostling the regularity
>> of the execution so you then see the lock wait times?  If you
>> increase the samples and sample loops by 1000, does that change the
>> behavior?

That's a possibility, although I suspect that this has more to do with not
executing the RCU NOHZ code, given that we don't see a problem with the
clock_gettime() vs clock_gettime() test.  I wonder if not executing the RCU NOHZ
code somehow introduces a "regularity" in execution that results in the CPU
always being in C0/polling when the test is run?
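
One way to double-check that would be to snapshot the cpuidle usage counters
around a test run; if only the shallow states' counters move, the CPU really
did stay in C0/C1 for the duration.  A rough sketch (cpu0 and the 8-state
limit are just examples):

#include <stdio.h>
#include <string.h>

static int read_line(const char *path, char *buf, size_t len)
{
        FILE *f = fopen(path, "r");

        if (!f)
                return -1;
        if (!fgets(buf, len, f))
                buf[0] = '\0';
        fclose(f);
        buf[strcspn(buf, "\n")] = '\0';
        return 0;
}

int main(void)
{
        char path[128], name[64], usage[64];
        int state;

        for (state = 0; state < 8; state++) {
                snprintf(path, sizeof(path),
                         "/sys/devices/system/cpu/cpu0/cpuidle/state%d/name",
                         state);
                if (read_line(path, name, sizeof(name)))
                        break;          /* no more states */

                snprintf(path, sizeof(path),
                         "/sys/devices/system/cpu/cpu0/cpuidle/state%d/usage",
                         state);
                if (read_line(path, usage, sizeof(usage)))
                        break;

                printf("state%d  %-10s usage=%s\n", state, name, usage);
        }
        return 0;
}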

P.