At Mon, 19 May 2014 12:32:47 +0200,
Peter Zijlstra wrote:
> 
> OK, so can someone explain this ->timer_active thing? esp. what's the
> 'obvious' difference with hrtimer_active()?

A good question :)

> Ideally we'd change the lot to not have this, but if we have to keep it
> we'll need to make it lockdep visible because all this stinks

If the patch below is what Ben means, timer_active becomes even more unused.
It now seems to me that it should be perfectly possible to drop it entirely.
I'll try to prepare a patch for that.

--

index 7570dd9..be7865e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3408,7 +3408,6 @@ static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun)
        u64 runtime, runtime_expires;
        int idle = 1, throttled;
 
-       raw_spin_lock(&cfs_b->lock);
        /* no need to continue the timer with no bandwidth constraint */
        if (cfs_b->quota == RUNTIME_INF)
                goto out_unlock;
@@ -3477,7 +3476,6 @@ static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun)
 out_unlock:
        if (idle)
                cfs_b->timer_active = 0;
-       raw_spin_unlock(&cfs_b->lock);
 
        return idle;
 }
@@ -3656,6 +3654,8 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
        int overrun;
        int idle = 0;
 
+       raw_spin_lock(&cfs_b->lock);
+
        for (;;) {
                now = hrtimer_cb_get_time(timer);
                overrun = hrtimer_forward(timer, now, cfs_b->period);
@@ -3666,6 +3666,8 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
                idle = do_sched_cfs_period_timer(cfs_b, overrun);
        }
 
+       raw_spin_unlock(&cfs_b->lock);
+
        return idle ? HRTIMER_NORESTART : HRTIMER_RESTART;
 }
