On 24.06.2014 21:03, [email protected] wrote:
> Kirill Tkhai <[email protected]> writes:
> 
>> We kill rq->rd on the CPU_DOWN_PREPARE stage:
>>
>>      cpuset_cpu_inactive -> cpuset_update_active_cpus ->
>>      -> partition_sched_domains -> cpu_attach_domain ->
>>      -> rq_attach_root -> set_rq_offline
>>
>> This unthrottles all throttled cfs_rqs.
>>
>> But the CPU is still able to call schedule() until
>>
>>      take_cpu_down->__cpu_disable()
>>
>> is called from stop_machine.
>>
>> In this case the tasks from the just-unthrottled cfs_rqs are pickable
>> in the standard scheduler way, and they are picked by the dying CPU.
>> The cfs_rqs become throttled again, and migrate_tasks()
>> in migration_call() skips their tasks (one more unthrottle
>> in migrate_tasks()->CPU_DYING does not happen, because rq->rd
>> is already NULL).
>>
>> The patch sets runtime_enabled to zero. This guarantees that runtime
>> is not accounted, so the cfs_rqs won't exceed the given
>> cfs_rq->runtime_remaining = 1, and the tasks will be pickable
>> in migrate_tasks(). runtime_enabled is recalculated when the rq
>> becomes online again.
>>
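
(The fair.c half of the patch is not quoted below; roughly, the change
to unthrottle_offline_cfs_rqs() boils down to something like the
untested sketch here, walking the leaf cfs_rqs of the dying rq:)

	for_each_leaf_cfs_rq(rq, cfs_rq) {
		if (!cfs_rq->runtime_enabled)
			continue;

		/* Leave some valid quota so the tasks stay pickable ... */
		cfs_rq->runtime_remaining = 1;
		/* ... and stop accounting: the offline rq must not throttle again. */
		cfs_rq->runtime_enabled = 0;

		if (cfs_rq->throttled)
			unthrottle_cfs_rq(cfs_rq);
	}
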
>> Ben Segall also noticed that we always enable runtime in
>> tg_set_cfs_bandwidth(). Actually, we should do that for online
>> CPUs only. To fix that, we check whether a CPU is online while
>> its rq is locked. This guarantees we do not race with
>> set_rq_offline(), which also requires rq->lock.
>>
>> v2: Fix race with tg_set_cfs_bandwidth().
>>     Move cfs_rq->runtime_enabled=0 above unthrottle_cfs_rq().
>>
>> Signed-off-by: Kirill Tkhai <[email protected]>
>> CC: Konstantin Khorenko <[email protected]>
>> CC: Ben Segall <[email protected]>
>> CC: Paul Turner <[email protected]>
>> CC: Srikar Dronamraju <[email protected]>
>> CC: Mike Galbraith <[email protected]>
>> CC: Peter Zijlstra <[email protected]>
>> CC: Ingo Molnar <[email protected]>
>> ---
>>  kernel/sched/core.c |   15 +++++++++++----
>>  kernel/sched/fair.c |   22 ++++++++++++++++++++++
>>  2 files changed, 33 insertions(+), 4 deletions(-)
>>
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 7f3063c..707a3c5 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -7842,11 +7842,18 @@ static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota)
>>              struct rq *rq = cfs_rq->rq;
>>  
>>              raw_spin_lock_irq(&rq->lock);
>> -            cfs_rq->runtime_enabled = runtime_enabled;
>> -            cfs_rq->runtime_remaining = 0;
>> +            /*
>> +             * Do not enable runtime on offline runqueues. We specially
>> +             * make it disabled in unthrottle_offline_cfs_rqs().
>> +             */
>> +            if (cpu_online(i)) {
>> +                    cfs_rq->runtime_enabled = runtime_enabled;
>> +                    cfs_rq->runtime_remaining = 0;
>> +
>> +                    if (cfs_rq->throttled)
>> +                            unthrottle_cfs_rq(cfs_rq);
>> +            }
> 
> We can just do for_each_online_cpu, yes? Also we probably need
> get_online_cpus/put_online_cpus, and/or want cpu_active_mask instead,
> right?
> 

Yes, we can use for_each_online_cpu/for_each_active_cpu with
get_online_cpus() taken. But that adds one more lock dependency,
which looks worse to me.
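
For reference, the variant you suggest would look roughly like the
untested sketch below; it is the extra get_online_cpus()/put_online_cpus()
pair around the loop that I would like to avoid:

	get_online_cpus();
	for_each_online_cpu(i) {
		struct cfs_rq *cfs_rq = tg->cfs_rq[i];
		struct rq *rq = cfs_rq->rq;

		raw_spin_lock_irq(&rq->lock);
		cfs_rq->runtime_enabled = runtime_enabled;
		cfs_rq->runtime_remaining = 0;

		if (cfs_rq->throttled)
			unthrottle_cfs_rq(cfs_rq);
		raw_spin_unlock_irq(&rq->lock);
	}
	put_online_cpus();

The cpu_online(i) check in the patch needs only rq->lock, which we
already hold at that point.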