At Google, we essentially reverted 88ec22d and the subsequent tweaks
to it, keeping vruntime absolute at all times and instead using
task_move_group_fair() to change the basis of the relative min_vruntime
between cpus.  We found this made it a lot easier to reason about and
work with cross-cpu computations.  I could post the patch if it would
be of interest...
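
Roughly, the idea is that an entity's vruntime is never normalized to 0
on dequeue; a move between queues just rebases it by the difference in
the two queues' min_vruntime.  As a sketch of the concept only (not the
actual patch, and the helper name is made up):

	/*
	 * Rebase an absolute vruntime when @se moves from @src to @dst,
	 * rather than subtracting min_vruntime on dequeue and adding the
	 * destination's min_vruntime back on enqueue.
	 */
	static void rebase_vruntime(struct sched_entity *se,
				    struct cfs_rq *src, struct cfs_rq *dst)
	{
		se->vruntime += dst->min_vruntime - src->min_vruntime;
	}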

On Wed, Mar 9, 2016 at 5:06 AM, pavankumar kondeti
<[email protected]> wrote:
> Hi Peter,
>
> On Wed, Mar 9, 2016 at 5:34 PM, Peter Zijlstra <[email protected]> wrote:
>> On Wed, Mar 09, 2016 at 02:52:57PM +0530, Pavan Kondeti wrote:
>>
>>> When a CFS task is enqueued during migration (load balance or change in
>>> affinity), its vruntime is normalized before updating the current and
>>> cfs_rq->min_vruntime.
>>
>> static void
>> enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>> {
>>         /*
>>          * Update the normalized vruntime before updating min_vruntime
>>          * through calling update_curr().
>>          */
>>         if (!(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_WAKING))
>>                 se->vruntime += cfs_rq->min_vruntime;
>>
>>         update_curr(cfs_rq);
>>
>> This, right? Some idiot wrote a comment but forgot to explain why.
>>
>>> If the current entity is a low priority task or belongs to a cgroup
>>> that has lower cpu.shares and it is the only entity queued, there is a
>>> possibility of big update to the cfs_rq->min_vruntime.
>>
>>> As the migrated task is normalized before this update, it gets an
>>> unfair advantage over tasks queued after this point. If the migrated
>>> task is a CPU hog, the other CFS tasks queued on this CPU get
>>> starved.
>>
>> Because it takes a whole while for the newly placed task to gain on the
>> previous task, right?
>>
>
> Yes. The newly woken-up task's vruntime is adjusted w.r.t.
> cfs_rq->min_vruntime, which can potentially be hundreds of msec beyond
> the migrated task's vruntime.
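>
> (For scale: with CFS weights, a nice-19 current task has a weight of 15
> vs. NICE_0_LOAD of 1024, so roughly 10ms of wall-clock execution on it
> advances its vruntime by about 10ms * 1024 / 15 ~= 680ms, which is how
> the gap gets that large.)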
>
>>> If we add the migrated task to destination CPU cfs_rq's rb tree before
>>> updating the current in enqueue_entity(), the cfs_rq->min_vruntime
>>> does not go beyond the newly migrated task. Is this an acceptable
>>> solution?
>>
>> Hurm.. so I'm not sure how that would solve anything. The existing task
>> would still be shot far into the future.
>>
> In my testing, the problem is gone with this approach.
>
> update_min_vruntime(), called from update_curr(), has a check to make
> sure that cfs_rq->min_vruntime does not go beyond the vruntime of the
> leftmost entity (in this case, the migrated task), so we don't see the
> migrated task getting any advantage.
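>
> For reference, the clamping in update_min_vruntime() is roughly this
> (simplified sketch, not verbatim):
>
>	u64 vruntime = cfs_rq->min_vruntime;
>
>	if (cfs_rq->curr)
>		vruntime = cfs_rq->curr->vruntime;
>
>	if (cfs_rq->rb_leftmost) {
>		struct sched_entity *se = rb_entry(cfs_rq->rb_leftmost,
>						   struct sched_entity, run_node);
>
>		/* don't let min_vruntime pass the leftmost entity */
>		vruntime = cfs_rq->curr ? min_vruntime(vruntime, se->vruntime)
>					: se->vruntime;
>	}
>
>	/* and never let it move backwards either */
>	cfs_rq->min_vruntime = max_vruntime(cfs_rq->min_vruntime, vruntime);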
>
>> What you want is to normalize after update_curr()... but we cannot do
>> that in the case cfs_rq->curr == se (which I suppose is what that
>> comment is on about).
>>
>> Does something like the below work?
>>
>
> Thanks for providing this patch. It solved the problem.
>
>> ---
>>  kernel/sched/fair.c | 20 ++++++++++++++------
>>  1 file changed, 14 insertions(+), 6 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 33130529e9b5..3c114d971d84 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -3157,17 +3157,25 @@ static inline void check_schedstat_required(void)
>>  static void
>>  enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>>  {
>> +       bool renorm = !(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_WAKING);
>> +       bool curr = cfs_rq->curr == se;
>> +
>>         /*
>> -        * Update the normalized vruntime before updating min_vruntime
>> -        * through calling update_curr().
>> +        * If we're the current task, we must renormalise before calling
>> +        * update_curr().
>>          */
>> -       if (!(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_WAKING))
>> +       if (renorm && curr)
>>                 se->vruntime += cfs_rq->min_vruntime;
>>
>> +       update_curr(cfs_rq);
>> +
>>         /*
>> -        * Update run-time statistics of the 'current'.
>> +        * Otherwise, renormalise after, such that we're placed at the current
>> +        * moment in time, instead of some random moment in the past.
>>          */
>> -       update_curr(cfs_rq);
>> +       if (renorm && !curr)
>> +               se->vruntime += cfs_rq->min_vruntime;
>> +
>>         enqueue_entity_load_avg(cfs_rq, se);
>>         account_entity_enqueue(cfs_rq, se);
>>         update_cfs_shares(cfs_rq);
>> @@ -3183,7 +3191,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>>                 update_stats_enqueue(cfs_rq, se);
>>                 check_spread(cfs_rq, se);
>>         }
>> -       if (se != cfs_rq->curr)
>> +       if (!curr)
>>                 __enqueue_entity(cfs_rq, se);
>>         se->on_rq = 1;
>>
