On Wed, Jul 15, 2015 at 08:04:41AM +0800, Yuyang Du wrote:
> The cfs_rq's load_avg is composed of runnable_load_avg and blocked_load_avg.
> Before this series, sometimes the runnable_load_avg was used, and sometimes
> the load_avg. Completely replacing all uses of runnable_load_avg with
> load_avg may be too big a leap, i.e., there is a concern that the
> blocked_load_avg would result in an overrated load. Therefore, we bring
> runnable_load_avg back.
> 
> The new cfs_rq's runnable_load_avg is improved so that it is updated with all
> of the runnable sched_entities at the same time, which solves the problem of
> one sched_entity being updated while the others remain stale.
> 
> Signed-off-by: Yuyang Du <yuyang...@intel.com>
> ---

<snip>

> +/* Remove the runnable load generated by se from cfs_rq's runnable load average */
> +static inline void
> +dequeue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
> +{
> +     update_load_avg(se, 1);
> +

I think we need an update_cfs_rq_load_avg() call here, because the cfs_rq's
runnable_load_avg may not be up to date when dequeue_entity_load_avg() is
called, right?
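
Something along these lines is what I have in mind; this is just a rough
sketch, and I'm assuming update_cfs_rq_load_avg() takes the cfs_rq clock and
the cfs_rq (and brings the runnable sums up to date), as elsewhere in this
series:

	static inline void
	dequeue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
	{
		update_load_avg(se, 1);
		/*
		 * Sync the cfs_rq-wide runnable load with the current clock
		 * before subtracting se's contribution, so that we don't
		 * subtract a freshly updated se->avg from a stale
		 * cfs_rq->runnable_load_{avg,sum}.
		 */
		update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);

		cfs_rq->runnable_load_avg =
			max_t(long, cfs_rq->runnable_load_avg - se->avg.load_avg, 0);
		cfs_rq->runnable_load_sum =
			max_t(s64, cfs_rq->runnable_load_sum - se->avg.load_sum, 0);
	}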

> +     cfs_rq->runnable_load_avg =
> +             max_t(long, cfs_rq->runnable_load_avg - se->avg.load_avg, 0);
> +     cfs_rq->runnable_load_sum =
> +             max_t(s64, cfs_rq->runnable_load_sum - se->avg.load_sum, 0);
> +}
> +
>  /*
>   * Task first catches up with cfs_rq, and then subtract
>   * itself from the cfs_rq (task must be off the queue now).

<snip>

> @@ -2982,7 +3015,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>        * Update run-time statistics of the 'current'.
>        */
>       update_curr(cfs_rq);
> -     update_load_avg(se, 1);
> +     dequeue_entity_load_avg(cfs_rq, se);
>  
>       update_stats_dequeue(cfs_rq, se);
>       if (flags & DEQUEUE_SLEEP) {

Thanks and Best Regards,
Boqun
