Re: [PATCH] sched/fair: Fix util_avg of new tasks for asymmetric systems

2018-06-04 Thread Quentin Perret
On Monday 04 Jun 2018 at 14:23:42 (+0200), Vincent Guittot wrote:
> On 4 June 2018 at 13:58, Quentin Perret  wrote:
> > When a new task wakes-up for the first time, its initial utilization
> > is set to half of the spare capacity of its CPU. The current
> > implementation of post_init_entity_util_avg() uses SCHED_CAPACITY_SCALE
> > directly as a capacity reference. As a result, on a big.LITTLE system, a
> > new task waking up on an idle little CPU will be given ~512 of util_avg,
> > even if the CPU's capacity is significantly less than that.
> >
> > Fix this by computing the spare capacity with arch_scale_cpu_capacity().
> >
> > Cc: Ingo Molnar 
> > Cc: Peter Zijlstra 
> > Signed-off-by: Quentin Perret 
> 
> Acked-by: Vincent Guittot 

Thanks!

> 
> > ---
> >  kernel/sched/fair.c | 10 ++++++----
> >  1 file changed, 6 insertions(+), 4 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index e497c05aab7f..f19432c17017 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -735,11 +735,12 @@ static void attach_entity_cfs_rq(struct sched_entity *se);
> >   * To solve this problem, we also cap the util_avg of successive tasks to
> >   * only 1/2 of the left utilization budget:
> >   *
> > - *   util_avg_cap = (1024 - cfs_rq->avg.util_avg) / 2^n
> > + *   util_avg_cap = (cpu_scale - cfs_rq->avg.util_avg) / 2^n
> >   *
> > - * where n denotes the nth task.
> > + * where n denotes the nth task and cpu_scale the CPU capacity.
> >   *
> > - * For example, a simplest series from the beginning would be like:
> > + * For example, for a CPU with 1024 of capacity, a simplest series from
> > + * the beginning would be like:
> >   *
> >   *  task  util_avg: 512, 256, 128,  64,  32,   16,8, ...
> >   * cfs_rq util_avg: 512, 768, 896, 960, 992, 1008, 1016, ...
> > @@ -751,7 +752,8 @@ void post_init_entity_util_avg(struct sched_entity *se)
> >  {
> > struct cfs_rq *cfs_rq = cfs_rq_of(se);
> > struct sched_avg *sa = &se->avg;
> > -   long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
> > +   long cpu_scale = arch_scale_cpu_capacity(NULL, cpu_of(rq_of(cfs_rq)));
> > +   long cap = (long)(cpu_scale - cfs_rq->avg.util_avg) / 2;
> >
> > if (cap > 0) {
> > if (cfs_rq->avg.util_avg != 0) {
> > --
> > 2.17.0
> >
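The capping series described in the patch comment can be illustrated outside the kernel. The sketch below is not kernel code: the function name and plain integer arithmetic are illustrative, and it only reproduces the "each new task gets half of the remaining spare capacity" series for a given cpu_scale.

```python
def post_init_util_avg_series(cpu_scale, n_tasks):
    """Illustrative model of the util_avg cap from the patch comment:
    util_avg_cap = (cpu_scale - cfs_rq_util) / 2 for each successive
    new task, with cfs_rq_util accumulating the assigned values."""
    cfs_rq_util = 0
    tasks = []
    for _ in range(n_tasks):
        cap = (cpu_scale - cfs_rq_util) // 2  # half of the spare capacity
        tasks.append(cap)
        cfs_rq_util += cap
    return tasks, cfs_rq_util

# For a CPU with 1024 of capacity, as in the comment's example:
print(post_init_util_avg_series(1024, 4))  # ([512, 256, 128, 64], 960)
# For a little CPU with capacity 512, the fix keeps the series in bounds:
print(post_init_util_avg_series(512, 4))   # ([256, 128, 64, 32], 480)
```

With SCHED_CAPACITY_SCALE hard-coded, the second series would start at 512 even though the little CPU's capacity is only 512, which is exactly the over-estimation the patch removes.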



Re: [PATCH] sched/fair: Fix util_avg of new tasks for asymmetric systems

2018-06-04 Thread Vincent Guittot
On 4 June 2018 at 13:58, Quentin Perret  wrote:
> When a new task wakes-up for the first time, its initial utilization
> is set to half of the spare capacity of its CPU. The current
> implementation of post_init_entity_util_avg() uses SCHED_CAPACITY_SCALE
> directly as a capacity reference. As a result, on a big.LITTLE system, a
> new task waking up on an idle little CPU will be given ~512 of util_avg,
> even if the CPU's capacity is significantly less than that.
>
> Fix this by computing the spare capacity with arch_scale_cpu_capacity().
>
> Cc: Ingo Molnar 
> Cc: Peter Zijlstra 
> Signed-off-by: Quentin Perret 

Acked-by: Vincent Guittot 

> ---
>  kernel/sched/fair.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e497c05aab7f..f19432c17017 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -735,11 +735,12 @@ static void attach_entity_cfs_rq(struct sched_entity *se);
>   * To solve this problem, we also cap the util_avg of successive tasks to
>   * only 1/2 of the left utilization budget:
>   *
> - *   util_avg_cap = (1024 - cfs_rq->avg.util_avg) / 2^n
> + *   util_avg_cap = (cpu_scale - cfs_rq->avg.util_avg) / 2^n
>   *
> - * where n denotes the nth task.
> + * where n denotes the nth task and cpu_scale the CPU capacity.
>   *
> - * For example, a simplest series from the beginning would be like:
> + * For example, for a CPU with 1024 of capacity, a simplest series from
> + * the beginning would be like:
>   *
>   *  task  util_avg: 512, 256, 128,  64,  32,   16,8, ...
>   * cfs_rq util_avg: 512, 768, 896, 960, 992, 1008, 1016, ...
> @@ -751,7 +752,8 @@ void post_init_entity_util_avg(struct sched_entity *se)
>  {
> struct cfs_rq *cfs_rq = cfs_rq_of(se);
> struct sched_avg *sa = &se->avg;
> -   long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
> +   long cpu_scale = arch_scale_cpu_capacity(NULL, cpu_of(rq_of(cfs_rq)));
> +   long cap = (long)(cpu_scale - cfs_rq->avg.util_avg) / 2;
>
> if (cap > 0) {
> if (cfs_rq->avg.util_avg != 0) {
> --
> 2.17.0
>

