Re: [PATCH v2] sched/fair: Fix util_avg of new tasks for asymmetric systems
On Thu, Jul 05, 2018 at 08:53:50AM +0100, Quentin Perret wrote:
> Is there anything else I should do for this patch ?

Pinging me was the right thing. Have it now, much thanks!
Re: [PATCH v2] sched/fair: Fix util_avg of new tasks for asymmetric systems
Hi,

On Tuesday 12 Jun 2018 at 12:22:15 (+0100), Quentin Perret wrote:
> When a new task wakes-up for the first time, its initial utilization
> is set to half of the spare capacity of its CPU. The current
> implementation of post_init_entity_util_avg() uses SCHED_CAPACITY_SCALE
> directly as a capacity reference. As a result, on a big.LITTLE system, a
> new task waking up on an idle little CPU will be given ~512 of util_avg,
> even if the CPU's capacity is significantly less than that.
>
> Fix this by computing the spare capacity with arch_scale_cpu_capacity().
>
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Acked-by: Vincent Guittot
> Signed-off-by: Quentin Perret
>
> ---
> v2: added "Acked-by: Vincent Guittot"
> ---
>  kernel/sched/fair.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e497c05aab7f..f19432c17017 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -735,11 +735,12 @@ static void attach_entity_cfs_rq(struct sched_entity *se);
>   * To solve this problem, we also cap the util_avg of successive tasks to
>   * only 1/2 of the left utilization budget:
>   *
> - *   util_avg_cap = (1024 - cfs_rq->avg.util_avg) / 2^n
> + *   util_avg_cap = (cpu_scale - cfs_rq->avg.util_avg) / 2^n
>   *
> - * where n denotes the nth task.
> + * where n denotes the nth task and cpu_scale the CPU capacity.
>   *
> - * For example, a simplest series from the beginning would be like:
> + * For example, for a CPU with 1024 of capacity, a simplest series from
> + * the beginning would be like:
>   *
>   *  task  util_avg: 512, 256, 128,  64,  32,   16,    8, ...
>   * cfs_rq util_avg: 512, 768, 896, 960, 992, 1008, 1016, ...
> @@ -751,7 +752,8 @@ void post_init_entity_util_avg(struct sched_entity *se)
>  {
>  	struct cfs_rq *cfs_rq = cfs_rq_of(se);
>  	struct sched_avg *sa = &se->avg;
> -	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
> +	long cpu_scale = arch_scale_cpu_capacity(NULL, cpu_of(rq_of(cfs_rq)));
> +	long cap = (long)(cpu_scale - cfs_rq->avg.util_avg) / 2;
>
>  	if (cap > 0) {
>  		if (cfs_rq->avg.util_avg != 0) {
> --
> 2.17.1

Is there anything else I should do for this patch ?

Thanks,
Quentin
[PATCH v2] sched/fair: Fix util_avg of new tasks for asymmetric systems
When a new task wakes-up for the first time, its initial utilization
is set to half of the spare capacity of its CPU. The current
implementation of post_init_entity_util_avg() uses SCHED_CAPACITY_SCALE
directly as a capacity reference. As a result, on a big.LITTLE system, a
new task waking up on an idle little CPU will be given ~512 of util_avg,
even if the CPU's capacity is significantly less than that.

Fix this by computing the spare capacity with arch_scale_cpu_capacity().

Cc: Ingo Molnar
Cc: Peter Zijlstra
Acked-by: Vincent Guittot
Signed-off-by: Quentin Perret

---
v2: added "Acked-by: Vincent Guittot"
---
 kernel/sched/fair.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e497c05aab7f..f19432c17017 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -735,11 +735,12 @@ static void attach_entity_cfs_rq(struct sched_entity *se);
  * To solve this problem, we also cap the util_avg of successive tasks to
  * only 1/2 of the left utilization budget:
  *
- *   util_avg_cap = (1024 - cfs_rq->avg.util_avg) / 2^n
+ *   util_avg_cap = (cpu_scale - cfs_rq->avg.util_avg) / 2^n
  *
- * where n denotes the nth task.
+ * where n denotes the nth task and cpu_scale the CPU capacity.
  *
- * For example, a simplest series from the beginning would be like:
+ * For example, for a CPU with 1024 of capacity, a simplest series from
+ * the beginning would be like:
  *
  *  task  util_avg: 512, 256, 128,  64,  32,   16,    8, ...
  * cfs_rq util_avg: 512, 768, 896, 960, 992, 1008, 1016, ...
@@ -751,7 +752,8 @@ void post_init_entity_util_avg(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	struct sched_avg *sa = &se->avg;
-	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
+	long cpu_scale = arch_scale_cpu_capacity(NULL, cpu_of(rq_of(cfs_rq)));
+	long cap = (long)(cpu_scale - cfs_rq->avg.util_avg) / 2;

 	if (cap > 0) {
 		if (cfs_rq->avg.util_avg != 0) {
--
2.17.1