[PATCH v5 2/4] sched/fair: use util_est in LB and WU paths

2018-02-22 Thread Patrick Bellasi
When the scheduler looks at the CPU utilization, the current PELT value
for a CPU is returned straight away. In certain scenarios this can have
undesired side effects on task placement.

For example, since the task utilization is decayed at wakeup time, when
a long sleeping big task is enqueued it does not immediately add a
significant contribution to the target CPU.
As a result we generate a race condition where other tasks can be placed
on the same CPU while it is still considered relatively empty.

In order to reduce this kind of race condition, this patch introduces the
required support to integrate the usage of the CPU's estimated utilization
in cpu_util_wake as well as in update_sg_lb_stats.

The estimated utilization of a CPU is defined to be the maximum between
its PELT's utilization and the sum of the estimated utilization of the
tasks currently RUNNABLE on that CPU.
This allows us to properly represent the spare capacity of a CPU which, for
example, has just got a big task running after a long sleep period.
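
As a standalone sketch (userspace C with made-up utilization numbers, not
the kernel implementation; the real code is the cpu_util_est() hunk in the
diff below), the definition boils down to a max() between the PELT value
and the RUNNABLE tasks' util_est sum, followed by a clamp to the CPU's
original capacity:

	/*
	 * Illustration only: the values below are invented for the
	 * example, the helpers are local and not kernel APIs.
	 */
	#include <stdio.h>

	static unsigned int max_u(unsigned int a, unsigned int b)
	{
		return a > b ? a : b;
	}

	static unsigned int min_u(unsigned int a, unsigned int b)
	{
		return a < b ? a : b;
	}

	int main(void)
	{
		unsigned int util_avg      = 150;  /* decayed PELT utilization        */
		unsigned int util_est_enq  = 680;  /* sum of RUNNABLE tasks' util_est  */
		unsigned int capacity_orig = 1024; /* original capacity of the CPU     */

		/* max(PELT, util_est), clamped to the CPU's capacity */
		unsigned int util_est = max_u(util_avg, util_est_enq);
		util_est = min_u(util_est, capacity_orig);

		printf("cpu_util_est = %u\n", util_est); /* prints 680 */
		return 0;
	}

With a decayed PELT value of 150 but a RUNNABLE util_est sum of 680, the CPU
is reported as ~680 busy rather than almost empty, which is the information
the wakeup and load-balance paths need for placement.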

Signed-off-by: Patrick Bellasi 
Reviewed-by: Dietmar Eggemann 
Cc: Ingo Molnar 
Cc: Peter Zijlstra 
Cc: Rafael J. Wysocki 
Cc: Viresh Kumar 
Cc: Paul Turner 
Cc: Vincent Guittot 
Cc: Morten Rasmussen 
Cc: Dietmar Eggemann 
Cc: linux-kernel@vger.kernel.org
Cc: linux...@vger.kernel.org

---
Changes in v5:
 - always use int instead of long whenever possible (Peter)
 - add missing READ_ONCE barriers (Peter)

Changes in v4:
 - rebased on today's tip/sched/core (commit 460e8c3340a2)
 - ensure cpu_util_wake() is clamped by cpu_capacity_orig() (Pavan)

Changes in v3:
 - rebased on today's tip/sched/core (commit 07881166a892)

Changes in v2:
 - rebase on top of v4.15-rc2
 - tested that overhauled PELT code does not affect the util_est
---
 kernel/sched/fair.c | 81 +
 1 file changed, 76 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c8526687f107..8364771f7301 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6445,6 +6445,41 @@ static unsigned long cpu_util(int cpu)
return (util >= capacity) ? capacity : util;
 }
 
+/**
+ * cpu_util_est: estimated utilization for the specified CPU
+ * @cpu: the CPU to get the estimated utilization for
+ *
+ * The estimated utilization of a CPU is defined to be the maximum between its
+ * PELT's utilization and the sum of the estimated utilization of the tasks
+ * currently RUNNABLE on that CPU.
+ *
+ * This allows us to properly represent the expected utilization of a CPU which
+ * has just got a big task running after a long sleep period. At the same time
+ * however it preserves the benefits of the "blocked utilization" in
+ * describing the potential for other tasks waking up on the same CPU.
+ *
+ * Return: the estimated utilization for the specified CPU
+ */
+static inline unsigned long cpu_util_est(int cpu)
+{
+   unsigned int util, util_est;
+   unsigned int capacity;
+   struct cfs_rq *cfs_rq;
+
+   if (!sched_feat(UTIL_EST))
+   return cpu_util(cpu);
+
+   cfs_rq = &cpu_rq(cpu)->cfs;
+   util = READ_ONCE(cfs_rq->avg.util_avg);
+   util_est = READ_ONCE(cfs_rq->avg.util_est.enqueued);
+   util_est = max(util, util_est);
+
+   capacity = capacity_orig_of(cpu);
+   util_est = min(util_est, capacity);
+
+   return util_est;
+}
+
 static inline unsigned long task_util(struct task_struct *p)
 {
return p->se.avg.util_avg;
@@ -6469,16 +6504,52 @@ static inline unsigned long task_util_est(struct task_struct *p)
  */
 static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
 {
-   unsigned long util, capacity;
+   unsigned int util, util_est;
+   unsigned int capacity;
 
/* Task has no contribution or is new */
if (cpu != task_cpu(p) || !p->se.avg.last_update_time)
-   return cpu_util(cpu);
+   return cpu_util_est(cpu);
+
+   /* Discount task's blocked util from CPU's util */
+   util  = cpu_util(cpu);
+   util -= min_t(unsigned int, util, task_util(p));
+
+   if (!sched_feat(UTIL_EST))
+   return util;
+
+   /*
+* Covered cases:
+* - if *p is the only task sleeping on this CPU, then:
+*  cpu_util (== task_util) > util_est (== 0)
+*   and thus we return:
+*  cpu_util_wake = (cpu_util - task_util) = 0
+*
+* - if other tasks are SLEEPING on the same CPU, which is just waking
+*   up, then:
+*  cpu_util >= task_util
+*  cpu_util > util_est (== 0)
+*   and thus we discount *p's blocked utilization to return:
+*  cpu_util_wake = (cpu_util - task_util) >= 0
+*
+* - if other tasks are RUNNABLE on that CPU and
+*  util_est > cpu_util
+*   then we use util_est since it returns a more restrictive
+*   estimation of the spare capacity on that CPU, by just