Re: [PATCH -tip 15/32] sched: Improve snapshotting of min_vruntime for CGroups

2020-11-25 Thread Peter Zijlstra
On Tue, Nov 24, 2020 at 12:07:44PM -0500, Joel Fernandes wrote:

> Either way would be OK with me. I would suggest retaining the history,
> though, so that the details of the issues we faced are preserved in the
> changelogs and we can refer back to them in the future.

Well, in part that's what we have mail archives for. My main concern is
that there's a lot of back and forth in the patches as presented and
that makes review really awkward.

I've done some of the patch munging proposed, I might do more if I find
the motivation.

But first I should probably go read all that API crud you did on top;
I've not read that before...


Re: [PATCH -tip 15/32] sched: Improve snapshotting of min_vruntime for CGroups

2020-11-24 Thread Joel Fernandes
Hi Peter,

On Tue, Nov 24, 2020 at 11:27:41AM +0100, Peter Zijlstra wrote:
> On Tue, Nov 17, 2020 at 06:19:45PM -0500, Joel Fernandes (Google) wrote:
> > A previous patch improved cross-CPU vruntime comparison operations in
> > pick_next_task(). Improve it further for tasks in CGroups.
> > 
> > In particular, for cross-CPU comparisons, we were previously going to
> > the root-level se(s) for both of the tasks being compared. That was
> > strange. This patch instead finds the se(s) of both tasks that share
> > the same parent (which may be different from the root).
> > 
> > A note about the min_vruntime snapshot and force idling:
> > Abbreviations: fi: force-idled now?; fib: force-idled before?
> > During selection:
> > When we're not fi, we need to update the snapshot.
> > When we're fi and we were not fi before, we must update the snapshot.
> > When we're fi and we were already fi, we must not update the snapshot.
> > 
> > Which gives:
> > fib fi  update?
> > 0   0   1
> > 0   1   1
> > 1   0   1
> > 1   1   0
> > So the min_vruntime snapshot needs to be updated when: !(fib && fi).
> > 
> > Also, the cfs_prio_less() function needs to be aware of whether the core
> > is in force idle or not, since it will use this information to decide
> > whether to advance a cfs_rq's min_vruntime_fi in the hierarchy. So pass
> > this information along via pick_task() -> prio_less().
> 
> Hurmph.. so I'm tempted to smash a bunch of patches together.
> 
>  2 <- 3 (already done - bisection crashes are daft)
>  6 <- 11
>  7 <- {10, 12}
>  9 <- 15
> 
> I'm thinking that would result in an easier-to-read series, or do we
> want to preserve this history?
> 
> (fwiw, I pulled 15 before 13, 14, as I think that makes more sense
> anyway).

Either way would be OK with me. I would suggest retaining the history,
though, so that the details of the issues we faced are preserved in the
changelogs and we can refer back to them in the future.

thanks,

 - Joel



Re: [PATCH -tip 15/32] sched: Improve snapshotting of min_vruntime for CGroups

2020-11-24 Thread Peter Zijlstra
On Tue, Nov 17, 2020 at 06:19:45PM -0500, Joel Fernandes (Google) wrote:
> +static void se_fi_update(struct sched_entity *se, unsigned int fi_seq, bool forceidle)
>  {
> - bool samecpu = task_cpu(a) == task_cpu(b);
> + bool root = true;
> + long old, new;

My compiler was not impressed by all those variable definitions.

> +
> + for_each_sched_entity(se) {
> + struct cfs_rq *cfs_rq = cfs_rq_of(se);
> +
> + if (forceidle) {
> + if (cfs_rq->forceidle_seq == fi_seq)
> + break;
> + cfs_rq->forceidle_seq = fi_seq;
> + }
> +
> + cfs_rq->min_vruntime_fi = cfs_rq->min_vruntime;
> + }
> +}
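
For reference, a minimal sketch of the same walk with the unused locals
dropped (using only the helpers visible in the quoted hunk):

	static void se_fi_update(struct sched_entity *se, unsigned int fi_seq, bool forceidle)
	{
		for_each_sched_entity(se) {
			struct cfs_rq *cfs_rq = cfs_rq_of(se);

			if (forceidle) {
				/* This subtree was already snapshotted in this fi epoch. */
				if (cfs_rq->forceidle_seq == fi_seq)
					break;
				cfs_rq->forceidle_seq = fi_seq;
			}

			/* Snapshot min_vruntime for later cross-CPU comparison. */
			cfs_rq->min_vruntime_fi = cfs_rq->min_vruntime;
		}
	}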


Re: [PATCH -tip 15/32] sched: Improve snapshotting of min_vruntime for CGroups

2020-11-24 Thread Peter Zijlstra
On Tue, Nov 17, 2020 at 06:19:45PM -0500, Joel Fernandes (Google) wrote:
> A previous patch improved cross-CPU vruntime comparison operations in
> pick_next_task(). Improve it further for tasks in CGroups.
> 
> In particular, for cross-CPU comparisons, we were previously going to
> the root-level se(s) for both of the tasks being compared. That was
> strange. This patch instead finds the se(s) of both tasks that share
> the same parent (which may be different from the root).
> 
> A note about the min_vruntime snapshot and force idling:
> Abbreviations: fi: force-idled now?; fib: force-idled before?
> During selection:
> When we're not fi, we need to update the snapshot.
> When we're fi and we were not fi before, we must update the snapshot.
> When we're fi and we were already fi, we must not update the snapshot.
> 
> Which gives:
> fib fi  update?
> 0   0   1
> 0   1   1
> 1   0   1
> 1   1   0
> So the min_vruntime snapshot needs to be updated when: !(fib && fi).
> 
> Also, the cfs_prio_less() function needs to be aware of whether the core
> is in force idle or not, since it will use this information to decide
> whether to advance a cfs_rq's min_vruntime_fi in the hierarchy. So pass
> this information along via pick_task() -> prio_less().

Hurmph.. so I'm tempted to smash a bunch of patches together.

 2 <- 3 (already done - bisection crashes are daft)
 6 <- 11
 7 <- {10, 12}
 9 <- 15

I'm thinking that would result in an easier-to-read series, or do we
want to preserve this history?

(fwiw, I pulled 15 before 13, 14, as I think that makes more sense
anyway).

Hmm?


[PATCH -tip 15/32] sched: Improve snapshotting of min_vruntime for CGroups

2020-11-17 Thread Joel Fernandes (Google)
A previous patch improved cross-CPU vruntime comparison operations in
pick_next_task(). Improve it further for tasks in CGroups.

In particular, for cross-CPU comparisons, we were previously going to
the root-level se(s) for both of the tasks being compared. That was
strange. This patch instead finds the se(s) of both tasks that share
the same parent (which may be different from the root).
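
Roughly, the matching-level walk looks like this (a sketch of the idea,
not the exact hunk; parent_entity() and se->depth are pre-existing
fair.c constructs):

	/*
	 * Walk both hierarchies upwards until sea and seb sit in cfs_rq's
	 * of the same task group, i.e. they are comparable siblings.
	 */
	while (sea->cfs_rq->tg != seb->cfs_rq->tg) {
		int sea_depth = sea->depth;
		int seb_depth = seb->depth;

		if (sea_depth >= seb_depth)
			sea = parent_entity(sea);
		if (seb_depth >= sea_depth)
			seb = parent_entity(seb);
	}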

A note about the min_vruntime snapshot and force idling:
Abbreviations: fi: force-idled now?; fib: force-idled before?
During selection:
When we're not fi, we need to update the snapshot.
When we're fi and we were not fi before, we must update the snapshot.
When we're fi and we were already fi, we must not update the snapshot.

Which gives:
fib fi  update?
0   0   1
0   1   1
1   0   1
1   1   0
So the min_vruntime snapshot needs to be updated when: !(fib && fi).
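
Expressed as code, the rule is just the following (a sketch; in the
patch the "already fi" case is short-circuited by the forceidle_seq
check in se_fi_update()):

	/* Update the snapshot unless we stay within one force-idle epoch. */
	if (!(fi_before && fi))
		cfs_rq->min_vruntime_fi = cfs_rq->min_vruntime;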

Also, the cfs_prio_less() function needs to be aware of whether the core
is in force idle or not, since it will use this information to decide
whether to advance a cfs_rq's min_vruntime_fi in the hierarchy. So pass
this information along via pick_task() -> prio_less().
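
With the snapshots in place, the comparison normalizes each se's
vruntime against its cfs_rq's snapshot; roughly (a sketch of the
cfs_prio_less() tail, with sea/seb the matched se's from the walk
above):

	/*
	 * Compare vruntimes after normalizing them against the
	 * min_vruntime_fi snapshots refreshed by se_fi_update().
	 */
	delta = (s64)(sea->vruntime - seb->vruntime) +
		(s64)(cfs_rqb->min_vruntime_fi - cfs_rqa->min_vruntime_fi);

	return delta > 0;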

Reviewed-by: Vineeth Pillai 
Suggested-by: Peter Zijlstra (Intel) 
Signed-off-by: Joel Fernandes (Google) 
---
 kernel/sched/core.c  | 61 +
 kernel/sched/fair.c  | 80 
 kernel/sched/sched.h |  7 +++-
 3 files changed, 97 insertions(+), 51 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3b373b592680..20125431af87 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -101,7 +101,7 @@ static inline int __task_prio(struct task_struct *p)
  */
 
 /* real prio, less is less */
-static inline bool prio_less(struct task_struct *a, struct task_struct *b)
+static inline bool prio_less(struct task_struct *a, struct task_struct *b, bool in_fi)
 {
 
int pa = __task_prio(a), pb = __task_prio(b);
@@ -116,7 +116,7 @@ static inline bool prio_less(struct task_struct *a, struct task_struct *b)
return !dl_time_before(a->dl.deadline, b->dl.deadline);
 
if (pa == MAX_RT_PRIO + MAX_NICE)   /* fair */
-   return cfs_prio_less(a, b);
+   return cfs_prio_less(a, b, in_fi);
 
return false;
 }
@@ -130,7 +130,7 @@ static inline bool __sched_core_less(struct task_struct *a, struct task_struct *
return false;
 
/* flip prio, so high prio is leftmost */
-   if (prio_less(b, a))
+   if (prio_less(b, a, task_rq(a)->core->core_forceidle))
return true;
 
return false;
@@ -5101,7 +5101,7 @@ static inline bool cookie_match(struct task_struct *a, struct task_struct *b)
  * - Else returns idle_task.
  */
 static struct task_struct *
-pick_task(struct rq *rq, const struct sched_class *class, struct task_struct *max)
+pick_task(struct rq *rq, const struct sched_class *class, struct task_struct *max, bool in_fi)
 {
struct task_struct *class_pick, *cookie_pick;
unsigned long cookie = rq->core->core_cookie;
@@ -5116,7 +5116,7 @@ pick_task(struct rq *rq, const struct sched_class *class, struct task_struct *ma
 * higher priority than max.
 */
if (max && class_pick->core_cookie &&
-   prio_less(class_pick, max))
+   prio_less(class_pick, max, in_fi))
return idle_sched_class.pick_task(rq);
 
return class_pick;
@@ -5135,13 +5135,15 @@ pick_task(struct rq *rq, const struct sched_class *class, struct task_struct *ma
 * the core (so far) and it must be selected, otherwise we must go with
 * the cookie pick in order to satisfy the constraint.
 */
-   if (prio_less(cookie_pick, class_pick) &&
-   (!max || prio_less(max, class_pick)))
+   if (prio_less(cookie_pick, class_pick, in_fi) &&
+   (!max || prio_less(max, class_pick, in_fi)))
return class_pick;
 
return cookie_pick;
 }
 
+extern void task_vruntime_update(struct rq *rq, struct task_struct *p, bool in_fi);
+
 static struct task_struct *
 pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 {
@@ -5230,9 +5232,14 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 
if (!next->core_cookie) {
rq->core_pick = NULL;
+   /*
+* For robustness, update the min_vruntime_fi for
+* unconstrained picks as well.
+*/
+   WARN_ON_ONCE(fi_before);
+   task_vruntime_update(rq, next, false);
goto done;
}
-   need_sync = true;
}
 
for_each_cpu(i, smt_mask) {
@@ -5244,14 +5251,6 @@ pick_next_task(struct rq *rq, struct task_str