Re: [PATCH 2/3] sched/fair: trigger asym_packing during idle load balance

2018-12-20 Thread Valentin Schneider
On 20/12/2018 14:33, Vincent Guittot wrote:
[...]
>> As in the previous thread, I'll still argue that if you want to *reliably*
>> exploit newidle balances to do asym packing active balancing, you should
>> add some logic to raise rq->rd->overload when we notice some asym packing
>> could be done, so that it can be leveraged by a higher-priority CPU doing
>> a newidle balance.
> 
> The source CPU running the task is never overloaded.
> We need to start the load balance to know whether it's worth migrating the task.
> 

Yep, that's my point exactly:  if you ever want to active balance a task
on a rq with nr_running == 1 when a higher priority CPU goes newidle, you'd
need an asym-packing version of:

  757ffdd705ee ("sched/fair: Set rq->rd->overload when misfit")

It's somewhat orthogonal to this patch, but the reason I'm bringing this up
is that the commit log says

  "newly idle load balance is not always triggered when a cpu becomes idle"

And not having root_domain->overload raised is a reason for that issue.

>>
>> Otherwise the few newidle asym-packing active balances you'll get will be
>> due to somewhat random luck because we happened to set that overload flag
>> at some point.
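
The flag-raising idea above can be sketched as a standalone toy model (this is
not kernel code; the struct and function names are illustrative, and "priority"
follows the asym-packing convention where a higher value is preferred): detect,
while a low-priority CPU is busy, that packing onto an idle higher-priority CPU
would be worthwhile, so the root-domain overload flag could be raised and a
later newidle balance on the high-priority CPU would not be skipped.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of an asym-packing analogue of 757ffdd705ee ("sched/fair:
 * Set rq->rd->overload when misfit"). Names are hypothetical.
 */
struct rq_model {
	int cpu_priority;	/* asym-packing priority; higher is preferred */
	int nr_running;		/* tasks currently on this runqueue */
};

/*
 * Return true if some busy, lower-priority CPU has a task that could be
 * packed onto 'idle_cpu'. In the kernel this check would feed into
 * raising rq->rd->overload so the newidle balance actually runs.
 */
static bool asym_packing_possible(const struct rq_model *rqs, int nr_cpus,
				  int idle_cpu)
{
	for (int i = 0; i < nr_cpus; i++) {
		if (i == idle_cpu)
			continue;
		/* A busy CPU of lower priority makes packing worthwhile. */
		if (rqs[i].nr_running >= 1 &&
		    rqs[i].cpu_priority < rqs[idle_cpu].cpu_priority)
			return true;
	}
	return false;
}
```

With two CPUs where the low-priority one runs a single task, the check fires
for the idle high-priority CPU but not the other way around.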


Re: [PATCH 2/3] sched/fair: trigger asym_packing during idle load balance

2018-12-20 Thread Vincent Guittot
On Thu, 20 Dec 2018 at 12:19, Valentin Schneider
 wrote:
>
> On 20/12/2018 07:55, Vincent Guittot wrote:
> > newly idle load balance is not always triggered when a cpu becomes idle.
> > This prevents the scheduler from getting a chance to migrate tasks for asym packing.
> > Enable active migration for asym packing during idle load balance too.
> >
> > Signed-off-by: Vincent Guittot 
> > ---
> >  kernel/sched/fair.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 9b31247..487c73e 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -8853,7 +8853,7 @@ static int need_active_balance(struct lb_env *env)
> >  {
> >   struct sched_domain *sd = env->sd;
> >
> > - if (env->idle == CPU_NEWLY_IDLE) {
> > + if (env->idle != CPU_NOT_IDLE) {
> >
> >   /*
> >* ASYM_PACKING needs to force migrate tasks from busy but
> >
>
> Regarding just extending the condition to include idle balance:
>
> Reviewed-by: Valentin Schneider 
>
>
> As in the previous thread, I'll still argue that if you want to *reliably*
> exploit newidle balances to do asym packing active balancing, you should
> add some logic to raise rq->rd->overload when we notice some asym packing
> could be done, so that it can be leveraged by a higher-priority CPU doing
> a newidle balance.

The source CPU running the task is never overloaded.
We need to start the load balance to know whether it's worth migrating the task.

>
> Otherwise the few newidle asym-packing active balances you'll get will be
> due to somewhat random luck because we happened to set that overload flag
> at some point.


Re: [PATCH 2/3] sched/fair: trigger asym_packing during idle load balance

2018-12-20 Thread Valentin Schneider
On 20/12/2018 07:55, Vincent Guittot wrote:
> newly idle load balance is not always triggered when a cpu becomes idle.
> This prevents the scheduler from getting a chance to migrate tasks for asym packing.
> Enable active migration for asym packing during idle load balance too.
> 
> Signed-off-by: Vincent Guittot 
> ---
>  kernel/sched/fair.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 9b31247..487c73e 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8853,7 +8853,7 @@ static int need_active_balance(struct lb_env *env)
>  {
>   struct sched_domain *sd = env->sd;
>  
> - if (env->idle == CPU_NEWLY_IDLE) {
> + if (env->idle != CPU_NOT_IDLE) {
>  
>   /*
>* ASYM_PACKING needs to force migrate tasks from busy but
> 

Regarding just extending the condition to include idle balance:

Reviewed-by: Valentin Schneider 


As in the previous thread, I'll still argue that if you want to *reliably*
exploit newidle balances to do asym packing active balancing, you should
add some logic to raise rq->rd->overload when we notice some asym packing
could be done, so that it can be leveraged by a higher-priority CPU doing
a newidle balance.

Otherwise the few newidle asym-packing active balances you'll get will be
due to somewhat random luck because we happened to set that overload flag
at some point.


[PATCH 2/3] sched/fair: trigger asym_packing during idle load balance

2018-12-19 Thread Vincent Guittot
newly idle load balance is not always triggered when a cpu becomes idle.
This prevents the scheduler from getting a chance to migrate tasks for asym packing.
Enable active migration for asym packing during idle load balance too.

Signed-off-by: Vincent Guittot 
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9b31247..487c73e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8853,7 +8853,7 @@ static int need_active_balance(struct lb_env *env)
 {
struct sched_domain *sd = env->sd;
 
-   if (env->idle == CPU_NEWLY_IDLE) {
+   if (env->idle != CPU_NOT_IDLE) {
 
/*
 * ASYM_PACKING needs to force migrate tasks from busy but
-- 
2.7.4
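
The effect of the one-line change can be shown with a standalone snippet
(not kernel code; the enum mirrors the kernel's `cpu_idle_type`, and the two
helpers model the old and new conditions in `need_active_balance()`): the old
check admitted only the newidle path, while the new one admits every idle
path, i.e. anything other than `CPU_NOT_IDLE`.

```c
#include <assert.h>
#include <stdbool.h>

/* Mirrors the kernel's cpu_idle_type for illustration. */
enum cpu_idle_type { CPU_IDLE, CPU_NOT_IDLE, CPU_NEWLY_IDLE };

/* Old condition: asym-packing active balance only on newidle balance. */
static bool asym_check_before(enum cpu_idle_type idle)
{
	return idle == CPU_NEWLY_IDLE;
}

/* New condition: also allowed on the periodic idle balance path. */
static bool asym_check_after(enum cpu_idle_type idle)
{
	return idle != CPU_NOT_IDLE;
}
```

Only the `CPU_IDLE` case changes behavior: the periodic idle load balance now
also qualifies for asym-packing active migration.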



[RESEND PATCH 2/3] sched/fair: trigger asym_packing during idle load balance

2018-10-02 Thread Vincent Guittot
newly idle load balance is not always triggered when a cpu becomes idle.
This prevents the scheduler from getting a chance to migrate tasks for asym packing.
Enable active migration for asym packing during idle load balance too.

Signed-off-by: Vincent Guittot 
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0ed99ad2..00f2171 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8460,7 +8460,7 @@ static int need_active_balance(struct lb_env *env)
 {
struct sched_domain *sd = env->sd;
 
-   if (env->idle == CPU_NEWLY_IDLE) {
+   if (env->idle != CPU_NOT_IDLE) {
 
/*
 * ASYM_PACKING needs to force migrate tasks from busy but
-- 
2.7.4




[PATCH 2/3] sched/fair: trigger asym_packing during idle load balance

2018-08-07 Thread Vincent Guittot
newly idle load balance is not always triggered when a cpu becomes idle.
This prevents the scheduler from getting a chance to migrate tasks for asym packing.
Enable active migration for asym packing during idle load balance too.

Signed-off-by: Vincent Guittot 
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c376cd0..5f1b6c6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8364,7 +8364,7 @@ static int need_active_balance(struct lb_env *env)
 {
struct sched_domain *sd = env->sd;
 
-   if (env->idle == CPU_NEWLY_IDLE) {
+   if (env->idle != CPU_NOT_IDLE) {
 
/*
 * ASYM_PACKING needs to force migrate tasks from busy but
-- 
2.7.4


