On 14/06/18 15:18, Quentin Perret wrote:
> On Thursday 14 Jun 2018 at 16:11:18 (+0200), Juri Lelli wrote:
> > On 14/06/18 14:58, Quentin Perret wrote:
> >
> > [...]
> >
> > > Hmm not sure if this can help but I think that rebuild_sched_domains()
> >
On 14/06/18 14:58, Quentin Perret wrote:
[...]
> Hmm not sure if this can help but I think that rebuild_sched_domains()
> does _not_ take the hotplug lock before calling partition_sched_domains()
> when CONFIG_CPUSETS=n. But it does take it for CONFIG_CPUSETS=y.
Did you mean cpuset_mutex?
On 14/06/18 09:45, Steven Rostedt wrote:
> On Wed, 13 Jun 2018 14:17:10 +0200
> Ju
> > diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> > index b42037e6e81d..d26fd4795aa3 100644
> > --- a/kernel/cgroup/cpuset.c
> > +++ b/kernel/cgroup/cpuset.c
> > @@ -2409,6 +2409,22 @@ void __init
On 14/06/18 09:47, Steven Rostedt wrote:
> On Thu, 14 Jun 2018 15:42:34 +0200
> Juri Lelli wrote:
>
> > On 14/06/18 09:33, Steven Rostedt wrote:
> > > On Wed, 13 Jun 2018 14:17:07 +0200
> > > Juri Lelli wrote:
> > >
> > > > From: Mathie
On 14/06/18 09:35, Steven Rostedt wrote:
> On Wed, 13 Jun 2018 14:17:08 +0200
> Juri Lelli wrote:
[...]
> > +/*
> > + * Call with hotplug lock held
> > + */
> > +void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
> > +
On 14/06/18 09:33, Steven Rostedt wrote:
> On Wed, 13 Jun 2018 14:17:07 +0200
> Juri Lelli wrote:
>
> > From: Mathieu Poirier
> >
> > The comment above function partition_sched_domains() clearly states that
> > the cpu_hotplug_lock should be held
From: Mathieu Poirier
Calls to task_rq_unlock() are done several times in function
__sched_setscheduler(). This is fine when only the rq lock needs to be
handled but not so much when other locks come into play.
This patch streamlines the release of the rq lock so that only one
location needs to
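The restructuring Mathieu describes is the classic kernel goto-unlock idiom: every exit path funnels through a single unlock site. A standalone toy sketch of that shape (a counter stands in for `task_rq_unlock()`, and the error values are invented, not `__sched_setscheduler()`'s real ones):

```c
#include <assert.h>

static int lock_balance; /* +1 on lock, -1 on unlock; must end at 0 */

static void fake_rq_lock(void)   { lock_balance++; }
static void fake_rq_unlock(void) { lock_balance--; }

/*
 * Every error path funnels through one unlock site, so a later patch
 * that adds a second lock only has to touch one exit point.
 */
static int sketch_setscheduler(int policy_valid, int perm_ok)
{
	int ret = 0;

	fake_rq_lock();
	if (!policy_valid) {
		ret = -1;
		goto unlock;
	}
	if (!perm_ok) {
		ret = -2;
		goto unlock;
	}
	/* the actual scheduling-class change would happen here */
unlock:
	fake_rq_unlock();
	return ret;
}
```

With a single release point, taking extra locks alongside the rq lock later only requires changing one location instead of every early return.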
Lelli
Signed-off-by: Mathieu Poirier
[modified changelog]
Signed-off-by: Juri Lelli
---
kernel/sched/topology.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 61a1125c1ae4..96eee22fafe8 100644
--- a/kernel/sched/topology.c
+++ b/kernel
Hi,
This is v4 of a series of patches, authored by Mathieu (thanks for your
work and for allowing me to try to move this forward), with the intent
of fixing a long standing issue of SCHED_DEADLINE bandwidth accounting.
As originally reported by Steve [1], when hotplug and/or (certain)
cpuset
-by: Juri Lelli
---
include/linux/cpuset.h | 6 ++
kernel/cgroup/cpuset.c | 16
kernel/sched/core.c| 14 ++
3 files changed, 36 insertions(+)
diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 934633a05d20..a1970862ab8e 100644
--- a/include/linux
From: Mathieu Poirier
Introduce function partition_sched_domains_locked() by taking
the mutex locking code out of the original function. That way
the work done by partition_sched_domains_locked() can be reused
without dropping the mutex lock.
No change of functionality is introduced by this
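The split being described is the usual kernel pattern of factoring a `_locked` variant out of a mutex-taking entry point. A minimal standalone sketch of the pattern (pthread mutex and a counter stand in for the real `sched_domains_mutex` and domain rebuild; this is not the kernel code):

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t sched_domains_mutex = PTHREAD_MUTEX_INITIALIZER;
static int rebuild_count;

/*
 * Does the actual work; the caller must already hold sched_domains_mutex,
 * so adjacent locked work can be batched around this call.
 */
static void partition_sched_domains_locked(int ndoms_new)
{
	rebuild_count += ndoms_new; /* stand-in for rebuilding the domains */
}

/* Original entry point, now a thin wrapper: take the mutex, delegate. */
static void partition_sched_domains(int ndoms_new)
{
	pthread_mutex_lock(&sched_domains_mutex);
	partition_sched_domains_locked(ndoms_new);
	pthread_mutex_unlock(&sched_domains_mutex);
}
```

A caller that already holds the mutex calls the `_locked` variant directly and keeps the lock across surrounding work, which is what reusing the rebuild logic without dropping the lock requires.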
From: Mathieu Poirier
When the topology of root domains is modified by CPUset or CPUhotplug
operations, information about the current deadline bandwidth held in the
root domain is lost.
This patch addresses the issue by recalculating the lost deadline
bandwidth information by circling through the
On 08/06/18 14:54, Juri Lelli wrote:
> On 08/06/18 14:48, Vincent Guittot wrote:
> > On 8 June 2018 at 14:39, Juri Lelli wrote:
> > > Hi Vincent,
> > >
> > > On 08/06/18 14:09, Vincent Guittot wrote:
> > >> Now that we have both the
On 08/06/18 14:48, Vincent Guittot wrote:
> On 8 June 2018 at 14:39, Juri Lelli wrote:
> > Hi Vincent,
> >
> > On 08/06/18 14:09, Vincent Guittot wrote:
> >> Now that we have both the dl class bandwidth requirement and the dl class
> >> utilization, we
Hi Vincent,
On 08/06/18 14:09, Vincent Guittot wrote:
> Now that we have both the dl class bandwidth requirement and the dl class
> utilization, we can detect when CPU is fully used so we should run at max.
> Otherwise, we keep using the dl bandwidth requirement to define the
> utilization of the
On 08/06/18 12:19, Quentin Perret wrote:
> On Friday 08 Jun 2018 at 12:24:46 (+0200), Juri Lelli wrote:
> > Hi,
> >
> > On 21/05/18 15:25, Quentin Perret wrote:
> >
> > [...]
> >
> > > +static int find_energy_efficient_cpu(struct task_struct
On 21/05/18 15:25, Quentin Perret wrote:
[...]
> +static long compute_energy(struct task_struct *p, int dst_cpu)
> +{
> + long util, max_util, sum_util, energy = 0;
> + struct sched_energy_fd *sfd;
> + int cpu;
> +
> + for_each_freq_domain(sfd) {
> + max_util =
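The quoted `compute_energy()` loop is cut off by the archive, but the shape of the estimate is: per frequency domain, the highest per-CPU utilization selects the capacity state (all CPUs in the domain share one frequency), and the summed utilization scales that state's power. A toy model under those assumptions (struct layout, numbers, and names are invented, not the RFC's code):

```c
#include <assert.h>

/* One frequency domain's capacity states, lowest first (made-up numbers). */
struct cap_state {
	unsigned long cap;	/* compute capacity at this OPP */
	unsigned long power;	/* active power at this OPP */
};

/*
 * Pick the lowest capacity state able to serve the biggest per-CPU
 * utilization in the domain, then scale that state's power by how
 * busy the domain is overall (sum_util / cap).
 */
static unsigned long domain_energy(const struct cap_state *cs, int nr_cs,
				   unsigned long max_util,
				   unsigned long sum_util)
{
	int i;

	for (i = 0; i < nr_cs - 1; i++)
		if (cs[i].cap >= max_util)
			break;
	return cs[i].power * sum_util / cs[i].cap;
}
```

Summing `domain_energy()` over every frequency domain gives the total the scheduler compares between candidate destination CPUs.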
Hi,
On 21/05/18 15:25, Quentin Perret wrote:
[...]
> +static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> +{
> + unsigned long cur_energy, prev_energy, best_energy, cpu_cap, task_util;
> + int cpu, best_energy_cpu = prev_cpu;
> + struct sched_energy_fd *sfd;
On 08/06/18 09:25, Quentin Perret wrote:
> Hi Dietmar,
>
> On Thursday 07 Jun 2018 at 17:55:32 (+0200), Dietmar Eggemann wrote:
[...]
> > IMHO, part of the problem why this might be harder to understand is the fact
> > that the patches show the use of the 2. init call
> >
On 07/06/18 17:02, Quentin Perret wrote:
> On Thursday 07 Jun 2018 at 16:44:22 (+0200), Juri Lelli wrote:
> > Hi,
> >
> > On 21/05/18 15:25, Quentin Perret wrote:
> > > In order to use EAS, the task scheduler has to know about the Energy
> > > Model (E
On 07/06/18 16:19, Quentin Perret wrote:
> Hi Juri,
>
> On Thursday 07 Jun 2018 at 16:44:09 (+0200), Juri Lelli wrote:
> > On 21/05/18 15:24, Quentin Perret wrote:
[...]
> > > +static void fd_update_cs_table(struct em_cs_table *cs_table, int cpu)
> > &
Hi,
On 21/05/18 15:25, Quentin Perret wrote:
> In order to use EAS, the task scheduler has to know about the Energy
> Model (EM) of the platform. This commit extends the scheduler topology
> code to take references on the frequency domains objects of the EM
> framework for all online CPUs. Hence,
On 21/05/18 15:24, Quentin Perret wrote:
> Several subsystems in the kernel (scheduler and/or thermal at the time
> of writing) can benefit from knowing about the energy consumed by CPUs.
> Yet, this information can come from different sources (DT or firmware for
> example), in different formats,
Hi Quentin,
On 21/05/18 15:24, Quentin Perret wrote:
[...]
> +#ifdef CONFIG_ENERGY_MODEL
[...]
> +struct em_data_callback {
> + /**
> + * active_power() - Provide power at the next capacity state of a CPU
> + * @power : Active power at the capacity state (modified)
> +
On 06/06/18 15:37, Quentin Perret wrote:
> Hi Dietmar,
>
> On Wednesday 06 Jun 2018 at 15:12:15 (+0200), Dietmar Eggemann wrote:
> > > +static void fd_update_cs_table(struct em_cs_table *cs_table, int cpu)
> > > +{
> > > + unsigned long cmax = arch_scale_cpu_capacity(NULL, cpu);
> > > + int
On 05/06/18 16:11, Patrick Bellasi wrote:
[...]
> If I run an experiment with your example above, while using the
> performance governor to rule out any possible scale invariance
> difference, here is what I measure:
>
>Task1 (40ms delayed by the following Task2):
>
On 05/06/18 16:18, Peter Zijlstra wrote:
> On Mon, Jun 04, 2018 at 08:08:58PM +0200, Vincent Guittot wrote:
[...]
> > As you mentioned, scale_rt_capacity give the remaining capacity for
> > cfs and it will behave like cfs util_avg now that it uses PELT. So as
> > long as cfs util_avg <
On 05/06/18 15:01, Quentin Perret wrote:
> On Tuesday 05 Jun 2018 at 15:15:18 (+0200), Juri Lelli wrote:
> > On 05/06/18 14:05, Quentin Perret wrote:
> > > On Tuesday 05 Jun 2018 at 14:11:53 (+0200), Juri Lelli wrote:
> > > > Hi Quentin,
> > > >
>
On 05/06/18 14:05, Quentin Perret wrote:
> On Tuesday 05 Jun 2018 at 14:11:53 (+0200), Juri Lelli wrote:
> > Hi Quentin,
> >
> > On 05/06/18 11:57, Quentin Perret wrote:
> >
> > [...]
> >
> > > What about the diff below (just a quick hack t
Hi Quentin,
On 05/06/18 11:57, Quentin Perret wrote:
[...]
> What about the diff below (just a quick hack to show the idea) applied
> on tip/sched/core ?
>
> ---8<---
> diff --git a/kernel/sched/cpufreq_schedutil.c
> b/kernel/sched/cpufreq_schedutil.c
> index a8ba6d1f262a..23a4fb1c2c25 100644
On 04/06/18 09:14, Vincent Guittot wrote:
> On 4 June 2018 at 09:04, Juri Lelli wrote:
> > Hi Vincent,
> >
> > On 04/06/18 08:41, Vincent Guittot wrote:
> >> On 1 June 2018 at 19:45, Joel Fernandes wrote:
> >> > On Fri, Jun 01, 2018
Hi Vincent,
On 04/06/18 08:41, Vincent Guittot wrote:
> On 1 June 2018 at 19:45, Joel Fernandes wrote:
> > On Fri, Jun 01, 2018 at 03:53:07PM +0200, Vincent Guittot wrote:
[...]
> > IMO I feel its overkill to account dl_avg when we already have DL's running
> > bandwidth we can use. I
Commit-ID: ecda2b66e263dfd6c1d6113add19150f4e235bb3
Gitweb: https://git.kernel.org/tip/ecda2b66e263dfd6c1d6113add19150f4e235bb3
Author: Juri Lelli
AuthorDate: Wed, 30 May 2018 18:08:09 +0200
Committer: Ingo Molnar
CommitDate: Thu, 31 May 2018 12:27:13 +0200
sched/deadline: Fix missing
On 30/05/18 17:46, Quentin Perret wrote:
> Hi Vincent,
>
> On Friday 25 May 2018 at 15:12:24 (+0200), Vincent Guittot wrote:
> > Add both cfs and rt utilization when selecting an OPP for cfs tasks as rt
> > can preempt and steal cfs's running time.
> >
> > Signed-off-by: Vincent Guittot
> > ---
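The aggregation Vincent proposes reduces to: sum the cfs and rt utilization signals and clamp at CPU capacity, so that time stolen by rt preemption still pushes the selected OPP up. A toy sketch (the helper name and the flat 1024 scale are assumptions for illustration, not the patch's exact code):

```c
#include <assert.h>

#define SCHED_CAPACITY_SCALE 1024UL

/*
 * Sum the cfs and rt utilization and clamp at CPU capacity: cfs tasks
 * preempted by rt would otherwise look idle to the frequency selection.
 */
static unsigned long freq_util(unsigned long util_cfs, unsigned long util_rt)
{
	unsigned long util = util_cfs + util_rt;

	return util < SCHED_CAPACITY_SCALE ? util : SCHED_CAPACITY_SCALE;
}
```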
l test robot
Signed-off-by: Juri Lelli
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Luca Abeni
Cc: Claudio Scordino
Cc: linux-kernel@vger.kernel.org
---
This was actually first spotted by lkp-robot[1], but the fix never made
it to the list as a proper patch. Apologies. :/
[1] https://www.spini
On 30/05/18 09:37, Quentin Perret wrote:
> On Tuesday 29 May 2018 at 11:52:03 (+0200), Juri Lelli wrote:
> > On 29/05/18 09:40, Quentin Perret wrote:
> > > Hi Vincent,
> > >
> > > On Friday 25 May 2018 at 15:12:26 (+0200), Vincent Guittot wrote:
> &
On 29/05/18 09:40, Quentin Perret wrote:
> Hi Vincent,
>
> On Friday 25 May 2018 at 15:12:26 (+0200), Vincent Guittot wrote:
> > Now that we have both the dl class bandwidth requirement and the dl class
> > utilization, we can use the max of the 2 values when aggregating the
> > utilization of the
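The "max of the 2 values" idea: the admitted deadline bandwidth is the budget DL tasks are guaranteed (and may burst up to), while util_avg is what they actually consume; taking the max keeps the frequency high enough for both. A toy sketch with both inputs already in the 0..1024 capacity scale (parameter names are illustrative, not the kernel's):

```c
#include <assert.h>

/*
 * running_bw_util: utilization implied by the admitted DL bandwidth,
 * i.e. the budget that must be guaranteed.
 * dl_util_avg: PELT-tracked actual consumption of the dl class.
 * The max covers both the guarantee and the observed load.
 */
static unsigned long cpu_util_dl(unsigned long running_bw_util,
				 unsigned long dl_util_avg)
{
	return running_bw_util > dl_util_avg ? running_bw_util : dl_util_avg;
}
```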
On 29/05/18 08:48, Vincent Guittot wrote:
> On 29 May 2018 at 08:31, Juri Lelli wrote:
> > On 28/05/18 22:08, Joel Fernandes wrote:
> >> On Mon, May 28, 2018 at 12:12:34PM +0200, Juri Lelli wrote:
> >> [..]
> >> > > +
> >> > > + util =
On 28/05/18 22:08, Joel Fernandes wrote:
> On Mon, May 28, 2018 at 12:12:34PM +0200, Juri Lelli wrote:
> [..]
> > > +
> > > + util = max_t(unsigned long, util, READ_ONCE(rq->avg_dl.util_avg));
> > > +
> > > + return util;
> >
> > A
On 28/05/18 21:24, Waiman Long wrote:
> On 05/28/2018 09:12 PM, Waiman Long wrote:
> > On 05/24/2018 06:28 AM, Juri Lelli wrote:
> >> On 17/05/18 16:55, Waiman Long wrote:
> >>
> >> [...]
> >>
> >>> @@ -849,7 +860,12 @@ static void r
Hi Vincent,
On 25/05/18 15:12, Vincent Guittot wrote:
> Now that we have both the dl class bandwidth requirement and the dl class
> utilization, we can use the max of the 2 values when aggregating the
> utilization of the CPU.
>
> Signed-off-by: Vincent Guittot
> ---
> kernel/sched/sched.h | 6
On 28/05/18 16:57, Vincent Guittot wrote:
> Hi Juri,
>
> On 28 May 2018 at 12:12, Juri Lelli wrote:
> > Hi Vincent,
> >
> > On 25/05/18 15:12, Vincent Guittot wrote:
> >> Now that we have both the dl class bandwidth requirement and the dl class
> >
Hi Vincent,
On 25/05/18 15:12, Vincent Guittot wrote:
> The time spent under interrupt can be significant but it is not reflected
> in the utilization of CPU when deciding to choose an OPP. Now that we have
> access to this metric, schedutil can take it into account when selecting
> the OPP for a
On 28/05/18 14:06, Vincent Guittot wrote:
> Hi Juri,
>
> On 28 May 2018 at 12:41, Juri Lelli wrote:
> > Hi Vincent,
> >
> > On 25/05/18 15:12, Vincent Guittot wrote:
> >> The time spent under interrupt can be significant but it is not reflected
> >
On 25/05/18 11:31, Patrick Bellasi wrote:
[...]
> Right, so the problem seems to be that we "need" to call
> arch_update_cpu_topology() and we do that by calling
> partition_sched_domains() which was initially introduced by:
>
>029190c515f1 ("cpuset sched_load_balance flag")
>
> back in
On 25/05/18 13:35, Dietmar Eggemann wrote:
[...]
>
> Looks good to me. Probably especially helpful when setting up exclusive
> cpusets.
>
> Juno with big and little exclusive cpuset:
>
> ...
> [ 124.231333] CPU1 attaching sched-domain(s):
> [ 124.235482] domain-0: span=1-2 level=MC
> [
Commit-ID: bf5015a50f1fdb248b48405b67cae24dc02605d6
Gitweb: https://git.kernel.org/tip/bf5015a50f1fdb248b48405b67cae24dc02605d6
Author: Juri Lelli
AuthorDate: Thu, 24 May 2018 17:29:36 +0200
Committer: Ingo Molnar
CommitDate: Fri, 25 May 2018 08:03:38 +0200
sched/topology: Clarify
span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4
}, 5:{ span=5 }
CPU1 attaching sched-domain(s):
domain-0: span=0-5 level=MC
groups: 1:{ span=1 }, 2:{ span=2 }, 3:{ span=3 }, 4:{ span=4 }, 5:{ span=5
}, 0:{ span=0 }
[...]
root domain span: 0-5 (max cpu_capacity = 1024)
Signed-off-by:
On 24/05/18 11:09, Waiman Long wrote:
> On 05/24/2018 10:36 AM, Juri Lelli wrote:
> > On 17/05/18 16:55, Waiman Long wrote:
> >
> > [...]
> >
> >> + A parent cgroup cannot distribute all its CPUs to child
> >> + scheduling domain cgroups unless
On 17/05/18 16:55, Waiman Long wrote:
[...]
> + A parent cgroup cannot distribute all its CPUs to child
> + scheduling domain cgroups unless its load balancing flag is
> + turned off.
> +
> + cpuset.sched.load_balance
> + A read-write single value file which exists on non-root
>