On Tue, 6 Aug 2019 at 18:07, Peter Zijlstra wrote:
>
> On Thu, Aug 01, 2019 at 04:40:22PM +0200, Vincent Guittot wrote:
> > runnable load has been introduced to take into account the case
> > where blocked load biases the load balance decision which was selecting
> > under
On Tue, 6 Aug 2019 at 19:17, Valentin Schneider
wrote:
>
> Second batch, get it while it's hot...
>
> On 01/08/2019 15:40, Vincent Guittot wrote:
> [...]
> > @@ -7438,19 +7453,53 @@ static int detach_tasks(struct lb_env *env)
> > i
On Tue, 6 Aug 2019 at 17:56, Peter Zijlstra wrote:
>
> On Thu, Aug 01, 2019 at 04:40:20PM +0200, Vincent Guittot wrote:
> > The load_balance algorithm contains some heuristics which have become
> > meaningless since the rework of metrics and the introduction of PELT.
> &g
On Mon, 5 Aug 2019 at 19:07, Valentin Schneider
wrote:
>
> Hi Vincent,
>
> Here's another batch of comments, still need to go through some more of it.
>
> On 01/08/2019 15:40, Vincent Guittot wrote:
> > The load_balance algorithm contains some heuristics which have beco
match what the scaling function does.
>
> Signed-off-by: Qais Yousef
FWIW
Acked-by: Vincent Guittot
> ---
> kernel/sched/cpufreq_schedutil.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c
> b/kernel/
utilization is used to detect a misfit task but the load is then used to
select the task on the CPU, which can lead to selecting a small task with
a high weight instead of the task that triggered the misfit migration.
Signed-off-by: Vincent Guittot
---
Keep tracking load instead of utilization
On Thu, 1 Aug 2019 at 18:27, Valentin Schneider
wrote:
>
> On 01/08/2019 15:40, Vincent Guittot wrote:
> > @@ -8261,7 +8261,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
> >* If we have more than one misfit sg go with the
> >
:
- update_sd_pick_busiest() selects the busiest sched_group.
- find_busiest_group() checks if there is an imbalance between the local and
  busiest groups.
- calculate_imbalance() decides what has to be moved.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 581
being conservative and taking into account the sleeping
tasks that might wakeup on the cpu.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 23 ++-
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f05f1ad
utilization is used to detect a misfit task but the load is then used to
select the task on the CPU, which can lead to selecting a small task with
a high weight instead of the task that triggered the misfit migration.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 28
Rename sum_nr_running to sum_h_nr_running because it effectively tracks
cfs->h_nr_running so we can use sum_nr_running to track rq->nr_running
when needed.
There are no functional changes.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 34 +-
in the statistics and use it to detect such
situation.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a8681c3..f05f1ad 100644
--- a/kernel/sched/fair.c
+++ b
When there is only 1 cpu per group, using the idle cpus to evenly spread
tasks doesn't make sense and nr_running is a better metric.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 40
1 file changed, 28 insertions(+), 12 deletions(-)
diff
clean up load_balance and remove meaningless calculations and fields before
adding the new algorithm.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 105 +---
1 file changed, 1 insertion(+), 104 deletions(-)
diff --git a/kernel/sched/fair.c
consolidate the calculation of imbalance in calculate_imbalance().
There are no functional changes.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 63 ++---
1 file changed, 16 insertions(+), 47 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel
mments
Vincent Guittot (8):
sched/fair: clean up asym packing
sched/fair: rename sum_nr_running to sum_h_nr_running
sched/fair: remove meaningless imbalance calculation
sched/fair: rework load_balance
sched/fair: use rq->nr_running when balancing load
sched/fair: use load instead of runn
On Wed, 31 Jul 2019 at 15:44, Srikar Dronamraju
wrote:
>
> * Vincent Guittot [2019-07-26 16:42:53]:
>
> > On Fri, 26 Jul 2019 at 15:59, Srikar Dronamraju
> > wrote:
> > > > @@ -7361,19 +7357,46 @@ static int detach_tasks(struct lb_env *env)
> > > &g
On Fri, 26 Jul 2019 at 16:01, Valentin Schneider
wrote:
>
> On 26/07/2019 13:30, Vincent Guittot wrote:
> >> We can avoid this entirely by going straight for an active balance when
> >> we are balancing misfit tasks (which we really should be doing TBH).
> >
On Fri, 26 Jul 2019 at 15:59, Srikar Dronamraju
wrote:
>
> >
> > The type of sched_group has been extended to better reflect the type of
> > imbalance. We now have :
> > group_has_spare
> > group_fully_busy
> > group_misfit_task
> > group_asym_capacity
> >
On Fri, 26 Jul 2019 at 12:41, Valentin Schneider
wrote:
>
> On 26/07/2019 10:01, Vincent Guittot wrote:
> >> Huh, interesting. Why go for utilization?
> >
> > Mainly because that's what is used to detect a misfit task and not the load
> >
> >>
> >&
On Thu, 25 Jul 2019 at 19:17, Valentin Schneider
wrote:
>
> Hi Vincent,
>
> first batch of questions/comments here...
>
> On 19/07/2019 08:58, Vincent Guittot wrote:
> [...]
> > kernel/sched/fair.c | 539
> >
On Fri, 26 Jul 2019 at 04:17, Srikar Dronamraju
wrote:
>
> * Vincent Guittot [2019-07-19 09:58:22]:
>
> > sum_nr_running will track rq->nr_running task and sum_h_nr_running
> > will track cfs->h_nr_running so we can use both to detect when other
> > scheduling
Commit-ID: f6cad8df6b30a5d2bbbd2e698f74b4cafb9fb82b
Gitweb: https://git.kernel.org/tip/f6cad8df6b30a5d2bbbd2e698f74b4cafb9fb82b
Author: Vincent Guittot
AuthorDate: Mon, 1 Jul 2019 17:47:02 +0200
Committer: Ingo Molnar
CommitDate: Thu, 25 Jul 2019 15:51:52 +0200
sched/fair: Fix
On Tue, 16 Jul 2019 at 02:56, Saravana Kannan wrote:
>
> On Mon, Jul 15, 2019 at 1:16 AM Vincent Guittot
> wrote:
> >
> > On Tue, 9 Jul 2019 at 21:03, Saravana Kannan wrote:
> > >
> > > On Tue, Jul 9, 2019 at 12:25 AM Vincent Guittot
> > > wr
On Fri, 19 Jul 2019 at 15:12, Peter Zijlstra wrote:
>
> On Fri, Jul 19, 2019 at 09:58:23AM +0200, Vincent Guittot wrote:
>
> > @@ -8029,17 +8063,24 @@ static inline void update_sg_lb_stats(struct lb_env
> > *env,
> > }
> > }
> >
>
On Fri, 19 Jul 2019 at 14:54, Peter Zijlstra wrote:
>
> On Fri, Jul 19, 2019 at 09:58:23AM +0200, Vincent Guittot wrote:
>
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 67f0acd..472959df 100644
> > --- a/kernel/sched/fair.c
> > +++ b/ke
On Fri, 19 Jul 2019 at 15:06, Peter Zijlstra wrote:
>
> On Fri, Jul 19, 2019 at 09:58:23AM +0200, Vincent Guittot wrote:
> > @@ -7887,7 +7908,7 @@ static inline int sg_imbalanced(struct sched_group
> > *group)
> > static inline bool
> > group_has_capacity(struct l
On Fri, 19 Jul 2019 at 15:22, Peter Zijlstra wrote:
>
> On Fri, Jul 19, 2019 at 09:58:23AM +0200, Vincent Guittot wrote:
> > enum group_type {
> > - group_other = 0,
> > + group_has_spare = 0,
> > + group_fully_busy,
> > group_misfit
On Fri, 19 Jul 2019 at 14:52, Peter Zijlstra wrote:
>
> On Fri, Jul 19, 2019 at 09:58:23AM +0200, Vincent Guittot wrote:
> > @@ -7060,12 +7048,21 @@ static unsigned long __read_mostly
> > max_load_balance_interval = HZ/10;
> > enum fbq_type { regular, remote, all };
On Fri, 19 Jul 2019 at 14:51, Peter Zijlstra wrote:
>
> On Fri, Jul 19, 2019 at 09:58:22AM +0200, Vincent Guittot wrote:
> > sum_nr_running will track rq->nr_running task and sum_h_nr_running
> > will track cfs->h_nr_running so we can use both to detect when other
> >
being conservative and taking into account the sleeping
tasks that might wakeup on the cpu.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 21 +
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 472959df
consolidate the calculation of imbalance in calculate_imbalance().
There are no functional changes.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 63 ++---
1 file changed, 16 insertions(+), 47 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel
sum_nr_running will track rq->nr_running and sum_h_nr_running
will track cfs->h_nr_running so we can use both to detect when another
scheduling class is running and preempts CFS.
There are no functional changes.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.
:
- update_sd_pick_busiest() selects the busiest sched_group.
- find_busiest_group() checks if there is an imbalance between the local and
  busiest groups.
- calculate_imbalance() decides what has to be moved.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 539
+/-4.87% 2310.451 +/-2.45% (+11.61%)
256 groups 1277.402 +/-3.03% 1691.865 +/-6.34% (+32.45%)
tip/sched/core sha1:
af24bde8df20('sched/uclamp: Add uclamp support to energy_compute()')
Vincent Guittot (5):
sched/fair: clean up asym packing
sched/fair: rename sum_nr_running
When there is only 1 cpu per group, using the idle cpus to evenly spread
tasks doesn't make sense and nr_running is a better metric.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 42 +-
1 file changed, 29 insertions(+), 13 deletions(-)
diff
On Tue, 9 Jul 2019 at 21:03, Saravana Kannan wrote:
>
> On Tue, Jul 9, 2019 at 12:25 AM Vincent Guittot
> wrote:
> >
> > On Sun, 7 Jul 2019 at 23:48, Saravana Kannan wrote:
> > >
> > > On Thu, Jul 4, 2019 at 12:12 AM Vincent Guittot
> > > wr
On Tue, 9 Jul 2019 at 17:42, Chris Redpath wrote:
>
> On 09/07/2019 16:36, Vincent Guittot wrote:
> > Hi Chris,
> >
> >>
> >> We enter this code quite often in our testing, most individual runs of a
> >> test which has small tasks involved have
Hi Chris,
On Tue, 9 Jul 2019 at 17:23, Chris Redpath wrote:
>
> Hi Peter,
>
> On 09/07/2019 14:50, Peter Zijlstra wrote:
> > On Tue, Jul 09, 2019 at 12:57:59PM +0100, Chris Redpath wrote:
> >> The ancient workaround to avoid the cost of updating rq clocks in the
> >> middle of a migration causes
On Sun, 7 Jul 2019 at 23:48, Saravana Kannan wrote:
>
> On Thu, Jul 4, 2019 at 12:12 AM Vincent Guittot
> wrote:
> >
> > On Wed, 3 Jul 2019 at 23:33, Saravana Kannan wrote:
> > >
> > > On Tue, Jul 2, 2019 at 11:45 PM Vincent Guittot
> > > wr
On Fri, 28 Jun 2019 at 22:49, Rik van Riel wrote:
>
> The way the time slice length is currently calculated, not only do high
> priority tasks get longer time slices than low priority tasks, but due
> to fixed point math, low priority tasks could end up with a zero length
> time slice. This can
On Tue, 2 Jul 2019 at 16:29, Valentin Schneider
wrote:
>
>
>
> On 02/07/2019 11:00, Vincent Guittot wrote:
> >> Does that want a
> >>
> >> Cc: sta...@vger.kernel.org
> >> Fixes: afdeee0510db ("sched: Fix imbalance flag reset")
> >
On Fri, 28 Jun 2019 at 22:49, Rik van Riel wrote:
>
> Reducing the overhead of the CPU controller is achieved by not walking
> all the sched_entities every time a task is enqueued or dequeued.
>
> One of the things being checked every single time is whether the cfs_rq
> is on the
On Wed, 3 Jul 2019 at 23:33, Saravana Kannan wrote:
>
> On Tue, Jul 2, 2019 at 11:45 PM Vincent Guittot
> wrote:
> >
> > On Wed, 3 Jul 2019 at 03:10, Saravana Kannan wrote:
> > >
> > > Interconnect paths can have different performance points. Now that OPP
On Wed, 3 Jul 2019 at 03:10, Saravana Kannan wrote:
>
> Interconnect paths can have different performance points. Now that OPP
> framework supports bandwidth OPP tables, add OPP table support for
> interconnects.
>
> Devices can use the interconnect-opp-table DT property to specify OPP
> tables
On Tue, 2 Jul 2019 at 11:34, Valentin Schneider
wrote:
>
> On 01/07/2019 16:47, Vincent Guittot wrote:
> > The load_balance() has a dedicated mechanism to detect when an imbalance
> > is due to CPU affinity and must be handled at parent level. In this case,
> > the imbalan
in the situation described above and
everything looks balanced this time so the imbalance field is immediately
cleared.
The imbalance field should not be cleared if there is no other task to move
when the imbalance is detected.
Signed-off-by: Vincent Guittot
---
Sorry, I sent the patch before
in the situation described above and
everything looks balanced this time so the imbalance field is immediately
cleared.
The imbalance field should not be cleared if there is no other task to move
when the imbalance is detected.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 5 +++--
1
On Fri, 28 Jun 2019 at 16:10, Patrick Bellasi wrote:
>
> On 28-Jun 15:51, Vincent Guittot wrote:
> > On Fri, 28 Jun 2019 at 14:38, Peter Zijlstra wrote:
> > >
> > > On Fri, Jun 28, 2019 at 11:08:14AM +0100, Patrick Bellasi wrote:
> > > > On 26-Jun
On Fri, 28 Jun 2019 at 14:38, Peter Zijlstra wrote:
>
> On Fri, Jun 28, 2019 at 11:08:14AM +0100, Patrick Bellasi wrote:
> > On 26-Jun 13:40, Vincent Guittot wrote:
> > > Hi Patrick,
> > >
> > > On Thu, 20 Jun 2019 at 17:06, Patrick Bellasi
> > >
Hi Patrick,
On Thu, 20 Jun 2019 at 17:06, Patrick Bellasi wrote:
>
> The estimated utilization for a task is currently defined based on:
> - enqueued: the utilization value at the end of the last activation
> - ewma: an exponential moving average which samples are the enqueued
> values
>
Commit-ID: 8ec59c0f5f4966f89f4e3e3cab81710c7fa959d0
Gitweb: https://git.kernel.org/tip/8ec59c0f5f4966f89f4e3e3cab81710c7fa959d0
Author: Vincent Guittot
AuthorDate: Mon, 17 Jun 2019 17:00:17 +0200
Committer: Ingo Molnar
CommitDate: Mon, 24 Jun 2019 19:23:39 +0200
sched/topology: Remove
On Tue, 18 Jun 2019 at 11:34, Peter Zijlstra wrote:
>
> On Mon, Jun 17, 2019 at 06:07:29PM +0100, Valentin Schneider wrote:
> > Hi,
> >
> > On 17/06/2019 16:00, Vincent Guittot wrote:
> > > struct sched_domain *sd parameter is not used anymore in
> > >
struct sched_domain *sd parameter is not used anymore in
arch_scale_cpu_capacity() so we can remove it.
Signed-off-by: Vincent Guittot
---
arch/arm/kernel/topology.c | 2 +-
drivers/base/arch_topology.c | 6 +++---
include/linux/arch_topology.h| 2 +-
include/linux
On Thu, 6 Jun 2019 at 10:34, Dietmar Eggemann wrote:
>
> On 6/6/19 10:20 AM, Vincent Guittot wrote:
> > On Thu, 6 Jun 2019 at 09:49, Quentin Perret wrote:
> >>
> >> Hi Vincent,
> >>
> >> On Thursday 06 Jun 2019 at 09:05:16 (+0200), Vincent Guitto
On Thu, 6 Jun 2019 at 09:49, Quentin Perret wrote:
>
> Hi Vincent,
>
> On Thursday 06 Jun 2019 at 09:05:16 (+0200), Vincent Guittot wrote:
> > Hi Quentin,
> >
> > On Wed, 5 Jun 2019 at 19:21, Quentin Perret wrote:
> > >
> > > On Friday 17
Hi Quentin,
On Wed, 5 Jun 2019 at 19:21, Quentin Perret wrote:
>
> On Friday 17 May 2019 at 14:55:19 (-0700), Stephen Boyd wrote:
> > Quoting Amit Kucheria (2019-05-16 04:54:45)
> > > (cc'ing Andy's correct email address)
> > >
> > > On Wed, May 15, 2019 at 2:46 AM Stephen Boyd wrote:
> > > >
>
On Mon, 3 Jun 2019 at 20:15, Valentin Schneider
wrote:
>
> Hi,
>
> On 03/06/2019 15:17, Vincent Guittot wrote:
> > Clean up asym packing to follow the default load balance behavior:
> > - classify the group by creating a group_asym_capacity field.
>
> Being nitpic
consolidate the calculation of imbalance in calculate_imbalance().
There are no functional changes.
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 63 ++---
1 file changed, 16 insertions(+), 47 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel
consolidate the calculation of imbalance in calculate_imbalance().
There are no functional changes.
Signed-off-by: Vincent Guittot
---
This is a simple cleanup to gather all imbalance calculations in
calculate_imbalance()
before a deeper rework of the load_balance.
kernel/sched/fair.c | 63
others_have_blocked() for !CONFIG_NO_HZ_COMMON so that the
> > NOHZ-specific blocks of update_blocked_averages() become no-ops and
> > the 'done' variable gets optimised out.
> >
> > No change in functionality intended.
> >
> > Reported-by: Qian Cai
> > Signed-off-b
On Mon, 27 May 2019 at 08:21, Dietmar Eggemann wrote:
>
> Since sg_lb_stats::sum_weighted_load is now identical with
> sg_lb_stats::group_load remove it and replace its use case
> (calculating load per task) with the latter.
>
> Signed-off-by: Dietmar Eggemann
FWIW
Acked-b
On Mon, 27 May 2019 at 08:21, Dietmar Eggemann wrote:
>
> This is done to align the per cpu (i.e. per rq) load with the util
> counterpart (cpu_util(int cpu)). The term 'weighted' is not needed
> since there is no 'unweighted' load to distinguish it from.
>
> Signed-off-by: Dietmar Eggemann
>
Hi Song,
On Tue, 14 May 2019 at 22:58, Song Liu wrote:
>
> Hi Vincent,
>
[snip]
> >
> > Here are some more results with both Viresh's patch and the cpu.headroom
> > set. In these tests, the side job runs with SCHED_IDLE, so we get benefit
> > of Viresh's patch.
> >
> > We collected another
Hi Song,
On Thu, 9 May 2019 at 23:54, Song Liu wrote:
>
> On Thu, Apr 25, 2019 at 5:38 AM Viresh Kumar wrote:
> >
> > Hi,
> >
> > Here is another attempt to get some benefit out of the sched-idle
> > policy. The previous version [1] focused on getting better power numbers
> > and this version
On Tue, 7 May 2019 at 15:48, Quentin Perret wrote:
>
> Hi Luca,
>
> On Monday 06 May 2019 at 06:48:31 (+0200), Luca Abeni wrote:
> > diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> > index edfcf8d982e4..646d6d349d53 100644
> > --- a/drivers/base/arch_topology.c
> > +++
Hi Song,
On Tue, 30 Apr 2019 at 08:11, Song Liu wrote:
>
>
>
> > On Apr 29, 2019, at 8:24 AM, Vincent Guittot
> > wrote:
> >
> > Hi Song,
> >
> > On Sun, 28 Apr 2019 at 21:47, Song Liu wrote:
> >>
> >> Hi Morten and V
Hi Song,
On Sun, 28 Apr 2019 at 21:47, Song Liu wrote:
>
> Hi Morten and Vincent,
>
> > On Apr 22, 2019, at 6:22 PM, Song Liu wrote:
> >
> > Hi Vincent,
> >
> >> On Apr 17, 2019, at 5:56 AM, Vincent Guittot
> >> wrote:
> >>
> &
On Thu, 25 Apr 2019 at 19:44, Ingo Molnar wrote:
>
>
> * Ingo Molnar wrote:
>
> >
> > * Peter Zijlstra wrote:
> >
> > > On Wed, Apr 17, 2019 at 08:29:32PM +0200, Ingo Molnar wrote:
> > > > Assuming PeterZ & Rafael & Quentin doesn't hate the whole thermal load
> > > > tracking approach.
> > >
>
On Thu, 25 Apr 2019 at 12:57, Quentin Perret wrote:
>
> On Tuesday 16 Apr 2019 at 15:38:39 (-0400), Thara Gopinath wrote:
> > +/* Per cpu structure to keep track of Thermal Pressure */
> > +struct thermal_pressure {
> > + unsigned long scale; /* scale reflecting average cpu max capacity*/
> >
On Thu, 25 Apr 2019 at 12:45, Quentin Perret wrote:
>
> On Tuesday 23 Apr 2019 at 18:38:46 (-0400), Thara Gopinath wrote:
> > I think there is one major difference between user-defined frequency
> > constraints and frequency constraints due to thermal events in terms of
> > the time period the
On Wed, 10 Apr 2019 at 21:43, Song Liu wrote:
>
> Hi Morten,
>
> > On Apr 10, 2019, at 4:59 AM, Morten Rasmussen
> > wrote:
> >
> >
> > The bit that isn't clear to me, is _why_ adding idle cycles helps your
> > workload. I'm not convinced that adding headroom gives any latency
> > improvements
On Mon, 4 Mar 2019 at 17:48, Quentin Perret wrote:
>
> On Monday 04 Mar 2019 at 16:26:16 (+0100), Peter Zijlstra wrote:
> > On Mon, Mar 04, 2019 at 01:58:12PM +, Quentin Perret wrote:
> > > You could also update the values in sugov_get_util() at the cost of a
> > > small overhead to compute
On Thu, 21 Feb 2019 at 10:35, Rafael J. Wysocki wrote:
>
> On Thu, Feb 21, 2019 at 8:59 AM Vincent Guittot
> wrote:
> >
> > When rpm_resume() deactivates the autosuspend timer, it should only try
> > to cancel the hrtimer but not wait for the handler to finish
-runtime: Switch autosuspend over to using hrtimers")
Reported-by: Sunzhaosheng Sun(Zhaosheng)
Signed-off-by: Vincent Guittot
---
drivers/base/power/runtime.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
eep a correct list ordering: https://lkml.org/lkml/2019/2/6/499
With these 3 patches, the slowdown should disappear and the list
ordering will stay correct
Regards,
Vincent
>
> -- Gabriel
>
> On Fri, Jan 25, 2019 at 6:31 AM Vincent Guittot
> wrote:
>>
>> Hi Sargun,
>&g
Commit-ID: 31bc6aeaab1d1de8959b67edbed5c7a4b3cdbe7c
Gitweb: https://git.kernel.org/tip/31bc6aeaab1d1de8959b67edbed5c7a4b3cdbe7c
Author: Vincent Guittot
AuthorDate: Wed, 6 Feb 2019 17:14:21 +0100
Committer: Ingo Molnar
CommitDate: Mon, 11 Feb 2019 08:02:12 +0100
sched/fair: Optimize
Commit-ID: 039ae8bcf7a5f4476f4487e6bf816885fb3fb617
Gitweb: https://git.kernel.org/tip/039ae8bcf7a5f4476f4487e6bf816885fb3fb617
Author: Vincent Guittot
AuthorDate: Wed, 6 Feb 2019 17:14:22 +0100
Committer: Ingo Molnar
CommitDate: Mon, 11 Feb 2019 08:02:13 +0100
sched/fair: Fix O
On Fri, 8 Feb 2019 at 17:51, Peter Zijlstra wrote:
>
> On Fri, Feb 08, 2019 at 05:47:53PM +0100, Vincent Guittot wrote:
> > On Fri, 8 Feb 2019 at 17:30, Peter Zijlstra wrote:
> > > On Fri, Feb 08, 2019 at 04:44:53PM +0100, Vincent Guittot wrote:
> > > > O
On Fri, 8 Feb 2019 at 17:30, Peter Zijlstra wrote:
>
> On Fri, Feb 08, 2019 at 04:44:53PM +0100, Vincent Guittot wrote:
> > On Fri, 8 Feb 2019 at 16:40, Peter Zijlstra wrote:
> > >
> > >
> > > Argh head hurts!!
> > >
> > > On Wed
On Fri, 8 Feb 2019 at 17:31, Peter Zijlstra wrote:
>
> On Wed, Feb 06, 2019 at 05:14:21PM +0100, Vincent Guittot wrote:
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -346,6 +346,18 @@ static inline bool list_add_leaf_cfs_rq(struct cfs_rq
> > *c
On Fri, 8 Feb 2019 at 16:40, Peter Zijlstra wrote:
>
>
> Argh head hurts!!
>
> On Wed, Feb 06, 2019 at 05:14:21PM +0100, Vincent Guittot wrote:
> > @@ -4438,6 +4450,10 @@ static int tg_unthrottle_up(struct task_group *tg,
> > void *data)
> >
Hi Gilad,
On Wed, 6 Feb 2019 at 17:40, Gilad Ben-Yossef wrote:
>
> Hi all,
>
> A regression was spotted in the ccree driver running on Arm 32 bit
> causing a kernel panic during the crypto API self test phase (panic
> messages included with this message) happening in the PM resume
> callback
On Wed, 6 Feb 2019 at 17:14, Vincent Guittot wrote:
>
> This patchset adds missing pieces in the management of leaf_cfs_rq_list
> to ensure that cfs_rqs are always correctly ordered before
> re-enabling ("sched/fair: Fix O(nr_cgroups) in load balance path")
I
h")
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 43 ++-
1 file changed, 34 insertions(+), 9 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index badf8173..c6167bb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair
This patchset adds missing pieces in the management of leaf_cfs_rq_list
to ensure that cfs_rqs are always correctly ordered before
re-enabling ("sched/fair: Fix O(nr_cgroups) in load balance path")
Vincent Guittot (2):
sched/fair: optimization of update_blocked_averages()
sched/f
throttled cfs_rqs are now removed from the list, so we can remove the
associated test in update_blocked_averages().
Signed-off-by: Vincent Guittot
---
kernel/sched/fair.c | 24 +++-
1 file changed, 19 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sc
On Tue, 5 Feb 2019 at 09:10, Ulf Hansson wrote:
>
> On Mon, 4 Feb 2019 at 17:25, Vincent Guittot
> wrote:
> >
> > Similarly to what happened whith autosuspend, a deadlock has been raised
> > with runtime accounting in the sequence:
Update the accounting_timestamp field only when PM runtime is enabled
and don't forget to account the last state before disabling it.
Suggested-by: Ulf Hansson
Signed-off-by: Vincent Guittot
---
drivers/base/power/runtime.c | 18 ++
1 file changed, 10 insertions(+), 8 deletions
nting")
Reported-by: Biju Das
Signed-off-by: Vincent Guittot
---
drivers/base/power/runtime.c | 17 +
1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index 1caa394..50740da 100644
--- a/drivers/base/power
Fix time accounting, which has the same lock constraint as using hrtimers,
and update accounting_timestamp only when useful.
Vincent Guittot (2):
PM-runtime: move runtime accounting on ktime_get_mono_fast_ns()
PM-runtime: update time accounting only when enabled
drivers/base/power/runtime.c
Commit-ID: f6783319737f28e4436a69611853a5a098cbe974
Gitweb: https://git.kernel.org/tip/f6783319737f28e4436a69611853a5a098cbe974
Author: Vincent Guittot
AuthorDate: Wed, 30 Jan 2019 06:22:47 +0100
Committer: Ingo Molnar
CommitDate: Mon, 4 Feb 2019 09:14:48 +0100
sched/fair: Fix
Commit-ID: 10a35e6812aa0953f02a956c499d23fe4e68af4a
Gitweb: https://git.kernel.org/tip/10a35e6812aa0953f02a956c499d23fe4e68af4a
Author: Vincent Guittot
AuthorDate: Wed, 23 Jan 2019 16:26:54 +0100
Committer: Ingo Molnar
CommitDate: Mon, 4 Feb 2019 09:13:21 +0100
sched/pelt: Skip
Commit-ID: 23127296889fe84b0762b191b5d041e8ba6f2599
Gitweb: https://git.kernel.org/tip/23127296889fe84b0762b191b5d041e8ba6f2599
Author: Vincent Guittot
AuthorDate: Wed, 23 Jan 2019 16:26:53 +0100
Committer: Ingo Molnar
CommitDate: Mon, 4 Feb 2019 09:13:21 +0100
sched/fair: Update
Commit-ID: 62478d9911fab9694c195f0ca8e4701de09be98e
Gitweb: https://git.kernel.org/tip/62478d9911fab9694c195f0ca8e4701de09be98e
Author: Vincent Guittot
AuthorDate: Wed, 23 Jan 2019 16:26:52 +0100
Committer: Ingo Molnar
CommitDate: Mon, 4 Feb 2019 09:13:21 +0100
sched/fair: Move
Hi Wei,
On Fri, 1 Feb 2019 at 18:00, Wei Xu wrote:
>
> Hi Vincent,
>
> On 1/14/2019 8:24 AM, Vincent Guittot wrote:
> > The SDcard detection of hikey960 is active low so cd-inverted is wrong.
> > Instead of adding cd-inverted, we should better set correctly cd-gpios
&g
On Fri, 1 Feb 2019 at 16:48, Vincent Guittot wrote:
>
> On Fri, 1 Feb 2019 at 16:44, Biju Das wrote:
> >
> > Hi Vincent,
> >
> > Thanks for the feedback. Instead of reverting. I just modified the patch
> > like this and it fixed the issue.
> >
>
> From: linux-renesas-soc-ow...@vger.kernel.org > ow...@vger.kernel.org> On Behalf Of Vincent Guittot
> > Sent: 01 February 2019 15:29
> > To: Biju Das
> > Cc: Rafael J. Wysocki ; Linux PM > p...@vger.kernel.org>; Linux Kernel Mailing List > ker...@vger.kerne
Le Friday 01 Feb 2019 à 16:28:54 (+0100), Vincent Guittot a écrit :
> On Fri, 1 Feb 2019 at 16:02, Biju Das wrote:
> >
> > Hi Vincent,
> >
> > I have rebased my kernel to "next-20190201". Still I am seeing dead lock.
> >
> > Am I missing any
194.057208] multi_cpu_stop+0x8c/0x140
> [ 194.060970] cpu_stopper_thread+0xac/0x120
> [ 194.065087] smpboot_thread_fn+0x1ac/0x2c8
> [ 194.069198] kthread+0x128/0x130
> [ 194.072439] ret_from_fork+0x10/0x18
>
>
> Regards,
> Biju
>
> > -Original Message-