Re: [PATCH] sched/rt: Clean up usage of rt_task()

2024-05-15 Thread Phil Auld
On Wed, May 15, 2024 at 01:06:13PM +0100 Qais Yousef wrote: > On 05/15/24 07:20, Phil Auld wrote: > > On Wed, May 15, 2024 at 10:32:38AM +0200 Peter Zijlstra wrote: > > > On Tue, May 14, 2024 at 07:58:51PM -0400, Phil Auld wrote: > > > > > > > > Hi Q

Re: [PATCH] sched/rt: Clean up usage of rt_task()

2024-05-15 Thread Phil Auld
On Wed, May 15, 2024 at 10:32:38AM +0200 Peter Zijlstra wrote: > On Tue, May 14, 2024 at 07:58:51PM -0400, Phil Auld wrote: > > > > Hi Qais, > > > > On Wed, May 15, 2024 at 12:41:12AM +0100 Qais Yousef wrote: > > > rt_task() checks if a task has RT priority.

Re: [PATCH] sched/rt: Clean up usage of rt_task()

2024-05-14 Thread Phil Auld
Hi Qais, On Wed, May 15, 2024 at 12:41:12AM +0100 Qais Yousef wrote: > rt_task() checks if a task has RT priority. But depends on your > dictionary, this could mean it belongs to RT class, or is a 'realtime' > task, which includes RT and DL classes. > > Since this has caused some confusion
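
For context on the naming confusion: the priority-range helpers nest, so the historical rt_task() also matched deadline tasks. A simplified sketch of the relevant tests (shaped like include/linux/sched/prio.h and friends, not the patch itself):

    /* DL tasks run below prio 0; RT tasks below prio 100. A bare
     * "prio < MAX_RT_PRIO" check is therefore true for both classes. */
    #define MAX_DL_PRIO     0
    #define MAX_RT_PRIO     100

    static inline int dl_prio(int prio)
    {
            return unlikely(prio < MAX_DL_PRIO);
    }

    static inline int rt_prio(int prio)
    {
            return unlikely(prio < MAX_RT_PRIO);    /* DL tasks match too */
    }

The cleanup under discussion gives the two meanings distinct helper names so each caller says which one it actually wants.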

Re: [PATCH 2/2] sched/fair: Relax task_hot() for misfit tasks

2021-04-19 Thread Phil Auld
On Mon, Apr 19, 2021 at 06:17:47PM +0100 Valentin Schneider wrote: > On 19/04/21 08:59, Phil Auld wrote: > > On Fri, Apr 16, 2021 at 10:43:38AM +0100 Valentin Schneider wrote: > >> On 15/04/21 16:39, Rik van Riel wrote: > >> > On Thu, 2021-04-15 at 18:58 +

Re: [PATCH 2/2] sched/fair: Relax task_hot() for misfit tasks

2021-04-19 Thread Phil Auld
On Fri, Apr 16, 2021 at 10:43:38AM +0100 Valentin Schneider wrote: > On 15/04/21 16:39, Rik van Riel wrote: > > On Thu, 2021-04-15 at 18:58 +0100, Valentin Schneider wrote: > >> Consider the following topology: > >> > >> Long story short, preempted misfit tasks are affected by task_hot(), > >>

Re: [PATCH v4 1/4] sched/fair: Introduce primitives for CFS bandwidth burst

2021-03-18 Thread Phil Auld
On Thu, Mar 18, 2021 at 09:26:58AM +0800 changhuaixin wrote: > > > > On Mar 17, 2021, at 4:06 PM, Peter Zijlstra wrote: > > > > On Wed, Mar 17, 2021 at 03:16:18PM +0800, changhuaixin wrote: > > > >>> Why do you allow such a large burst? I would expect something like: > >>> > >>> if (burst
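
The quoted check is cut off mid-expression; for orientation, the burst validation that eventually landed with the burstable CFS controller looks roughly like this (reconstructed from the merged code, not from this truncated quote):

    /* tg_set_cfs_bandwidth(): reject bursts that could let one period's
     * runnable quota exceed the hard-lockup guard rail. */
    if (quota != RUNTIME_INF &&
        (burst > quota || quota + burst > max_cfs_runtime))
            return -EINVAL;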

Re: [PATCH v1] sched/fair: update_pick_idlest() Select group with lowest group_util when idle_cpus are equal

2020-11-09 Thread Phil Auld
On Mon, Nov 09, 2020 at 03:38:15PM +0000 Mel Gorman wrote: > On Mon, Nov 09, 2020 at 10:24:11AM -0500, Phil Auld wrote: > > Hi, > > > > On Fri, Nov 06, 2020 at 04:00:10PM +0000 Mel Gorman wrote: > > > On Fri, Nov 06, 2020 at 02:33:56PM +0100, Vincent Guittot wro


Re: [PATCH v1] sched/fair: update_pick_idlest() Select group with lowest group_util when idle_cpus are equal

2020-11-09 Thread Phil Auld
Hi, On Fri, Nov 06, 2020 at 04:00:10PM +0000 Mel Gorman wrote: > On Fri, Nov 06, 2020 at 02:33:56PM +0100, Vincent Guittot wrote: > > On Fri, 6 Nov 2020 at 13:03, Mel Gorman wrote: > > > > > > On Wed, Nov 04, 2020 at 09:42:05AM +0000, Mel Gorman wrote: > > > > While it's possible that some other

Re: [PATCH v1] sched/fair: update_pick_idlest() Select group with lowest group_util when idle_cpus are equal

2020-11-02 Thread Phil Auld
Hi, On Mon, Nov 02, 2020 at 12:06:21PM +0100 Vincent Guittot wrote: > On Mon, 2 Nov 2020 at 11:50, Mel Gorman wrote: > > > > On Tue, Jul 14, 2020 at 08:59:41AM -0400, peter.pu...@linaro.org wrote: > > > From: Peter Puhov > > > > > > v0: https://lkml.org/lkml/2020/6/16/1286 > > > > > > Changes

Re: [PATCH] sched/fair: remove the spin_lock operations

2020-11-02 Thread Phil Auld
On Fri, Oct 30, 2020 at 10:16:29PM +0000 David Laight wrote: > From: Benjamin Segall > > Sent: 30 October 2020 18:48 > > > > Hui Su writes: > > > > > Since 'ab93a4bc955b ("sched/fair: Remove > > > distribute_running from CFS bandwidth")', there is > > > nothing to protect between

Re: [PATCH] sched/fair: remove the spin_lock operations

2020-10-30 Thread Phil Auld
@@ -5105,9 +5105,6 @@ static void do_sched_cfs_slack_timer(struct > cfs_bandwidth *cfs_b) > return; > > distribute_cfs_runtime(cfs_b); > - > - raw_spin_lock_irqsave(&cfs_b->lock, flags); > - raw_spin_unlock_irqrestore(&cfs_b->lock, flags); > } > > /* > -- > 2.29.0 > > Nice :) Reviewed-by: Phil Auld --

Re: [PATCH 0/8] Style and small fixes for core-scheduling

2020-10-28 Thread Phil Auld
Hi John, On Wed, Oct 28, 2020 at 05:19:09AM -0700 John B. Wyatt IV wrote: > Patchset of style and small fixes for the 8th iteration of the > Core-Scheduling feature. > > Style fixes include changing spaces to tabs, inserting new lines before > declarations, removing unused braces, and spelling.

Re: default cpufreq gov, was: [PATCH] sched/fair: check for idle core

2020-10-22 Thread Phil Auld
On Thu, Oct 22, 2020 at 09:32:55PM +0100 Mel Gorman wrote: > On Thu, Oct 22, 2020 at 07:59:43PM +0200, Rafael J. Wysocki wrote: > > > > Agreed. I'd like the option to switch back if we make the default > > > > change. > > > > It's on the table and I'd like to be able to go that way. > > > > > > >

Re: default cpufreq gov, was: [PATCH] sched/fair: check for idle core

2020-10-22 Thread Phil Auld
On Thu, Oct 22, 2020 at 03:58:13PM +0100 Colin Ian King wrote: > On 22/10/2020 15:52, Mel Gorman wrote: > > On Thu, Oct 22, 2020 at 02:29:49PM +0200, Peter Zijlstra wrote: > >> On Thu, Oct 22, 2020 at 02:19:29PM +0200, Rafael J. Wysocki wrote: > However I do want to retire ondemand,

Re: [PATCH] sched/fair: Remove the force parameter of update_tg_load_avg()

2020-09-25 Thread Phil Auld
update_tg_load_avg(cfs_rq); > propagate_entity_cfs_rq(se); > } > > @@ -10805,7 +10804,7 @@ static void attach_entity_cfs_rq(struct sched_entity > *se) > /* Synchronize entity with its cfs_rq */ > update_load_avg(cfs_rq, se, sched_feat(ATTACH_AGE_LOAD) ? 0 : > SKIP_AGE_LOAD); > attach_entity_load_avg(cfs_rq, se); > - update_tg_load_avg(cfs_rq, false); > + update_tg_load_avg(cfs_rq); > propagate_entity_cfs_rq(se); > } > > -- > 2.17.1 > LGTM, Reviewed-by: Phil Auld --

Re: [RFC PATCH v2] sched/fair: select idle cpu from idle cpumask in sched domain

2020-09-24 Thread Phil Auld
On Thu, Sep 24, 2020 at 10:43:12AM -0700 Tim Chen wrote: > > > On 9/24/20 10:13 AM, Phil Auld wrote: > > On Thu, Sep 24, 2020 at 09:37:33AM -0700 Tim Chen wrote: > >> > >> > >> On 9/22/20 12:14 AM, Vincent Guittot wrote: > >> > >>>

Re: [RFC PATCH v2] sched/fair: select idle cpu from idle cpumask in sched domain

2020-09-24 Thread Phil Auld
On Thu, Sep 24, 2020 at 09:37:33AM -0700 Tim Chen wrote: > > > On 9/22/20 12:14 AM, Vincent Guittot wrote: > > >> > > And a quick test with hackbench on my octo cores arm64 gives for 12 > > Vincent, > > Is it octo (=10) or octa (=8) cores on a single socket for your system? In what

Re: [RFC -V2] autonuma: Migrate on fault among multiple bound nodes

2020-09-22 Thread Phil Auld
Hi, On Tue, Sep 22, 2020 at 02:54:01PM +0800 Huang Ying wrote: > Now, AutoNUMA can only optimize the page placement among the NUMA nodes if the > default memory policy is used. Because the memory policy specified explicitly > should take precedence. But this seems too strict in some situations.
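
The interface this RFC evolved into is a set_mempolicy() mode flag; a hedged usage sketch, assuming the MPOL_F_NUMA_BALANCING name that was eventually merged (link with -lnuma):

    #include <numaif.h>
    #include <stdio.h>

    int main(void)
    {
            /* bind to nodes 0-1, but allow NUMA balancing to migrate
             * pages among the bound nodes on fault */
            unsigned long nodemask = (1UL << 0) | (1UL << 1);

            if (set_mempolicy(MPOL_BIND | MPOL_F_NUMA_BALANCING,
                              &nodemask, 8 * sizeof(nodemask)) < 0)
                    perror("set_mempolicy");
            return 0;
    }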

Re: [PATCH 0/4] sched/fair: Improve fairness between cfs tasks

2020-09-18 Thread Phil Auld
On Fri, Sep 18, 2020 at 12:39:28PM -0400 Phil Auld wrote: > Hi Peter, > > On Mon, Sep 14, 2020 at 01:42:02PM +0200 pet...@infradead.org wrote: > > On Mon, Sep 14, 2020 at 12:03:36PM +0200, Vincent Guittot wrote: > > > Vincent Guittot (4): > > > sched/fair: relax

Re: [PATCH 0/4] sched/fair: Improve fairness between cfs tasks

2020-09-18 Thread Phil Auld
Hi Peter, On Mon, Sep 14, 2020 at 01:42:02PM +0200 pet...@infradead.org wrote: > On Mon, Sep 14, 2020 at 12:03:36PM +0200, Vincent Guittot wrote: > > Vincent Guittot (4): > > sched/fair: relax constraint on task's load during load balance > > sched/fair: reduce minimal imbalance threshold > >

Re: [PATCH 0/4] sched/fair: Improve fairness between cfs tasks

2020-09-14 Thread Phil Auld
On Mon, Sep 14, 2020 at 01:42:02PM +0200 pet...@infradead.org wrote: > On Mon, Sep 14, 2020 at 12:03:36PM +0200, Vincent Guittot wrote: > > Vincent Guittot (4): > > sched/fair: relax constraint on task's load during load balance > > sched/fair: reduce minimal imbalance threshold > >

Re: [PATCH v2] sched/debug: Add new tracepoint to track cpu_capacity

2020-09-08 Thread Phil Auld
Hi Qais, On Mon, Sep 07, 2020 at 12:02:24PM +0100 Qais Yousef wrote: > On 09/02/20 09:54, Phil Auld wrote: > > > > > > I think this decoupling is not necessary. The natural place for those > > > scheduler trace_event based on trace_points extension files i

Re: Requirements to control kernel isolation/nohz_full at runtime

2020-09-03 Thread Phil Auld
On Thu, Sep 03, 2020 at 03:30:15PM -0300 Marcelo Tosatti wrote: > On Thu, Sep 03, 2020 at 03:23:59PM -0300, Marcelo Tosatti wrote: > > On Tue, Sep 01, 2020 at 12:46:41PM +0200, Frederic Weisbecker wrote: > > > Hi, > > > > Hi Frederic, > > > > Thanks for the summary! Looking forward to your

Re: [PATCH v2] sched/debug: Add new tracepoint to track cpu_capacity

2020-09-02 Thread Phil Auld
On Wed, Sep 02, 2020 at 12:44:42PM +0200 Dietmar Eggemann wrote: > + Phil Auld > Thanks Dietmar. > On 28/08/2020 19:26, Qais Yousef wrote: > > On 08/28/20 19:10, Dietmar Eggemann wrote: > >> On 28/08/2020 12:27, Qais Yousef wrote: > >>> On 08/28/20 10

[tip: sched/urgent] sched: Fix use of count for nr_running tracepoint

2020-08-06 Thread tip-bot2 for Phil Auld
The following commit has been merged into the sched/urgent branch of tip: Commit-ID: a1bd06853ee478d37fae9435c5521e301de94c67 Gitweb: https://git.kernel.org/tip/a1bd06853ee478d37fae9435c5521e301de94c67 Author: Phil Auld AuthorDate: Wed, 05 Aug 2020 16:31:38 -04:00 Committer

[PATCH] sched: Fix use of count for nr_running tracepoint

2020-08-05 Thread Phil Auld
The count field is meant to tell if an update to nr_running is an add or a subtract. Make it do so by adding the missing minus sign. Fixes: 9d246053a691 ("sched: Add a tracepoint to track rq->nr_running") Signed-off-by: Phil Auld --- kernel/sched/sched.h | 2 +- 1 file changed
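
The fix is one character in the subtract path; the helper ends up shaped roughly like this:

    static inline void sub_nr_running(struct rq *rq, unsigned count)
    {
            rq->nr_running -= count;
            if (trace_sched_update_nr_running_tp_enabled())
                    /* was: call_trace_sched_update_nr_running(rq, count); */
                    call_trace_sched_update_nr_running(rq, -count);

            /* Check if we still need preemption */
            sched_update_tick_dependency(rq);
    }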

[tip: sched/core] sched: Add a tracepoint to track rq->nr_running

2020-07-09 Thread tip-bot2 for Phil Auld
The following commit has been merged into the sched/core branch of tip: Commit-ID: 9d246053a69196c7c27068870e9b4b66ac536f68 Gitweb: https://git.kernel.org/tip/9d246053a69196c7c27068870e9b4b66ac536f68 Author: Phil Auld AuthorDate: Mon, 29 Jun 2020 15:23:03 -04:00 Committer

Re: [RFC][PATCH] sched: Better document ttwu()

2020-07-02 Thread Phil Auld
Hi Peter, On Thu, Jul 02, 2020 at 02:52:11PM +0200 Peter Zijlstra wrote: > > Dave hit the problem fixed by commit: > > b6e13e85829f ("sched/core: Fix ttwu() race") > > and failed to understand much of the code involved. Per his request a > few comments to (hopefully) clarify things. > >

Re: [RFC PATCH 00/13] Core scheduling v5

2020-06-30 Thread Phil Auld
On Fri, Jun 26, 2020 at 11:10:28AM -0400 Joel Fernandes wrote: > On Fri, Jun 26, 2020 at 10:36:01AM -0400, Vineeth Remanan Pillai wrote: > > On Thu, Jun 25, 2020 at 9:47 PM Joel Fernandes > > wrote: > > > > > > On Thu, Jun 25, 2020 at 4:12 PM Vineeth Remanan Pillai > > > wrote: > > > [...] > >

Re: [PATCH v2] Sched: Add a tracepoint to track rq->nr_running

2020-06-29 Thread Phil Auld
nts are added to add_nr_running() and sub_nr_running() which are in kernel/sched/sched.h. In order to avoid CREATE_TRACE_POINTS in the header a wrapper call is used and the trace/events/sched.h include is moved before sched.h in kernel/sched/core. Signed-off-by: Phil Auld CC: Qais Yousef CC: Ingo Mol

Re: [PATCH] Sched: Add a tracepoint to track rq->nr_running

2020-06-23 Thread Phil Auld
Hi Qais, On Mon, Jun 22, 2020 at 01:17:47PM +0100 Qais Yousef wrote: > On 06/19/20 10:11, Phil Auld wrote: > > Add a bare tracepoint trace_sched_update_nr_running_tp which tracks > > ->nr_running CPU's rq. This is used to accurately trace this data and > > provide a vi

Re: [PATCH] Sched: Add a tracepoint to track rq->nr_running

2020-06-19 Thread Phil Auld
On Fri, Jun 19, 2020 at 12:46:41PM -0400 Steven Rostedt wrote: > On Fri, 19 Jun 2020 10:11:20 -0400 > Phil Auld wrote: > > > > > diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h > > index ed168b0e2c53..a6d9fe5a68cf 100644 > > --- a/inclu

[PATCH] Sched: Add a tracepoint to track rq->nr_running

2020-06-19 Thread Phil Auld
nts are added to add_nr_running() and sub_nr_running() which are in kernel/sched/sched.h. Since sched.h includes trace/events/tlb.h via mmu_context.h we had to limit when CREATE_TRACE_POINTS is defined. Signed-off-by: Phil Auld CC: Qais Yousef CC: Ingo Molnar CC: Peter Zijlstra CC: Vincent Guittot
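
For reference, a "bare" tracepoint of this kind exports no trace event of its own, just a hook that modules and BPF programs can attach to; the declaration and call sites look like this (matching the shape of the merged code):

    /* include/trace/events/sched.h */
    DECLARE_TRACE(sched_update_nr_running_tp,
            TP_PROTO(struct rq *rq, int change),
            TP_ARGS(rq, change));

    /* called from add_nr_running()/sub_nr_running() in kernel/sched/sched.h */
    trace_sched_update_nr_running_tp(rq, count);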

Re: [tip: sched/core] sched/fair: Remove distribute_running from CFS bandwidth

2020-06-08 Thread Phil Auld
On Tue, Jun 09, 2020 at 07:05:38AM +0800 Tao Zhou wrote: > Hi Phil, > > On Mon, Jun 08, 2020 at 10:53:04AM -0400, Phil Auld wrote: > > On Sun, Jun 07, 2020 at 09:25:58AM +0800 Tao Zhou wrote: > > > Hi, > > > > > > On Fri, May 01, 2020 at 06:

Re: [tip: sched/core] sched/fair: Remove distribute_running from CFS bandwidth

2020-06-08 Thread Phil Auld
> > don't start a distribution while one is already running. However, even > > in the event that this race occurs, it is fine to have two distributions > > running (especially now that distribute grabs the cfs_b->lock to > > determine remaining quota before assigning). > > &

Re: [PATCH RFC] sched: Add a per-thread core scheduling interface

2020-05-28 Thread Phil Auld
On Thu, May 28, 2020 at 02:17:19PM -0400 Phil Auld wrote: > On Thu, May 28, 2020 at 07:01:28PM +0200 Peter Zijlstra wrote: > > On Sun, May 24, 2020 at 10:00:46AM -0400, Phil Auld wrote: > > > On Fri, May 22, 2020 at 05:35:24PM -0400 Joel Fernandes wrote: > > > > On F

Re: [PATCH RFC] sched: Add a per-thread core scheduling interface

2020-05-28 Thread Phil Auld
On Thu, May 28, 2020 at 07:01:28PM +0200 Peter Zijlstra wrote: > On Sun, May 24, 2020 at 10:00:46AM -0400, Phil Auld wrote: > > On Fri, May 22, 2020 at 05:35:24PM -0400 Joel Fernandes wrote: > > > On Fri, May 22, 2020 at 02:59:05PM +0200, Peter Zijlstra wrote: > > >

Re: [PATCH RFC] sched: Add a per-thread core scheduling interface

2020-05-24 Thread Phil Auld
On Fri, May 22, 2020 at 05:35:24PM -0400 Joel Fernandes wrote: > On Fri, May 22, 2020 at 02:59:05PM +0200, Peter Zijlstra wrote: > [..] > > > > It doens't allow tasks for form their own groups (by for example setting > > > > the key to that of another task). > > > > > > So for this, I was

[tip: sched/urgent] sched/fair: Fix enqueue_task_fair() warning some more

2020-05-19 Thread tip-bot2 for Phil Auld
The following commit has been merged into the sched/urgent branch of tip: Commit-ID: b34cb07dde7c2346dec73d053ce926aeaa087303 Gitweb: https://git.kernel.org/tip/b34cb07dde7c2346dec73d053ce926aeaa087303 Author: Phil Auld AuthorDate: Tue, 12 May 2020 09:52:22 -04:00 Committer

Re: [PATCH v2] sched/fair: enqueue_task_fair optimization

2020-05-13 Thread Phil Auld
On Wed, May 13, 2020 at 03:25:29PM +0200 Vincent Guittot wrote: > On Wed, 13 May 2020 at 15:18, Phil Auld wrote: > > > > On Wed, May 13, 2020 at 03:15:53PM +0200 Vincent Guittot wrote: > > > On Wed, 13 May 2020 at 15:13, Phil Auld wrote: > > > > > > &g

Re: [PATCH v2] sched/fair: enqueue_task_fair optimization

2020-05-13 Thread Phil Auld
On Wed, May 13, 2020 at 03:15:53PM +0200 Vincent Guittot wrote: > On Wed, 13 May 2020 at 15:13, Phil Auld wrote: > > > > On Wed, May 13, 2020 at 03:10:28PM +0200 Vincent Guittot wrote: > > > On Wed, 13 May 2020 at 14:45, Phil Auld wrote: > > > > > > >

Re: [PATCH v2] sched/fair: enqueue_task_fair optimization

2020-05-13 Thread Phil Auld
On Wed, May 13, 2020 at 03:10:28PM +0200 Vincent Guittot wrote: > On Wed, 13 May 2020 at 14:45, Phil Auld wrote: > > > > Hi Vincent, > > > > On Wed, May 13, 2020 at 02:33:35PM +0200 Vincent Guittot wrote: > > > enqueue_task_fair jumps to enqu

Re: [PATCH v2] sched/fair: fix unthrottle_cfs_rq for leaf_cfs_rq list

2020-05-13 Thread Phil Auld
the same pattern as > enqueue_task_fair(). This fixes a problem already faced with the latter and > add an optimization in the last for_each_sched_entity loop. > > Reported-by Tao Zhou > Reviewed-by: Phil Auld > Signed-off-by: Vincent Guittot > --- > > v2 changes: > - R

Re: [PATCH v2] sched/fair: enqueue_task_fair optimization

2020-05-13 Thread Phil Auld
sn't jump to the label then se must be NULL for the loop to terminate. The final loop is a NOP if se is NULL. The check wasn't protecting that. Otherwise still > Reviewed-by: Phil Auld Cheers, Phil > Signed-off-by: Vincent Guittot > --- > > v2 changes: > - Remove useless if s

Re: [PATCH] sched/fair: fix unthrottle_cfs_rq for leaf_cfs_rq list

2020-05-12 Thread Phil Auld
with this one as well. As expected, since the first patch fixed the issue I was seeing and I wasn't hitting the assert here anyway, I didn't hit the assert. But I also didn't hit any other issues, new or old. It makes sense to use the same logic flow here as enqueue_task_fair. Reviewed-by: Phil Auld Cheers, Phil --

Re: [PATCH] sched/fair: enqueue_task_fair optimization

2020-05-12 Thread Phil Auld
ask_struct *p, > int flags) > > } > > +enqueue_throttle: > if (cfs_bandwidth_used()) { > /* >* When bandwidth control is enabled; the cfs_rq_throttled() > -- > 2.17.1 > Reviewed-by: Phil Auld --

Re: [PATCH v3] sched/fair: Fix enqueue_task_fair warning some more

2020-05-12 Thread Phil Auld
On Tue, May 12, 2020 at 04:10:48PM +0200 Peter Zijlstra wrote: > On Tue, May 12, 2020 at 09:52:22AM -0400, Phil Auld wrote: > > sched/fair: Fix enqueue_task_fair warning some more > > > > The recent patch, fe61468b2cb (sched/fair: Fix enqueue_task_fair warning) >

Re: [PATCH v3] sched/fair: Fix enqueue_task_fair warning some more

2020-05-12 Thread Phil Auld
fixes and review tags. Suggested-by: Vincent Guittot Signed-off-by: Phil Auld Cc: Peter Zijlstra (Intel) Cc: Vincent Guittot Cc: Ingo Molnar Cc: Juri Lelli Reviewed-by: Vincent Guittot Reviewed-by: Dietmar Eggemann Fixes: fe61468b2cb (sched/fair: Fix enqueue_task_fair warning) --- kernel

Re: [PATCH v2] sched/fair: Fix enqueue_task_fair warning some more

2020-05-12 Thread Phil Auld
Hi Dietmar, On Tue, May 12, 2020 at 11:00:16AM +0200 Dietmar Eggemann wrote: > On 11/05/2020 22:44, Phil Auld wrote: > > On Mon, May 11, 2020 at 09:25:43PM +0200 Vincent Guittot wrote: > >> On Thu, 7 May 2020 at 22:36, Phil Auld wrote: > >>> > >>> sche

Re: [PATCH v2] sched/fair: Fix enqueue_task_fair warning some more

2020-05-11 Thread Phil Auld
On Mon, May 11, 2020 at 09:25:43PM +0200 Vincent Guittot wrote: > On Thu, 7 May 2020 at 22:36, Phil Auld wrote: > > > > sched/fair: Fix enqueue_task_fair warning some more > > > > The recent patch, fe61468b2cb (sched/fair: Fix enqueue_task_fair warning) > >

Re: [PATCH v2] sched/fair: Fix enqueue_task_fair warning some more

2020-05-07 Thread Phil Auld
ddress this by calling leaf_add_rq_list if there are throttled parents while doing the second for_each_sched_entity loop. Suggested-by: Vincent Guittot Signed-off-by: Phil Auld Cc: Peter Zijlstra (Intel) Cc: Vincent Guittot Cc: Ingo Molnar Cc: Juri Lelli --- kernel/sched/fair.c | 7 +++

Re: [PATCH] sched/fair: Fix enqueue_task_fair warning some more

2020-05-07 Thread Phil Auld
Hi Vincent, On Thu, May 07, 2020 at 05:06:29PM +0200 Vincent Guittot wrote: > Hi Phil, > > On Wed, 6 May 2020 at 20:05, Phil Auld wrote: > > > > Hi Vincent, > > > > Thanks for taking a look. More below... > > > > On Wed, May 06, 2020 at 06:36:45

Re: [PATCH 00/13] Reconcile NUMA balancing decisions with the load balancer v6

2020-05-07 Thread Phil Auld
On Thu, May 07, 2020 at 06:29:44PM +0200 Jirka Hladky wrote: > Hi Mel, > > we are not targeting just OMP applications. We see the performance > degradation also for other workloads, like SPECjbb2005 and > SPECjvm2008. Even worse, it also affects a higher number of threads. > For example,

Re: [PATCH] sched/fair: Fix enqueue_task_fair warning some more

2020-05-06 Thread Phil Auld
Hi Vincent, Thanks for taking a look. More below... On Wed, May 06, 2020 at 06:36:45PM +0200 Vincent Guittot wrote: > Hi Phil, > > - reply to all this time > > On Wed, 6 May 2020 at 16:18, Phil Auld wrote: > > > > sched/fair: Fix enqueue_task_fair warning some mo

[PATCH] sched/fair: Fix enqueue_task_fair warning some more

2020-05-06 Thread Phil Auld
ddress this issue by saving the se pointer when the first loop exits and resetting it before doing the fix up, if needed. Signed-off-by: Phil Auld Cc: Peter Zijlstra (Intel) Cc: Vincent Guittot Cc: Ingo Molnar Cc: Juri Lelli --- kernel/sched/fair.c | 4 1 file changed, 4 insertions(+)
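
A minimal sketch of that idea, illustrative rather than the exact diff: remember where the first loop bailed out so the later list fix-up can resume from the right entity.

    struct sched_entity *saved_se = NULL;

    for_each_sched_entity(se) {
            /* ... enqueue the entity, then: */
            if (cfs_rq_throttled(cfs_rq)) {
                    saved_se = se;          /* remember the bail-out point */
                    goto enqueue_throttle;
            }
    }
    /* ... later, before the leaf_cfs_rq list fix-up, if needed: */
    if (saved_se)
            se = saved_se;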

Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

2019-10-21 Thread Phil Auld
On Mon, Oct 21, 2019 at 10:44:20AM +0200 Vincent Guittot wrote: > On Mon, 21 Oct 2019 at 09:50, Ingo Molnar wrote: > > > > > > * Vincent Guittot wrote: > > > > > Several wrong task placement have been raised with the current load > > > balance algorithm but their fixes are not always straight

Re: [PATCH v3 0/8] sched/fair: rework the CFS load balance

2019-10-09 Thread Phil Auld
On Tue, Oct 08, 2019 at 05:53:11PM +0200 Vincent Guittot wrote: > Hi Phil, > ... > While preparing v4, I have noticed that I have probably oversimplified > the end of find_idlest_group() in patch "sched/fair: optimize > find_idlest_group" when it compares local vs the idlest other group. >

Re: [PATCH v3 0/8] sched/fair: rework the CFS load balance

2019-10-08 Thread Phil Auld
Hi Vincent, On Thu, Sep 19, 2019 at 09:33:31AM +0200 Vincent Guittot wrote: > Several wrong task placement have been raised with the current load > balance algorithm but their fixes are not always straight forward and > end up with using biased values to force migrations. A cleanup and rework >

Re: [PATCH] sched/fair: scale quota and period without losing quota/period ratio precision

2019-10-07 Thread Phil Auld
20, cfs_quota_us = 3200) [ 1393.965140] cfs_period_timer[cpu11]: period too short, but cannot scale up without losing precision (cfs_period_us = 20, cfs_quota_us = 3200) I suspect going higher could cause the original lockup, but that'd be the case with the old code as well. And this als

Re: [PATCH] sched/fair: scale quota and period without losing quota/period ratio precision

2019-10-07 Thread Phil Auld
Hi Xuewei, On Fri, Oct 04, 2019 at 05:28:15PM -0700 Xuewei Zhang wrote: > On Fri, Oct 4, 2019 at 6:14 AM Phil Auld wrote: > > > > On Thu, Oct 03, 2019 at 07:05:56PM -0700 Xuewei Zhang wrote: > > > +cc neeln...@google.com and hao...@google.com, they helped a lot >

Re: [PATCH] sched/fair: scale quota and period without losing quota/period ratio precision

2019-10-04 Thread Phil Auld
On Thu, Oct 03, 2019 at 07:05:56PM -0700 Xuewei Zhang wrote: > +cc neeln...@google.com and hao...@google.com, they helped a lot > for this issue. Sorry I forgot to include them when sending out the patch. > > On Thu, Oct 3, 2019 at 5:55 PM Phil Auld wrote: > > > > Hi

Re: [PATCH] sched/fair: scale quota and period without losing quota/period ratio precision

2019-10-03 Thread Phil Auld
Hi, On Thu, Oct 03, 2019 at 05:12:43PM -0700 Xuewei Zhang wrote: > quota/period ratio is used to ensure a child task group won't get more > bandwidth than the parent task group, and is calculated as: > normalized_cfs_quota() = [(quota_us << 20) / period_us] > > If the quota/period ratio was
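
The precision point is easiest to see in the fix that was merged: the old path grew the period by ~15% and derived the new quota with a separate integer division, letting the quota/period ratio drift, whereas doubling both values keeps the ratio exact. Roughly:

    /* sched_cfs_period_timer(): the period is too short, scale it up */
    new = old * 2;
    if (new < max_cfs_quota_period) {
            cfs_b->period = ns_to_ktime(new);
            cfs_b->quota *= 2;      /* same factor, so the ratio is unchanged */
    } else {
            /* log "cannot scale up without losing precision" and leave it */
    }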

Re: [PATCH v2 0/8] sched/fair: rework the CFS load balance

2019-08-29 Thread Phil Auld
oup due to using the average load. The second was in fix_small_imbalance(). The "load" of the lu.C tasks was so low it often failed to move anything even when it did find a group that was overloaded (nr_running > width). I have two small patches which fix this but since Vincent was > embarking on a re-work which also addressed this I dropped them. We've also run a series of performance tests we use to check for regressions and did not find any bad results on our workloads and systems. So... Tested-by: Phil Auld Cheers, Phil --

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-29 Thread Phil Auld
On Wed, Aug 28, 2019 at 06:01:14PM +0200 Peter Zijlstra wrote: > On Wed, Aug 28, 2019 at 11:30:34AM -0400, Phil Auld wrote: > > On Tue, Aug 27, 2019 at 11:50:35PM +0200 Peter Zijlstra wrote: > > > > And given MDS, I'm still not entirely convinced it all makes sense. If &g

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-28 Thread Phil Auld
On Tue, Aug 27, 2019 at 11:50:35PM +0200 Peter Zijlstra wrote: > On Tue, Aug 27, 2019 at 10:14:17PM +0100, Matthew Garrett wrote: > > Apple have provided a sysctl that allows applications to indicate that > > specific threads should make use of core isolation while allowing > > the rest of the

Re: [PATCH -next v2] sched/fair: fix -Wunused-but-set-variable warnings

2019-08-23 Thread Phil Auld
On Fri, Aug 23, 2019 at 10:28:02AM -0700 bseg...@google.com wrote: > Dave Chiluk writes: > > > On Wed, Aug 21, 2019 at 12:36 PM wrote: > >> > >> Qian Cai writes: > >> > >> > The linux-next commit "sched/fair: Fix low cpu usage with high > >> > throttling by removing expiration of cpu-local

[PATCH] sched/rt: silence double clock update warning by using rq_lock wrappers

2019-08-15 Thread Phil Auld
er does: raw_spin_lock(&rq->lock); update_rq_clock(rq); which triggers the warning because of not using the rq_lock wrappers. So, use the wrappers. Signed-off-by: Phil Auld Cc: Peter Zijlstra (Intel) Cc: Ingo Molnar Cc: Valentin Schneider Cc: Dietmar Eggemann --- ke
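
The wrapper pattern the patch switches to (the in-tree rq_lock API, sketched):

    struct rq_flags rf;

    rq_lock(rq, &rf);       /* rq_pin_lock() resets clock_update_flags,
                             * so WARN_DOUBLE_CLOCK sees a fresh section */
    update_rq_clock(rq);
    /* ... modify the runqueue ... */
    rq_unlock(rq, &rf);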

Re: [PATCH] sched: use rq_lock/unlock in online_fair_sched_group

2019-08-15 Thread Phil Auld
On Fri, Aug 09, 2019 at 06:43:09PM +0100 Valentin Schneider wrote: > On 09/08/2019 14:33, Phil Auld wrote: > > On Tue, Aug 06, 2019 at 03:03:34PM +0200 Peter Zijlstra wrote: > >> On Thu, Aug 01, 2019 at 09:37:49AM -0400, Phil Auld wrote: > >>> Enabling WARN_DOU

Re: [tip:sched/core] sched/fair: Use rq_lock/unlock in online_fair_sched_group

2019-08-12 Thread Phil Auld
On Mon, Aug 12, 2019 at 05:52:04AM -0700 tip-bot for Phil Auld wrote: > Commit-ID: a46d14eca7b75fffe35603aa8b81df654353d80f > Gitweb: > https://git.kernel.org/tip/a46d14eca7b75fffe35603aa8b81df654353d80f > Author: Phil Auld > AuthorDate: Thu, 1 Aug 2019 09:37:49 -0

[tip:sched/core] sched/fair: Use rq_lock/unlock in online_fair_sched_group

2019-08-12 Thread tip-bot for Phil Auld
Commit-ID: a46d14eca7b75fffe35603aa8b81df654353d80f Gitweb: https://git.kernel.org/tip/a46d14eca7b75fffe35603aa8b81df654353d80f Author: Phil Auld AuthorDate: Thu, 1 Aug 2019 09:37:49 -0400 Committer: Thomas Gleixner CommitDate: Mon, 12 Aug 2019 14:45:34 +0200 sched/fair: Use rq_lock

Re: [tip:sched/core] sched/fair: Use rq_lock/unlock in online_fair_sched_group

2019-08-09 Thread Phil Auld
On Fri, Aug 09, 2019 at 06:21:22PM +0200 Dietmar Eggemann wrote: > On 8/8/19 1:01 PM, tip-bot for Phil Auld wrote: > > [...] > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c > > index 19c58599e967..d9407517dae9 100644 > > --- a/kernel/sched/fair.c

Re: [PATCH] sched: use rq_lock/unlock in online_fair_sched_group

2019-08-09 Thread Phil Auld
On Tue, Aug 06, 2019 at 03:03:34PM +0200 Peter Zijlstra wrote: > On Thu, Aug 01, 2019 at 09:37:49AM -0400, Phil Auld wrote: > > Enabling WARN_DOUBLE_CLOCK in /sys/kernel/debug/sched_features causes > > ISTR there were more issues; but it sure is good to start picking them > off

[tip:sched/core] sched/fair: Use rq_lock/unlock in online_fair_sched_group

2019-08-08 Thread tip-bot for Phil Auld
Commit-ID: 6b8fd01b21f5f2701b407a7118f236ba4c41226d Gitweb: https://git.kernel.org/tip/6b8fd01b21f5f2701b407a7118f236ba4c41226d Author: Phil Auld AuthorDate: Thu, 1 Aug 2019 09:37:49 -0400 Committer: Peter Zijlstra CommitDate: Thu, 8 Aug 2019 09:09:31 +0200 sched/fair: Use rq_lock

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-06 Thread Phil Auld
On Tue, Aug 06, 2019 at 10:41:25PM +0800 Aaron Lu wrote: > On 2019/8/6 22:17, Phil Auld wrote: > > On Tue, Aug 06, 2019 at 09:54:01PM +0800 Aaron Lu wrote: > >> On Mon, Aug 05, 2019 at 04:09:15PM -0400, Phil Auld wrote: > >>> Hi, > >>> > >&g

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-06 Thread Phil Auld
On Tue, Aug 06, 2019 at 09:54:01PM +0800 Aaron Lu wrote: > On Mon, Aug 05, 2019 at 04:09:15PM -0400, Phil Auld wrote: > > Hi, > > > > On Fri, Aug 02, 2019 at 11:37:15AM -0400 Julien Desfossez wrote: > > > We tested both Aaron's and Tim's patches and here are our re

Re: [PATCH] sched: use rq_lock/unlock in online_fair_sched_group

2019-08-06 Thread Phil Auld
On Tue, Aug 06, 2019 at 03:03:34PM +0200 Peter Zijlstra wrote: > On Thu, Aug 01, 2019 at 09:37:49AM -0400, Phil Auld wrote: > > Enabling WARN_DOUBLE_CLOCK in /sys/kernel/debug/sched_features causes > > ISTR there were more issues; but it sure is good to start picking them > o

Re: [PATCH] sched: use rq_lock/unlock in online_fair_sched_group

2019-08-06 Thread Phil Auld
On Tue, Aug 06, 2019 at 02:04:16PM +0800 Hillf Danton wrote: > > On Mon, 5 Aug 2019 22:07:05 +0800 Phil Auld wrote: > > > > If we're to clear that flag right there, outside of the lock pinning code, > > then I think we might as well just remove the flag and all ass

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-05 Thread Phil Auld
Hi, On Fri, Aug 02, 2019 at 11:37:15AM -0400 Julien Desfossez wrote: > We tested both Aaron's and Tim's patches and here are our results. > > Test setup: > - 2 1-thread sysbench, one running the cpu benchmark, the other one the > mem benchmark > - both started at the same time > - both are

Re: [PATCH] sched: use rq_lock/unlock in online_fair_sched_group

2019-08-05 Thread Phil Auld
On Fri, Aug 02, 2019 at 05:20:38PM +0800 Hillf Danton wrote: > > On Thu, 1 Aug 2019 09:37:49 -0400 Phil Auld wrote: > > > > Enabling WARN_DOUBLE_CLOCK in /sys/kernel/debug/sched_features causes > > warning to fire in update_rq_clock. This seems to be caused by onlining &g

[PATCH] sched: use rq_lock/unlock in online_fair_sched_group

2019-08-01 Thread Phil Auld
e raw locking removes this warning. Signed-off-by: Phil Auld Cc: Peter Zijlstra Cc: Ingo Molnar Cc: Vincent Guittot --- Resend with PATCH instead of CHANGE in subject, and more recent upstream x86 backtrace. kernel/sched/fair.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-

Re: [RFC][PATCH 02/13] stop_machine: Fix stop_cpus_in_progress ordering

2019-07-30 Thread Phil Auld
On Fri, Jul 26, 2019 at 04:54:11PM +0200 Peter Zijlstra wrote: > Make sure the entire for loop has stop_cpus_in_progress set. > > Cc: Valentin Schneider > Cc: Aaron Lu > Cc: keesc...@chromium.org > Cc: mi...@kernel.org > Cc: Pawan Gupta > Cc: Phil Auld > Cc: torva..

[CHANGE] sched: use rq_lock/unlock in online_fair_sched_group

2019-07-26 Thread Phil Auld
0/0x130 [ 612.546585] online_fair_sched_group+0x70/0x140 [ 612.551092] sched_online_group+0xd0/0xf0 [ 612.555082] sched_autogroup_create_attach+0xd0/0x198 [ 612.560108] sys_setsid+0x140/0x160 [ 612.563579] el0_svc_naked+0x44/0x48 Signed-off-by: Phil Auld Cc: Peter Zijlstra Cc: Ingo Molnar Cc:

Re: [RESEND PATCH v3] cpuset: restore sanity to cpuset_cpus_allowed_fallback()

2019-06-12 Thread Phil Auld
l just fine in cgroup v2. A user who wishes > for the previous affinity mask to be restored in this fallback case can use > that mechanism instead. > > This patch modifies scheduler behavior by instead resetting the mask to > task_cs(tsk)->cpus_allowed by default, and cpu_possible mask in l

Re: [PATCH v2] sched/fair: don't push cfs_bandwith slack timers forward

2019-06-11 Thread Phil Auld
On Tue, Jun 11, 2019 at 04:24:43PM +0200 Peter Zijlstra wrote: > On Tue, Jun 11, 2019 at 10:12:19AM -0400, Phil Auld wrote: > > > That looks reasonable to me. > > > > Out of curiosity, why not bool? Is sizeof bool architecture dependent? > > Yeah, sizeof(_Bo

Re: [PATCH v2] sched/fair: don't push cfs_bandwith slack timers forward

2019-06-11 Thread Phil Auld
On Tue, Jun 11, 2019 at 03:53:25PM +0200 Peter Zijlstra wrote: > On Thu, Jun 06, 2019 at 10:21:01AM -0700, bseg...@google.com wrote: > > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h > > index efa686eeff26..60219acda94b 100644 > > --- a/kernel/sched/sched.h > > +++

Re: [PATCH v2] sched/fair: don't push cfs_bandwith slack timers forward

2019-06-11 Thread Phil Auld
tribute_running; > + bool slack_started; > #endif > }; > > -- > 2.22.0.rc1.257.g3120a18244-goog > I think this looks good. I like not delaying that further even if it does not fix Dave's use case. It does make it glaring that I should have used false/true for setting distribute_running though :) Acked-by: Phil Auld --
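
The flag's use, roughly as merged: arm the slack timer once and refuse to re-arm (and thereby push forward) an already-pending one.

    static void start_cfs_slack_bandwidth(struct cfs_bandwidth *cfs_b)
    {
            u64 min_left = cfs_bandwidth_slack_period + min_bandwidth_expiration;

            /* if there's a quota refresh soon don't bother with slack */
            if (runtime_refresh_within(cfs_b, min_left))
                    return;

            /* don't push forward an existing deferred unthrottle */
            if (cfs_b->slack_started)
                    return;
            cfs_b->slack_started = true;

            hrtimer_start(&cfs_b->slack_timer,
                          ns_to_ktime(cfs_bandwidth_slack_period),
                          HRTIMER_MODE_REL);
    }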

Re: [PATCH v2 1/1] sched/fair: Fix low cpu usage with high throttling by removing expiration of cpu-local slices

2019-05-24 Thread Phil Auld
On Fri, May 24, 2019 at 10:14:36AM -0500 Dave Chiluk wrote: > On Fri, May 24, 2019 at 9:32 AM Phil Auld wrote: > > On Thu, May 23, 2019 at 02:01:58PM -0700 Peter Oskolkov wrote: > > > > If the machine runs at/close to capacity, won't the overallocation > > >

Re: [RFC PATCH v2 13/17] sched: Add core wide task selection and scheduling.

2019-05-20 Thread Phil Auld
On Sat, May 18, 2019 at 11:37:56PM +0800 Aubrey Li wrote: > On Wed, Apr 24, 2019 at 12:18 AM Vineeth Remanan Pillai > wrote: > > > > From: Peter Zijlstra (Intel) > > > > Instead of only selecting a local task, select a task for all SMT > > siblings for every reschedule on the core (irrespective

Re: [RFC PATCH v2 00/17] Core scheduling v2

2019-04-29 Thread Phil Auld
On Mon, Apr 29, 2019 at 09:25:35PM +0800 Li, Aubrey wrote: > On 2019/4/29 14:14, Ingo Molnar wrote: > > > > * Li, Aubrey wrote: > > > >>> I suspect it's pretty low, below 1% for all rows? > >> > >> Hope my this mail box works for this... > >> > >>

Re: [RFC PATCH v2 12/17] sched: A quick and dirty cgroup tagging interface

2019-04-26 Thread Phil Auld
On Fri, Apr 26, 2019 at 04:13:07PM +0200 Peter Zijlstra wrote: > On Thu, Apr 25, 2019 at 10:26:53AM -0400, Phil Auld wrote: > > diff --git a/kernel/sched/core.c b/kernel/sched/core.c > > index e8e5f26db052..b312ea1e28a4 100644 > > --- a/kernel/sched/core.c > >

Re: [RFC PATCH v2 00/17] Core scheduling v2

2019-04-26 Thread Phil Auld
On Thu, Apr 25, 2019 at 08:53:43PM +0200 Ingo Molnar wrote: > Interesting. This strongly suggests sub-optimal SMT-scheduling in the > non-saturated HT case, i.e. a scheduler balancing bug. > > As long as loads are clearly below the physical cores count (which they > are in the early phases of

Re: [RFC PATCH v2 11/17] sched: Basic tracking of matching tasks

2019-04-25 Thread Phil Auld
On Wed, Apr 24, 2019 at 08:43:36PM +0000 Vineeth Remanan Pillai wrote: > > A minor nitpick. I find keeping the vruntime base readjustment in > core_prio_less probably is more straight forward rather than pass a > core_cmp bool around. > > The reason I moved the vruntime base adjustment to

Re: [RFC PATCH v2 12/17] sched: A quick and dirty cgroup tagging interface

2019-04-25 Thread Phil Auld
On Tue, Apr 23, 2019 at 04:18:17PM +0000 Vineeth Remanan Pillai wrote: > From: Peter Zijlstra (Intel) > > Marks all tasks in a cgroup as matching for core-scheduling. > > Signed-off-by: Peter Zijlstra (Intel) > --- > kernel/sched/core.c | 62 >

Re: [RFC PATCH v2 00/17] Core scheduling v2

2019-04-23 Thread Phil Auld
Hi, On Tue, Apr 23, 2019 at 04:18:05PM +0000 Vineeth Remanan Pillai wrote: > Second iteration of the core-scheduling feature. Thanks for spinning V2 of this. > > This version fixes apparent bugs and performance issues in v1. This > doesn't fully address the issue of core sharing between

Re: [tip:sched/urgent] sched/fair: Limit sched_cfs_period_timer() loop to avoid hard lockup

2019-04-16 Thread Phil Auld
Hi Sasha, On Tue, Apr 16, 2019 at 08:32:09AM -0700 tip-bot for Phil Auld wrote: > Commit-ID: 2e8e19226398db8265a8e675fcc0118b9e80c9e8 > Gitweb: > https://git.kernel.org/tip/2e8e19226398db8265a8e675fcc0118b9e80c9e8 > Author: Phil Auld > AuthorDate: Tue, 19 Mar 2019

Re: [tip:sched/core] sched/fair: Limit sched_cfs_period_timer loop to avoid hard lockup

2019-04-16 Thread Phil Auld
On Tue, Apr 09, 2019 at 03:05:27PM +0200 Peter Zijlstra wrote: > On Tue, Apr 09, 2019 at 08:48:16AM -0400, Phil Auld wrote: > > Hi Ingo, Peter, > > > > On Wed, Apr 03, 2019 at 01:38:39AM -0700 tip-bot for Phil Auld wrote: > > > Commit-ID: 06ec5d30e8d57b820d44df6

[tip:sched/urgent] sched/fair: Limit sched_cfs_period_timer() loop to avoid hard lockup

2019-04-16 Thread tip-bot for Phil Auld
Commit-ID: 2e8e19226398db8265a8e675fcc0118b9e80c9e8 Gitweb: https://git.kernel.org/tip/2e8e19226398db8265a8e675fcc0118b9e80c9e8 Author: Phil Auld AuthorDate: Tue, 19 Mar 2019 09:00:05 -0400 Committer: Ingo Molnar CommitDate: Tue, 16 Apr 2019 16:50:05 +0200 sched/fair: Limit

Re: [PATCH v2] cpuset: restore sanity to cpuset_cpus_allowed_fallback()

2019-04-10 Thread Phil Auld
other avenue has been traveled. > + **/ > + > void cpuset_cpus_allowed_fallback(struct task_struct *tsk) > { > rcu_read_lock(); > - do_set_cpus_allowed(tsk, task_cs(tsk)->effective_cpus); > + do_set_cpus_allowed(tsk, is_in_v2_mode() ? > + task_cs(tsk)->cpus_allowed : cpu_possible_mask); > rcu_read_unlock(); > > /* > -- > 2.18.1 > Fwiw, Acked-by: Phil Auld --
