Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-04-13 Thread Tim Chen



On 4/13/21 3:45 AM, Song Bao Hua (Barry Song) wrote:
> 
> 
>
> Right now in the main cases of using wake_affine to achieve
> better performance, processes are actually bound within one
> numa which is also a LLC in kunpeng920. 
> 
> Probably LLC=NUMA is also true for X86 Jacobsville, Tim?

In general for x86, the LLC can be partitioned at the sub-NUMA cluster
level, i.e. divided between sub-NUMA clusters within a NUMA node.
That said, Jacobsville doesn't have sub-NUMA clusters,
so LLC=NUMA is true for that platform. 
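
One way to double-check that on a given box is to compare the LLC span
with the NUMA node span in sysfs (standard cacheinfo/node files; index3
is assumed to be the L3 here):

  # If the two CPU lists match, LLC == NUMA node on this system.
  cat /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list
  cat /sys/devices/system/node/node0/cpulist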

Tim


RE: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-04-13 Thread Song Bao Hua (Barry Song)


> -Original Message-
> From: Dietmar Eggemann [mailto:dietmar.eggem...@arm.com]
> Sent: Wednesday, January 13, 2021 12:00 AM
> To: Morten Rasmussen ; Tim Chen
> 
> Cc: Song Bao Hua (Barry Song) ;
> valentin.schnei...@arm.com; catalin.mari...@arm.com; w...@kernel.org;
> r...@rjwysocki.net; vincent.guit...@linaro.org; l...@kernel.org;
> gre...@linuxfoundation.org; Jonathan Cameron ;
> mi...@redhat.com; pet...@infradead.org; juri.le...@redhat.com;
> rost...@goodmis.org; bseg...@google.com; mgor...@suse.de;
> mark.rutl...@arm.com; sudeep.ho...@arm.com; aubrey...@linux.intel.com;
> linux-arm-ker...@lists.infradead.org; linux-kernel@vger.kernel.org;
> linux-a...@vger.kernel.org; linux...@openeuler.org; xuwei (O)
> ; Zengtao (B) ; tiantao (H)
> 
> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
> add cluster scheduler
> 
> On 11/01/2021 10:28, Morten Rasmussen wrote:
> > On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
> >>
> >>
> >> On 1/8/21 7:12 AM, Morten Rasmussen wrote:
> >>> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
> >>>> On 1/6/21 12:30 AM, Barry Song wrote:
> 
> [...]
> 
> >> I think it is going to depend on the workload.  If there are dependent
> >> tasks that communicate with one another, putting them together
> >> in the same cluster will be the right thing to do to reduce communication
> >> costs.  On the other hand, if the tasks are independent, putting them
> >> together on the same cluster
> >> will increase resource contention and spreading them out will be better.
> >
> > Agree. That is exactly where I'm coming from. This is all about the task
> > placement policy. We generally tend to spread tasks to avoid resource
> > contention, SMT and caches, which seems to be what you are proposing to
> > extend. I think that makes sense given it can produce significant
> > benefits.
> >
> >>
> >> Any thoughts on what is the right clustering "tag" to use to clump
> >> related tasks together?
> >> Cgroup? Pid? Tasks with same mm?
> >
> > I think this is the real question. I think the closest thing we have at
> > the moment is the wakee/waker flip heuristic. This seems to be related.
> > Perhaps the wake_affine tricks can serve as starting point?
> 
> wake_wide() switches between packing (select_idle_sibling(), llc_size
> CPUs) and spreading (find_idlest_cpu(), all CPUs).
> 
> AFAICS, since none of the sched domains set SD_BALANCE_WAKE, currently
> all wakeups are (llc-)packed.
> 
>  select_task_rq_fair()
> 
>for_each_domain(cpu, tmp)
> 
>  if (tmp->flags & sd_flag)
>sd = tmp;
> 
> 
> In case we would like to further distinguish between llc-packing and
> even narrower (cluster or MC-L2)-packing, we would introduce a 2. level
> packing vs. spreading heuristic further down in sis().
> 
> IMHO, Barry's current implementation doesn't do this right now. Instead
> he's trying to pack on cluster first and if not successful look further
> among the remaining llc CPUs for an idle CPU.

Right now, in the main cases where wake_affine is used to achieve
better performance, processes are actually bound within one NUMA
node, which is also an LLC on Kunpeng 920.

Probably LLC=NUMA is also true for X86 Jacobsville, Tim?

So one possible way to approximate 2-level packing might be:
if the affinity cpusets of waker and wakee are both subsets
of one same LLC, use the cluster rather than the LLC as the
factor that determines packing.

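A minimal sketch of that subset check itself (as opposed to the llc_id
override in the diff below); cpumask_subset() and ->cpus_ptr are existing
kernel interfaces, while the llc mask argument stands in for whatever
per-arch accessor would really be used:

/*
 * Sketch only -- intended for kernel/sched/fair.c context, not part of
 * any posted patch: cluster-level packing would only be considered when
 * both waker (current) and wakee (p) are affined within one LLC.
 */
static bool waker_wakee_within_llc(struct task_struct *p,
				   const struct cpumask *llc_mask)
{
	return cpumask_subset(current->cpus_ptr, llc_mask) &&
	       cpumask_subset(p->cpus_ptr, llc_mask);
}
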
I haven't really done this, but the diff below produces the same
effect by forcing llc_id = cluster_id:

diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index d72eb8d..3d78097 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -107,7 +107,7 @@ int __init parse_acpi_topology(void)
cpu_topology[cpu].cluster_id = topology_id;
topology_id = find_acpi_cpu_topology_package(cpu);
cpu_topology[cpu].package_id = topology_id;
-
+#if 0
i = acpi_find_last_cache_level(cpu);

if (i > 0) {
@@ -119,8 +119,11 @@ int __init parse_acpi_topology(void)
if (cache_id > 0)
cpu_topology[cpu].llc_id = cache_id;
}
-   }
+#else
+   cpu_topology[cpu].llc_id = cpu_topology[cpu].cluster_id;
+#endif

+   }
return 0;
 }
 #endif

With this, I have seen a major improvement in hackbench, especially
for the one-to-one communication model (fds_num=1, one sender for one
receiver):
numactl -N 0 hackbench -p -T -l 20 -f 1 -g $1

I have tested -g(group_nums) 6, 
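
For reference, a sweep along these lines drives the command above over
several group counts (the group values here are illustrative, not the
exact list used):

  for g in 1 2 3 4 6; do
  	numactl -N 0 hackbench -p -T -l 20 -f 1 -g $g
  done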

Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-02-16 Thread Tim Chen



On 2/3/21 3:32 AM, Song Bao Hua (Barry Song) wrote:

>>
>> Attached below are two RFC patches for creating x86 L2
>> cache sched domain, sans the idle cpu selection on wake up code.  It is
>> similar enough in concept to Barry's patch that we should have a
>> single patchset that accommodates both use cases.
> 
> Hi Tim, Agreed on this.
> hopefully the RFC v4 I am preparing will cover your case.
> 

Barry, 

I've taken a crack at it.  Attached is a patch on top of your
v3 patches to implement L2 cluster sched domain for x86.

Thanks.

Tim

>8--

From 9189e489b019e110ee6e9d4183e243e48f44ff25 Mon Sep 17 00:00:00 2001
From: Tim Chen 
Date: Tue, 16 Feb 2021 08:24:39 -0800
Subject: [RFC PATCH] scheduler: Add cluster scheduler level for x86
Cc: Jonathan Cameron, Barry Song

There are x86 CPU architectures (e.g. Jacobsville) where the L2 cache
is shared among a cluster of cores instead of being exclusive
to one single core.

To prevent oversubscription of L2 cache, load should be
balanced between such L2 clusters, especially for tasks with
no shared data.

Also, with a cluster scheduling policy where tasks are woken up
in the same L2 cluster, we benefit from keeping tasks that are
related to each other, and likely sharing data, in the same L2
cluster.

Add CPU masks of CPUs sharing the L2 cache so we can build such
L2 cluster scheduler domain.

Signed-off-by: Tim Chen 
---
 arch/x86/Kconfig|  8 ++
 arch/x86/include/asm/smp.h  |  7 ++
 arch/x86/include/asm/topology.h |  1 +
 arch/x86/kernel/cpu/cacheinfo.c |  1 +
 arch/x86/kernel/cpu/common.c|  3 +++
 arch/x86/kernel/smpboot.c   | 43 -
 6 files changed, 62 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 21f851179ff0..10fc95005df7 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1008,6 +1008,14 @@ config NR_CPUS
  This is purely to save memory: each supported CPU adds about 8KB
  to the kernel image.
 
+config SCHED_CLUSTER
+   bool "Cluster scheduler support"
+   default n
+   help
+Cluster scheduler support improves the CPU scheduler's decision
+making when dealing with machines that have clusters of CPUs
+sharing L2 cache. If unsure say N here.
+
 config SCHED_SMT
def_bool y if SMP
 
diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index c0538f82c9a2..9cbc4ae3078f 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -16,7 +16,9 @@ DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_map);
 DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_die_map);
 /* cpus sharing the last level cache: */
 DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
+DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
 DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id);
+DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id);
 DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
 
 static inline struct cpumask *cpu_llc_shared_mask(int cpu)
@@ -24,6 +26,11 @@ static inline struct cpumask *cpu_llc_shared_mask(int cpu)
return per_cpu(cpu_llc_shared_map, cpu);
 }
 
+static inline struct cpumask *cpu_l2c_shared_mask(int cpu)
+{
+   return per_cpu(cpu_l2c_shared_map, cpu);
+}
+
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_cpu_to_apicid);
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u32, x86_cpu_to_acpiid);
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_bios_cpu_apicid);
diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
index 9239399e5491..2a11ccc14fb1 100644
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -103,6 +103,7 @@ static inline void setup_node_to_cpumask_map(void) { }
 #include 
 
 extern const struct cpumask *cpu_coregroup_mask(int cpu);
+extern const struct cpumask *cpu_clustergroup_mask(int cpu);
 
 #define topology_logical_package_id(cpu)   (cpu_data(cpu).logical_proc_id)
 #define topology_physical_package_id(cpu)  (cpu_data(cpu).phys_proc_id)
diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index 3ca9be482a9e..0d03a71e713e 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -846,6 +846,7 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
l2 = new_l2;
 #ifdef CONFIG_SMP
per_cpu(cpu_llc_id, cpu) = l2_id;
+   per_cpu(cpu_l2c_id, cpu) = l2_id;
 #endif
}
 
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 35ad8480c464..fb08c73d752c 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -78,6 +78,9 @@ EXPORT_SYMBOL(smp_num_siblings);
 /* Last level cache ID of each logical CPU */
 DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id) = BAD_APICID;
 
+/* L2 cache ID of each logical CPU */
+DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id) = BAD_APICID;
+
 /* correctly size the local cpu masks */
 
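The smpboot.c hunks are cut off above; purely to make the intent of the
declarations concrete, here is a sketch of how the new mask and the extra
topology level could be wired up.  Only the names declared in the hunks
above come from the patch -- the flag callbacks (cpu_cluster_flags in
particular) and the array layout are assumptions:

/* Sketch only, not the actual truncated smpboot.c change. */
const struct cpumask *cpu_clustergroup_mask(int cpu)
{
	return cpu_l2c_shared_mask(cpu);	/* CPUs sharing this CPU's L2 */
}

#ifdef CONFIG_SCHED_CLUSTER
/* Extra sched-domain level between SMT and MC, spanning the L2 sharers. */
static struct sched_domain_topology_level x86_l2c_topology[] = {
	{ cpu_smt_mask,		 cpu_smt_flags,		SD_INIT_NAME(SMT) },
	{ cpu_clustergroup_mask, cpu_cluster_flags,	SD_INIT_NAME(CLS) },
	{ cpu_coregroup_mask,	 cpu_core_flags,	SD_INIT_NAME(MC) },
	{ cpu_cpu_mask,		 SD_INIT_NAME(DIE) },
	{ NULL, },
};
#endif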

RE: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-02-03 Thread Song Bao Hua (Barry Song)


> -Original Message-
> From: Tim Chen [mailto:tim.c.c...@linux.intel.com]
> Sent: Friday, January 8, 2021 12:17 PM
> To: Song Bao Hua (Barry Song) ;
> valentin.schnei...@arm.com; catalin.mari...@arm.com; w...@kernel.org;
> r...@rjwysocki.net; vincent.guit...@linaro.org; l...@kernel.org;
> gre...@linuxfoundation.org; Jonathan Cameron ;
> mi...@redhat.com; pet...@infradead.org; juri.le...@redhat.com;
> dietmar.eggem...@arm.com; rost...@goodmis.org; bseg...@google.com;
> mgor...@suse.de; mark.rutl...@arm.com; sudeep.ho...@arm.com;
> aubrey...@linux.intel.com
> Cc: linux-arm-ker...@lists.infradead.org; linux-kernel@vger.kernel.org;
> linux-a...@vger.kernel.org; linux...@openeuler.org; xuwei (O)
> ; Zengtao (B) ; tiantao (H)
> 
> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
> add cluster scheduler
> 
> 
> 
> On 1/6/21 12:30 AM, Barry Song wrote:
> > ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> > cluster has 4 cpus. All clusters share L3 cache data while each cluster
> > has local L3 tag. On the other hand, each cluster will share some
> > internal system bus. This means cache is much more affine inside one cluster
> > than across clusters.
> >
> > [ASCII topology diagram mangled by the archive: each 4-CPU cluster has
> > its own L3 tag slice, while the L3 data array is shared by all clusters
> > in the NUMA node.]

Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-26 Thread Dietmar Eggemann
On 25/01/2021 11:50, Song Bao Hua (Barry Song) wrote:
> 
>> -Original Message-
>> From: Dietmar Eggemann [mailto:dietmar.eggem...@arm.com]
>> Sent: Wednesday, January 13, 2021 12:00 AM
>> To: Morten Rasmussen ; Tim Chen
>> 
>> Cc: Song Bao Hua (Barry Song) ;
>> valentin.schnei...@arm.com; catalin.mari...@arm.com; w...@kernel.org;
>> r...@rjwysocki.net; vincent.guit...@linaro.org; l...@kernel.org;
>> gre...@linuxfoundation.org; Jonathan Cameron ;
>> mi...@redhat.com; pet...@infradead.org; juri.le...@redhat.com;
>> rost...@goodmis.org; bseg...@google.com; mgor...@suse.de;
>> mark.rutl...@arm.com; sudeep.ho...@arm.com; aubrey...@linux.intel.com;
>> linux-arm-ker...@lists.infradead.org; linux-kernel@vger.kernel.org;
>> linux-a...@vger.kernel.org; linux...@openeuler.org; xuwei (O)
>> ; Zengtao (B) ; tiantao (H)
>> 
>> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters 
>> and
>> add cluster scheduler
>>
>> On 11/01/2021 10:28, Morten Rasmussen wrote:
>>> On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
>>>>
>>>>
>>>> On 1/8/21 7:12 AM, Morten Rasmussen wrote:
>>>>> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
>>>>>> On 1/6/21 12:30 AM, Barry Song wrote:

[...]

>> wake_wide() switches between packing (select_idle_sibling(), llc_size
>> CPUs) and spreading (find_idlest_cpu(), all CPUs).
>>
>> AFAICS, since none of the sched domains set SD_BALANCE_WAKE, currently
>> all wakeups are (llc-)packed.
> 
> Sorry for late response. I was struggling with some other topology
> issues recently.
> 
> For "all wakeups are (llc-)packed",
> it seems you mean current want_affine is only affecting the new_cpu,
> and for wake-up path, we will always go to select_idle_sibling() rather
> than find_idlest_cpu() since nobody sets SD_WAKE_BALANCE in any
> sched_domain ?
> 
>>
>>  select_task_rq_fair()
>>
>>for_each_domain(cpu, tmp)
>>
>>  if (tmp->flags & sd_flag)
>>sd = tmp;
>>
>>
>> In case we would like to further distinguish between llc-packing and
>> even narrower (cluster or MC-L2)-packing, we would introduce a 2. level
>> packing vs. spreading heuristic further down in sis().
> 
> I didn't get your point on "2 level packing". Would you like
> to describe more? It seems you mean we need to have separate
> calculation for avg_scan_cost and sched_feat(SIS_) for cluster
> (or MC-L2) since cluster and llc are not in the same level
> physically?

By '1st-level packing' I meant going through sis() (i.e. sd =
per_cpu(sd_llc, target)) instead of routing WF_TTWU through
find_idlest_cpu(), which uses a broader sd span (in case all sd's, or at
least any sd above the llc, had SD_BALANCE_WAKE set).
wake_wide() (the wakee/waker flip heuristic) is currently used to make
this decision. But since no sd sets SD_BALANCE_WAKE, we always go
through sis() for WF_TTWU.

'2nd-level packing' would be the decision between cluster- and
llc-packing. The question was which heuristic could be used there.

>> IMHO, Barry's current implementation doesn't do this right now. Instead
>> he's trying to pack on cluster first and if not successful look further
>> among the remaining llc CPUs for an idle CPU.
> 
> Yes. That is exactly what the current patch is doing.

And this will be favoring cluster- over llc-packing for each task instead.

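One conceivable shape for such a 2nd-level heuristic (purely illustrative,
not existing code): reuse the wakee-flip idea from wake_wide(), but compare
the flip counts against the cluster size instead of llc_size, so only
tightly coupled waker/wakee pairs get cluster-packed and everything else
keeps today's llc-wide search.  ->wakee_flips is the existing task_struct
field; the helper and its cluster_weight parameter are assumptions:

/*
 * Illustrative only: prefer cluster-packing when the waker/wakee flip
 * counts suggest a tight, roughly 1:1 wakeup relationship relative to
 * the number of CPUs in the cluster.
 */
static bool prefer_cluster_packing(struct task_struct *p,
				   unsigned int cluster_weight)
{
	return current->wakee_flips < cluster_weight &&
	       p->wakee_flips < cluster_weight;
}
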

RE: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-25 Thread Song Bao Hua (Barry Song)


> -Original Message-
> From: Dietmar Eggemann [mailto:dietmar.eggem...@arm.com]
> Sent: Wednesday, January 13, 2021 1:53 AM
> To: Song Bao Hua (Barry Song) ; Morten Rasmussen
> ; Tim Chen 
> Cc: valentin.schnei...@arm.com; catalin.mari...@arm.com; w...@kernel.org;
> r...@rjwysocki.net; vincent.guit...@linaro.org; l...@kernel.org;
> gre...@linuxfoundation.org; Jonathan Cameron ;
> mi...@redhat.com; pet...@infradead.org; juri.le...@redhat.com;
> rost...@goodmis.org; bseg...@google.com; mgor...@suse.de;
> mark.rutl...@arm.com; sudeep.ho...@arm.com; aubrey...@linux.intel.com;
> linux-arm-ker...@lists.infradead.org; linux-kernel@vger.kernel.org;
> linux-a...@vger.kernel.org; linux...@openeuler.org; xuwei (O)
> ; Zengtao (B) ; tiantao (H)
> 
> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
> add cluster scheduler
> 
> On 08/01/2021 22:30, Song Bao Hua (Barry Song) wrote:
> >
> >> -Original Message-
> >> From: Morten Rasmussen [mailto:morten.rasmus...@arm.com]
> >> Sent: Saturday, January 9, 2021 4:13 AM
> >> To: Tim Chen 
> >> Cc: Song Bao Hua (Barry Song) ;
> >> valentin.schnei...@arm.com; catalin.mari...@arm.com; w...@kernel.org;
> >> r...@rjwysocki.net; vincent.guit...@linaro.org; l...@kernel.org;
> >> gre...@linuxfoundation.org; Jonathan Cameron
> ;
> >> mi...@redhat.com; pet...@infradead.org; juri.le...@redhat.com;
> >> dietmar.eggem...@arm.com; rost...@goodmis.org; bseg...@google.com;
> >> mgor...@suse.de; mark.rutl...@arm.com; sudeep.ho...@arm.com;
> >> aubrey...@linux.intel.com; linux-arm-ker...@lists.infradead.org;
> >> linux-kernel@vger.kernel.org; linux-a...@vger.kernel.org;
> >> linux...@openeuler.org; xuwei (O) ; Zengtao (B)
> >> ; tiantao (H) 
> >> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters
> and
> >> add cluster scheduler
> >>
> >> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
> >>> On 1/6/21 12:30 AM, Barry Song wrote:
> >>>> ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> >>>> cluster has 4 cpus. All clusters share L3 cache data while each cluster
> >>>> has local L3 tag. On the other hand, each cluster will share some
> >>>> internal system bus. This means cache is much more affine inside one 
> >>>> cluster
> >>>> than across clusters.
> >>>
> >>> There is a similar need for clustering in x86.  Some x86 cores could share
> >> L2 caches that
> >>> is similar to the cluster in Kupeng 920 (e.g. on Jacobsville there are 6
> clusters
> >>> of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing
> >> L3).
> >>> Having a sched domain at the L2 cluster helps spread load among
> >>> L2 domains.  This will reduce L2 cache contention and help with
> >>> performance for low to moderate load scenarios.
> >>
> >> IIUC, you are arguing for the exact opposite behaviour, i.e. balancing
> >> between L2 caches while Barry is after consolidating tasks within the
> >> boundaries of a L3 tag cache. One helps cache utilization, the other
> >> communication latency between tasks. Am I missing something?
> >
> > Morten, this is not true.
> >
> > we are both actually looking for the same behavior. My patch also
> > has done the exact same behavior of spreading with Tim's patch.
> 
> That's the case for the load-balance path because of the extra Sched
> Domain (SD) (CLS/MC_L2) below MC.
> 
> But in wakeup you add code which leads to a different packing strategy.

Yes, but I put a note for the 1st case:
"Case 1. we have two tasks *without* any relationship running in a system
with 2 clusters and 8 cpus"

so for tasks without wake-up relationship, the current patch will only
result in spreading.

Anyway, I will also test Tim's benchmark on Kunpeng 920 with SCHED_CLUSTER
to see what happens. Till now, the benchmarks have only covered the benefit
of changing the wake-up path; I would also be interested in figuring out
what we gain from the load_balance() change.

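With CONFIG_SCHED_DEBUG, the presence of the extra cluster level below MC
can be verified directly from the sched_domain debug interface, e.g. (on
newer kernels these files live under /sys/kernel/debug/sched/ instead):

  # Lists the sched domain names seen by CPU0; a cluster level should
  # show up below MC once SCHED_CLUSTER is enabled.
  cat /proc/sys/kernel/sched_domain/cpu0/domain*/name
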
> 
> It looks like that Tim's workload (SPECrate mcf) shows a performance
> boost solely because of the changes the additional MC_L2 SD introduces
> in load balance. The wakeup path is unchanged, i.e. llc-packing. IMHO we
> have to carefully distinguish between packing vs. spreading in wakeup
> and load-balance here.
> 
> > Considering the below two cases:
> >

RE: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-25 Thread Song Bao Hua (Barry Song)


> -Original Message-
> From: Dietmar Eggemann [mailto:dietmar.eggem...@arm.com]
> Sent: Wednesday, January 13, 2021 12:00 AM
> To: Morten Rasmussen ; Tim Chen
> 
> Cc: Song Bao Hua (Barry Song) ;
> valentin.schnei...@arm.com; catalin.mari...@arm.com; w...@kernel.org;
> r...@rjwysocki.net; vincent.guit...@linaro.org; l...@kernel.org;
> gre...@linuxfoundation.org; Jonathan Cameron ;
> mi...@redhat.com; pet...@infradead.org; juri.le...@redhat.com;
> rost...@goodmis.org; bseg...@google.com; mgor...@suse.de;
> mark.rutl...@arm.com; sudeep.ho...@arm.com; aubrey...@linux.intel.com;
> linux-arm-ker...@lists.infradead.org; linux-kernel@vger.kernel.org;
> linux-a...@vger.kernel.org; linux...@openeuler.org; xuwei (O)
> ; Zengtao (B) ; tiantao (H)
> 
> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
> add cluster scheduler
> 
> On 11/01/2021 10:28, Morten Rasmussen wrote:
> > On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
> >>
> >>
> >> On 1/8/21 7:12 AM, Morten Rasmussen wrote:
> >>> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
> >>>> On 1/6/21 12:30 AM, Barry Song wrote:
> 
> [...]
> 
> >> I think it is going to depend on the workload.  If there are dependent
> >> tasks that communicate with one another, putting them together
> >> in the same cluster will be the right thing to do to reduce communication
> >> costs.  On the other hand, if the tasks are independent, putting them
> >> together on the same cluster
> >> will increase resource contention and spreading them out will be better.
> >
> > Agree. That is exactly where I'm coming from. This is all about the task
> > placement policy. We generally tend to spread tasks to avoid resource
> > contention, SMT and caches, which seems to be what you are proposing to
> > extend. I think that makes sense given it can produce significant
> > benefits.
> >
> >>
> >> Any thoughts on what is the right clustering "tag" to use to clump
> >> related tasks together?
> >> Cgroup? Pid? Tasks with same mm?
> >
> > I think this is the real question. I think the closest thing we have at
> > the moment is the wakee/waker flip heuristic. This seems to be related.
> > Perhaps the wake_affine tricks can serve as starting point?
> 
> wake_wide() switches between packing (select_idle_sibling(), llc_size
> CPUs) and spreading (find_idlest_cpu(), all CPUs).
> 
> AFAICS, since none of the sched domains set SD_BALANCE_WAKE, currently
> all wakeups are (llc-)packed.

Sorry for the late response. I was struggling with some other topology
issues recently.

For "all wakeups are (llc-)packed",
it seems you mean the current want_affine only affects new_cpu,
and for the wake-up path we will always go to select_idle_sibling() rather
than find_idlest_cpu(), since nobody sets SD_BALANCE_WAKE in any
sched_domain?

> 
>  select_task_rq_fair()
> 
>for_each_domain(cpu, tmp)
> 
>  if (tmp->flags & sd_flag)
>sd = tmp;
> 
> 
> In case we would like to further distinguish between llc-packing and
> even narrower (cluster or MC-L2)-packing, we would introduce a 2. level
> packing vs. spreading heuristic further down in sis().

I didn't get your point on "2nd-level packing". Could you describe it in
more detail? It seems you mean we need separate calculations for
avg_scan_cost and sched_feat(SIS_) for the cluster (or MC-L2) level,
since cluster and llc are not at the same physical level?

> 
> IMHO, Barry's current implementation doesn't do this right now. Instead
> he's trying to pack on cluster first and if not successful look further
> among the remaining llc CPUs for an idle CPU.

Yes. That is exactly what the current patch is doing.

Thanks
Barry


Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-12 Thread Dietmar Eggemann
On 08/01/2021 22:30, Song Bao Hua (Barry Song) wrote:
>  
>> -Original Message-
>> From: Morten Rasmussen [mailto:morten.rasmus...@arm.com]
>> Sent: Saturday, January 9, 2021 4:13 AM
>> To: Tim Chen 
>> Cc: Song Bao Hua (Barry Song) ;
>> valentin.schnei...@arm.com; catalin.mari...@arm.com; w...@kernel.org;
>> r...@rjwysocki.net; vincent.guit...@linaro.org; l...@kernel.org;
>> gre...@linuxfoundation.org; Jonathan Cameron ;
>> mi...@redhat.com; pet...@infradead.org; juri.le...@redhat.com;
>> dietmar.eggem...@arm.com; rost...@goodmis.org; bseg...@google.com;
>> mgor...@suse.de; mark.rutl...@arm.com; sudeep.ho...@arm.com;
>> aubrey...@linux.intel.com; linux-arm-ker...@lists.infradead.org;
>> linux-kernel@vger.kernel.org; linux-a...@vger.kernel.org;
>> linux...@openeuler.org; xuwei (O) ; Zengtao (B)
>> ; tiantao (H) 
>> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters 
>> and
>> add cluster scheduler
>>
>> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
>>> On 1/6/21 12:30 AM, Barry Song wrote:
>>>> ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
>>>> cluster has 4 cpus. All clusters share L3 cache data while each cluster
>>>> has local L3 tag. On the other hand, each cluster will share some
>>>> internal system bus. This means cache is much more affine inside one 
>>>> cluster
>>>> than across clusters.
>>>
>>> There is a similar need for clustering in x86.  Some x86 cores could share
>> L2 caches that
>>> is similar to the cluster in Kupeng 920 (e.g. on Jacobsville there are 6 
>>> clusters
>>> of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing
>> L3).
>>> Having a sched domain at the L2 cluster helps spread load among
>>> L2 domains.  This will reduce L2 cache contention and help with
>>> performance for low to moderate load scenarios.
>>
>> IIUC, you are arguing for the exact opposite behaviour, i.e. balancing
>> between L2 caches while Barry is after consolidating tasks within the
>> boundaries of a L3 tag cache. One helps cache utilization, the other
>> communication latency between tasks. Am I missing something?
> 
> Morten, this is not true.
> 
> we are both actually looking for the same behavior. My patch also
> has done the exact same behavior of spreading with Tim's patch.

That's the case for the load-balance path because of the extra Sched
Domain (SD) (CLS/MC_L2) below MC.

But in wakeup you add code which leads to a different packing strategy.

It looks like that Tim's workload (SPECrate mcf) shows a performance
boost solely because of the changes the additional MC_L2 SD introduces
in load balance. The wakeup path is unchanged, i.e. llc-packing. IMHO we
have to carefully distinguish between packing vs. spreading in wakeup
and load-balance here.

> Considering the below two cases:
> Case 1. we have two tasks without any relationship running in a system with 2 
> clusters and 8 cpus.
> 
> Without the sched_domain of cluster, these two tasks might be put as below:
> +---++-+
> | ++   ++   || |
> | |task|   |task|   || |
> | |1   |   |2   |   || |
> | ++   ++   || |
> |   || |
> |   cluster1|| cluster2|
> +---++-+
> With the sched_domain of cluster, load balance will spread them as below:
> +---++-+
> | ++|| ++  |
> | |task||| |task|  |
> | |1   ||| |2   |  |
> | ++|| ++  |
> |   || |
> |   cluster1|| cluster2|
> +---++-+
> 
> Then task1 and tasks2 get more cache and decrease cache contention.
> They will get better performance.
> 
> That is what my original patch also can make. And tim's patch
> is also doing. Once we add a sched_domain, load balance will
> get involved.
> 
> 
> Case 2. we have 8 tasks, running in a system with 2 clusters and 8 cpus.
> But they are working in 4 groups:
> Task1 wakes up task4
> Task2 wakes up task5
> Task3 wakes up task6
> Task4 wakes up task7
> 
> With my changing in select_idle_sibling, the WAKE_AFFINE mechanism will
> try to put task1 an

Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-12 Thread Dietmar Eggemann
On 11/01/2021 10:28, Morten Rasmussen wrote:
> On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
>>
>>
>> On 1/8/21 7:12 AM, Morten Rasmussen wrote:
>>> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
 On 1/6/21 12:30 AM, Barry Song wrote:

[...]

>> I think it is going to depend on the workload.  If there are dependent
>> tasks that communicate with one another, putting them together
>> in the same cluster will be the right thing to do to reduce communication
>> costs.  On the other hand, if the tasks are independent, putting them 
>> together on the same cluster
>> will increase resource contention and spreading them out will be better.
> 
> Agree. That is exactly where I'm coming from. This is all about the task
> placement policy. We generally tend to spread tasks to avoid resource
> contention, SMT and caches, which seems to be what you are proposing to
> extend. I think that makes sense given it can produce significant
> benefits.
> 
>>
>> Any thoughts on what is the right clustering "tag" to use to clump
>> related tasks together?
>> Cgroup? Pid? Tasks with same mm?
> 
> I think this is the real question. I think the closest thing we have at
> the moment is the wakee/waker flip heuristic. This seems to be related.
> Perhaps the wake_affine tricks can serve as starting point?

wake_wide() switches between packing (select_idle_sibling(), llc_size
CPUs) and spreading (find_idlest_cpu(), all CPUs).

AFAICS, since none of the sched domains set SD_BALANCE_WAKE, currently
all wakeups are (llc-)packed.

 select_task_rq_fair()

   for_each_domain(cpu, tmp)

 if (tmp->flags & sd_flag)
   sd = tmp;


In case we would like to further distinguish between llc-packing and
even narrower (cluster or MC-L2) packing, we would introduce a 2nd-level
packing vs. spreading heuristic further down in sis().

IMHO, Barry's current implementation doesn't do this right now. Instead
he's trying to pack on the cluster first and, if not successful, look
further among the remaining llc CPUs for an idle CPU.

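For reference, the wakeup placement being discussed is roughly the
following, heavily condensed from kernel/sched/fair.c (a paraphrase, not
a literal copy; error handling and the wake_affine() load comparison are
omitted):

/*
 * Condensed paraphrase of the fair-class wakeup path: wake_wide()
 * decides whether we even consider packing near the waker, and because
 * no sched domain sets SD_BALANCE_WAKE the slow find_idlest_cpu() path
 * is never taken for wakeups -- we always end up scanning the LLC in
 * select_idle_sibling().
 */
static int wakeup_placement_sketch(struct task_struct *p, int prev_cpu,
				   int waker_cpu)
{
	bool want_affine = !wake_wide(p) &&
			   cpumask_test_cpu(waker_cpu, p->cpus_ptr);

	return select_idle_sibling(p, prev_cpu,
				   want_affine ? waker_cpu : prev_cpu);
}
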

Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-11 Thread Morten Rasmussen
On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
> 
> 
> On 1/8/21 7:12 AM, Morten Rasmussen wrote:
> > On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
> >> On 1/6/21 12:30 AM, Barry Song wrote:
> >>> ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> >>> cluster has 4 cpus. All clusters share L3 cache data while each cluster
> >>> has local L3 tag. On the other hand, each cluster will share some
> >>> internal system bus. This means cache is much more affine inside one 
> >>> cluster
> >>> than across clusters.
> >>
> >> There is a similar need for clustering in x86.  Some x86 cores could share 
> >> L2 caches that
> >> is similar to the cluster in Kupeng 920 (e.g. on Jacobsville there are 6 
> >> clusters
> >> of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing 
> >> L3).  
> >> Having a sched domain at the L2 cluster helps spread load among 
> >> L2 domains.  This will reduce L2 cache contention and help with
> >> performance for low to moderate load scenarios.
> > 
> > IIUC, you are arguing for the exact opposite behaviour, i.e. balancing
> > between L2 caches while Barry is after consolidating tasks within the
> > boundaries of a L3 tag cache. One helps cache utilization, the other
> > communication latency between tasks. Am I missing something? 
> > 
> > IMHO, we need some numbers on the table to say which way to go. Looking
> > at just benchmarks of one type doesn't show that this is a good idea in
> > general.
> > 
> 
> I think it is going to depend on the workload.  If there are dependent
> tasks that communicate with one another, putting them together
> in the same cluster will be the right thing to do to reduce communication
> costs.  On the other hand, if the tasks are independent, putting them 
> together on the same cluster
> will increase resource contention and spreading them out will be better.

Agree. That is exactly where I'm coming from. This is all about the task
placement policy. We generally tend to spread tasks to avoid resource
contention, SMT and caches, which seems to be what you are proposing to
extend. I think that makes sense given it can produce significant
benefits.

> 
> Any thoughts on what is the right clustering "tag" to use to clump
> related tasks together?
> Cgroup? Pid? Tasks with same mm?

I think this is the real question. I think the closest thing we have at
the moment is the wakee/waker flip heuristic. This seems to be related.
Perhaps the wake_affine tricks can serve as starting point?

Morten


RE: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-08 Thread Song Bao Hua (Barry Song)



> -Original Message-
> From: Morten Rasmussen [mailto:morten.rasmus...@arm.com]
> Sent: Saturday, January 9, 2021 4:13 AM
> To: Tim Chen 
> Cc: Song Bao Hua (Barry Song) ;
> valentin.schnei...@arm.com; catalin.mari...@arm.com; w...@kernel.org;
> r...@rjwysocki.net; vincent.guit...@linaro.org; l...@kernel.org;
> gre...@linuxfoundation.org; Jonathan Cameron ;
> mi...@redhat.com; pet...@infradead.org; juri.le...@redhat.com;
> dietmar.eggem...@arm.com; rost...@goodmis.org; bseg...@google.com;
> mgor...@suse.de; mark.rutl...@arm.com; sudeep.ho...@arm.com;
> aubrey...@linux.intel.com; linux-arm-ker...@lists.infradead.org;
> linux-kernel@vger.kernel.org; linux-a...@vger.kernel.org;
> linux...@openeuler.org; xuwei (O) ; Zengtao (B)
> ; tiantao (H) 
> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
> add cluster scheduler
> 
> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
> > On 1/6/21 12:30 AM, Barry Song wrote:
> > > ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> > > cluster has 4 cpus. All clusters share L3 cache data while each cluster
> > > has local L3 tag. On the other hand, each cluster will share some
> > > internal system bus. This means cache is much more affine inside one 
> > > cluster
> > > than across clusters.
> >
> > There is a similar need for clustering in x86.  Some x86 cores could share
> L2 caches that
> > is similar to the cluster in Kupeng 920 (e.g. on Jacobsville there are 6 
> > clusters
> > of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing
> L3).
> > Having a sched domain at the L2 cluster helps spread load among
> > L2 domains.  This will reduce L2 cache contention and help with
> > performance for low to moderate load scenarios.
> 
> IIUC, you are arguing for the exact opposite behaviour, i.e. balancing
> between L2 caches while Barry is after consolidating tasks within the
> boundaries of a L3 tag cache. One helps cache utilization, the other
> communication latency between tasks. Am I missing something?

Morten, this is not true.

We are both actually looking for the same behavior. My patch does the
exact same spreading that Tim's patch does.

Considering the below two cases:
Case 1. we have two tasks without any relationship running in a system with 2 
clusters and 8 cpus.

Without the sched_domain of cluster, these two tasks might be put as below:
+---++-+
| ++   ++   || |
| |task|   |task|   || |
| |1   |   |2   |   || |
| ++   ++   || |
|   || |
|   cluster1|| cluster2|
+---++-+
With the sched_domain of cluster, load balance will spread them as below:
+---++-+
| ++|| ++  |
| |task||| |task|  |
| |1   ||| |2   |  |
| ++|| ++  |
|   || |
|   cluster1|| cluster2|
+---++-+

Then task1 and task2 get more cache and less cache contention, so they
will get better performance.

That is what my original patch achieves as well, and what Tim's patch
does too. Once we add a sched_domain, load balance gets involved.


Case 2. we have 8 tasks, running in a system with 2 clusters and 8 cpus.
But they are working in 4 groups:
Task1 wakes up task4
Task2 wakes up task5
Task3 wakes up task6
Task4 wakes up task7

With my change in select_idle_sibling, the WAKE_AFFINE mechanism will
try to put task1 and 4, task2 and 5, task3 and 6, task4 and 7 in the same
clusters rather than on random CPUs among the 8. However, the 8 tasks are
still spread among the 8 cpus with my change in select_idle_sibling,
as load balance is still working.

[ASCII diagram, truncated in the archive: each waker/wakee pair (e.g.
task1 with task4, task2 with task5) lands on the same cluster, while the
pairs themselves are still spread across the two clusters by load balance.]

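The wakeup-side change that produces this placement is, in outline, a
cluster-first pass in front of the existing LLC scan; sketched below with
an assumed cpu_cluster_mask() helper (available_idle_cpu() and
for_each_cpu_and() are existing kernel interfaces, the rest is not the
literal patch):

/*
 * Outline only: look for an idle CPU in the target's cluster before
 * falling back to the usual LLC-wide scan (signalled by returning -1).
 */
static int select_idle_cluster(struct task_struct *p, int target)
{
	int cpu;

	for_each_cpu_and(cpu, cpu_cluster_mask(target), p->cpus_ptr) {
		if (available_idle_cpu(cpu))
			return cpu;
	}

	return -1;	/* nothing idle in the cluster: scan the LLC */
}
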
Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-08 Thread Tim Chen



On 1/8/21 7:12 AM, Morten Rasmussen wrote:
> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
>> On 1/6/21 12:30 AM, Barry Song wrote:
>>> ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
>>> cluster has 4 cpus. All clusters share L3 cache data while each cluster
>>> has local L3 tag. On the other hand, each cluster will share some
>>> internal system bus. This means cache is much more affine inside one cluster
>>> than across clusters.
>>
>> There is a similar need for clustering in x86.  Some x86 cores could share 
>> L2 caches that
>> is similar to the cluster in Kupeng 920 (e.g. on Jacobsville there are 6 
>> clusters
>> of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing 
>> L3).  
>> Having a sched domain at the L2 cluster helps spread load among 
>> L2 domains.  This will reduce L2 cache contention and help with
>> performance for low to moderate load scenarios.
> 
> IIUC, you are arguing for the exact opposite behaviour, i.e. balancing
> between L2 caches while Barry is after consolidating tasks within the
> boundaries of a L3 tag cache. One helps cache utilization, the other
> communication latency between tasks. Am I missing something? 
> 
> IMHO, we need some numbers on the table to say which way to go. Looking
> at just benchmarks of one type doesn't show that this is a good idea in
> general.
> 

I think it is going to depend on the workload.  If there are dependent
tasks that communicate with one another, putting them together
in the same cluster will be the right thing to do to reduce communication
costs.  On the other hand, if the tasks are independent, putting them
together on the same cluster will increase resource contention and
spreading them out will be better.

Any thoughts on what is the right clustering "tag" to use to clump
related tasks together?
Cgroup? Pid? Tasks with same mm?

Tim 


Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-08 Thread Morten Rasmussen
On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
> On 1/6/21 12:30 AM, Barry Song wrote:
> > ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> > cluster has 4 cpus. All clusters share L3 cache data while each cluster
> > has local L3 tag. On the other hand, each cluster will share some
> > internal system bus. This means cache is much more affine inside one cluster
> > than across clusters.
> 
> There is a similar need for clustering in x86.  Some x86 cores could share L2 
> caches that
> is similar to the cluster in Kupeng 920 (e.g. on Jacobsville there are 6 
> clusters
> of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing 
> L3).  
> Having a sched domain at the L2 cluster helps spread load among 
> L2 domains.  This will reduce L2 cache contention and help with
> performance for low to moderate load scenarios.

IIUC, you are arguing for the exact opposite behaviour, i.e. balancing
between L2 caches while Barry is after consolidating tasks within the
boundaries of a L3 tag cache. One helps cache utilization, the other
communication latency between tasks. Am I missing something? 

IMHO, we need some numbers on the table to say which way to go. Looking
at just benchmarks of one type doesn't show that this is a good idea in
general.

Morten


Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-07 Thread Tim Chen



On 1/6/21 12:30 AM, Barry Song wrote:
> ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> cluster has 4 cpus. All clusters share L3 cache data while each cluster
> has local L3 tag. On the other hand, each cluster will share some
> internal system bus. This means cache is much more affine inside one cluster
> than across clusters.
> 
> [ASCII topology diagram mangled by the archive: each 4-CPU cluster has
> its own L3 tag slice, while the L3 data array is shared by all clusters
> in the NUMA node.]
> 

There is a similar need for clustering in x86.  Some x86 cores share L2
caches in a way similar to the clusters in Kunpeng 920 (e.g. on
Jacobsville there are 6 clusters of 4 Atom cores, each cluster sharing a
separate L2, and 24 cores sharing L3).
Having a sched domain at the L2 cluster level helps spread load among
the L2 domains.  This will reduce L2 cache contention and help with
performance for low to moderate load scenarios.

The cluster detection mechanism will need
to be based on L2 cache sharing in this case.  I suggest making the
cluster detection CPU-architecture dependent so that both the ARM64 and
x86 use cases can be accommodated.

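For what it's worth, the L2/L3 sharing such detection would key off is
already visible through the standard sysfs cacheinfo interface, which is
an easy way to sanity-check the masks a given machine would end up with:

  # Per cache level of CPU0, show which CPUs share it (machine dependent).
  grep . /sys/devices/system/cpu/cpu0/cache/index*/level
  grep . /sys/devices/system/cpu/cpu0/cache/index*/shared_cpu_list
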
Attached below are two RFC patches for creating an x86 L2
cache sched domain, sans the idle cpu selection on wake-up code.  It is
similar enough in concept to Barry's patch that we should have a
single patchset that accommodates both use cases.

Thanks.

Tim


From e0e7e42e1a033c9634723ff1dc80b426deeec1e9 Mon Sep 17 00:00:00 2001
From: Tim Chen 
Date: Wed, 19 Aug 2020