RE: [RFC PATCH 1/2] sched/topology: Get rid of NUMA overlapping groups

2021-02-09 Thread Valentin Schneider
On 09/02/21 00:12, Song Bao Hua (Barry Song) wrote:
>> -Original Message-
>> From: Valentin Schneider [mailto:valentin.schnei...@arm.com]
>>
>> Yes; let's take your topology for instance:
>>
>> node   0   1   2   3
>> 0:  10  12  20  22
>> 1:  12  10  22  24
>> 2:  20  22  10  12
>> 3:  22  24  12  10
>>
>>   2   10  2
>>   0 <---> 1 <---> 2 <---> 3
>
> Guess you actually mean
>2   10  2
>1 <---> 0 <---> 2 <---> 3
>

Yeah, you're right, sorry about that!

>>
>>
>> Domains for node1 will look like (before any fixes are applied):
>>
>> NUMA<=10: span=1   groups=(1)
>> NUMA<=12: span=0-1 groups=(1)->(0)
>> NUMA<=20: span=0-1 groups=(0,1)
>> NUMA<=22: span=0-2 groups=(0,1)->(0,2-3)
>> NUMA<=24: span=0-3 groups=(0-2)->(0,2-3)
>>
>> As you can see, the domain representing distance <= 20 will be degenerated
>> (it has a single group). If we were to e.g. add some more nodes to the left
>> of node0, then we would trigger the "grandchildren logic" for node1 and
>> would end up creating a reference to node1 NUMA<=20's sgc, which is a
>> mistake: that domain will be degenerated, and that sgc will never be
>> updated. The right thing to do here would be reference node1 NUMA<=12's
>> sgc, which the above snippet does.
>
> Guess I got your point even though the diagram is not correct :-)
>

Good!

> If the topology is as below (adding a node to the left of node1 rather
> than node0):
>
> 9   2   10  2
> A <---> 1 <---> 0 <---> 2 <---> 3
>
> For nodeA,
> NUMA<=10: span=A   groups=(A)
> NUMA<=12: span=A   groups=(A)
> NUMA<=19: span=A-1 groups=(A)->(1)
> NUMA<=20: span=A-1 groups=(A,1)
> *1 NUMA<=21: span=A-1-0 groups=(A,1), node0's NUMA<=20
>
> For node0,
> NUMA<=10: span=0   groups=(0)
> #3 NUMA<=12: span=0-1 groups=(0)->(1)
> #2 NUMA<=19: span=0-1 groups=(0,1)
> #1 NUMA<=20: span=0-1-2 groups=(0,1)
>
> *1 will first try #1, but finds node 2 is outside A-1-0,
> so it then tries #2. Finally #2 will be degenerated, so we
> should actually use #3. Amazing!
>

Bingo!
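
A rough sketch of that #1 -> #2 -> #3 walk (illustrative only, not the
actual v2 patch code; the helper name and signature are made up):

	static struct sched_domain *
	find_usable_sibling(struct sched_domain *sibling,
			    const struct cpumask *parent_span)
	{
		/* #1 -> #2: step down while the candidate span leaks
		 * outside the parent domain's span. */
		while (sibling->child &&
		       !cpumask_subset(sched_domain_span(sibling), parent_span))
			sibling = sibling->child;

		/* #2 -> #3: skip levels that will be degenerated, i.e.
		 * whose span equals their child's span. */
		while (sibling->child &&
		       cpumask_equal(sched_domain_span(sibling),
				     sched_domain_span(sibling->child)))
			sibling = sibling->child;

		return sibling;
	}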

>>
>> >> +
>> >> + return parent;
>> >> +}
>> >> +
>
> Thanks
> Barry


RE: [RFC PATCH 1/2] sched/topology: Get rid of NUMA overlapping groups

2021-02-08 Thread Valentin Schneider
Hi Barry,

On 08/02/21 10:04, Song Bao Hua (Barry Song) wrote:
>> -Original Message-
>> From: Valentin Schneider [mailto:valentin.schnei...@arm.com]

>
> Hi Valentin,
>
> While I like your approach, it will require more time
> to evaluate its possible influence, as it also affects
> all machines without the 3-hops issue. So x86 platforms need to
> be tested and benchmarks are required.
>
> What about we first finish the review of the "grandchild" approach
> in v2, and have a solution for kunpeng920 and Sun Fire X4600-M2
> that doesn't impact other machines without the 3-hops issue?
>

I figured I'd toss this out while the iron was hot (and I had the topology
crud paged in), but I ultimately agree that it's better to first go with
something that fixes the diameter > 2 topologies and leaves the other ones
untouched, which is exactly what you have.

> I would very much appreciate it if you could comment on v2:
> https://lore.kernel.org/lkml/20210203111201.20720-1-song.bao.hua@hisilicon.com/
>

See my comment below on domain degeneration; with that taken care of I
would say it's good to go. Have a look at what patch1+patch3 squashed
together looks like; passing the right sd to init_overlap_sched_group()
looks a bit neater IMO.
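
As a hedged sketch, using the helper names mainline had at the time and
with find_usable_sibling() standing in for whatever the squashed walk ends
up being called, the call site in build_overlap_sched_groups() could look
like:

	sibling = find_usable_sibling(*per_cpu_ptr(sdd->sd, i), span);

	sg = build_group_from_child_sched_domain(sibling, cpu);
	if (!sg)
		goto fail;

	init_overlap_sched_group(sibling, sg);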

>> +static struct sched_domain *find_node_domain(struct sched_domain *sd)
>> +{
>> +	struct sched_domain *parent;
>> +
>> +	BUG_ON(!(sd->flags & SD_NUMA));
>> +
>> +	/* Get to the level above NODE */
>> +	while (sd && sd->child) {
>> +		parent = sd;
>> +		sd = sd->child;
>> +
>> +		if (!(sd->flags & SD_NUMA))
>> +			break;
>> +	}
>> +	/*
>> +	 * We're going to create cross topology level sched_group_capacity
>> +	 * references. This can only work if the domains resulting from said
>> +	 * levels won't be degenerated, as we need said sgc to be periodically
>> +	 * updated: it needs to be attached to the local group of a domain
>> +	 * that didn't get degenerated.
>> +	 *
>> +	 * Of course, groups aren't available yet, so we can't call the usual
>> +	 * sd_degenerate(). Checking domain spans is the closest we get.
>> +	 * Start from NODE's parent, and keep going up until we get a domain
>> +	 * we're sure won't be degenerated.
>> +	 */
>> +	while (sd->parent &&
>> +	       cpumask_equal(sched_domain_span(sd), sched_domain_span(parent))) {
>> +		sd = parent;
>> +		parent = sd->parent;
>> +	}
>
> So this is because the sched_domain which doesn't contribute to scheduler
> will be destroyed during cpu_attach_domain() since sd and parent span
> the seam mask?
>

Yes; let's take your topology for instance:

node   0   1   2   3
0:  10  12  20  22
1:  12  10  22  24
2:  20  22  10  12
3:  22  24  12  10

  2   10  2
  0 <---> 1 <---> 2 <---> 3


Domains for node1 will look like (before any fixes are applied):

NUMA<=10: span=1   groups=(1)
NUMA<=12: span=0-1 groups=(1)->(0)
NUMA<=20: span=0-1 groups=(0,1)
NUMA<=22: span=0-2 groups=(0,1)->(0,2-3)
NUMA<=24: span=0-3 groups=(0-2)->(0,2-3)

As you can see, the domain representing distance <= 20 will be degenerated
(it has a single group). If we were to e.g. add some more nodes to the left
of node0, then we would trigger the "grandchildren logic" for node1 and
would end up creating a reference to node1 NUMA<=20's sgc, which is a
mistake: that domain will be degenerated, and that sgc will never be
updated. The right thing to do here would be reference node1 NUMA<=12's
sgc, which the above snippet does.

>> +
>> +	return parent;
>> +}
>> +
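
To make the degeneration check concrete, here is a tiny standalone
simulation of the second loop above, run over node1's domain stack (spans
encoded as strings; purely illustrative, and the non-NUMA base level is
assumed to span just node1):

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		const char *name[] = { "BASE", "NUMA<=10", "NUMA<=12",
				       "NUMA<=20", "NUMA<=22", "NUMA<=24" };
		const char *span[] = { "1", "1", "0-1", "0-1", "0-2", "0-3" };
		int sd = 0, parent = 1, nr = 6;

		/* Climb while a level's span equals its parent's span. */
		while (parent < nr - 1 && !strcmp(span[sd], span[parent])) {
			sd = parent;
			parent = sd + 1;
		}
		/* Prints "NUMA<=12 (span=0-1)": the level whose sgc is
		 * safe to reference, matching the explanation above. */
		printf("%s (span=%s)\n", name[parent], span[parent]);
		return 0;
	}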


RE: [RFC PATCH 1/2] sched/topology: Get rid of NUMA overlapping groups

2021-02-08 Thread Song Bao Hua (Barry Song)



> -Original Message-
> From: Valentin Schneider [mailto:valentin.schnei...@arm.com]
> Sent: Thursday, February 4, 2021 4:55 AM
> To: linux-kernel@vger.kernel.org
> Subject: [RFC PATCH 1/2] sched/topology: Get rid of NUMA overlapping groups
> 
> As pointed out in commit
> 
>   b5b217346de8 ("sched/topology: Warn when NUMA diameter > 2")
> 
> overlapping groups result in broken topology data structures whenever the
> underlying system has a NUMA diameter greater than 2. This stems from
> overlapping groups being built from sibling domain's spans, yielding bogus
> transitivity relations the like of:
> 
>   distance(A, B) <= 30 && distance(B, C) <= 20
> =>
>   distance(A, C) <= 30
> 
> As discussed with Barry, a feasible approach is to catch bogus overlapping
> groups and fix them after the fact [1].
> 
> A more proactive approach would be to prevent the aforementioned bogus
> relations from being built altogether, which implies departing from the
> "group span is sibling domain child's span" strategy. Said strategy only
> works for diameter <= 2, which fortunately or unfortunately is currently
> the most common case.
> 
> The chosen approach is, for NUMA domains:
> a) have the local group be the child domain's span, as before
> b) have all remote groups span only their respective node
> 
> This boils down to getting rid of overlapping groups.
> 
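
A hedged sketch of a) and b) in illustrative pseudo-C (the allocation step
is elided and this is not claimed to match the patch's actual code; only
standard kernel cpumask/nodemask helpers are used):

	const struct cpumask *sg_span;
	int n;

	/* One group per node in sd's span: the local node's group is the
	 * child domain's span (a); every remote node's group is just that
	 * node (b). "c" is the CPU the domain is built for. */
	for_each_node(n) {
		if (!cpumask_intersects(cpumask_of_node(n), sched_domain_span(sd)))
			continue;

		sg_span = (n == cpu_to_node(c)) ?
			sched_domain_span(sd->child) :	/* a) local group */
			cpumask_of_node(n);		/* b) single-node remote group */

		/* ... build a sched_group spanning sg_span ... */
	}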

Hi Valentin,

While I like your approach, it will require more time
to evaluate its possible influence, as it also affects
all machines without the 3-hops issue. So x86 platforms need to
be tested and benchmarks are required.

What about we first finish the review of the "grandchild" approach
in v2, and have a solution for kunpeng920 and Sun Fire X4600-M2
that doesn't impact other machines without the 3-hops issue?

I would very much appreciate it if you could comment on v2:
https://lore.kernel.org/lkml/20210203111201.20720-1-song.bao.hua@hisilicon.com/


> Note that b) requires introducing cross sched_domain_topology_level
> references for sched_group_capacity. This is a somewhat prickly matter as
> we need to ensure whichever group we hook into won't see its domain
> degenerated (which was never an issue when such references were bounded
> within a single topology level).
> 
> This lifts the NUMA diameter restriction, although it yields more groups in
> the NUMA domains. As an example, here is the distance matrix for
> an AMD Epyc:
> 
>   node   0   1   2   3   4   5   6   7
> 0:  10  16  16  16  32  32  32  32
> 1:  16  10  16  16  32  32  32  32
> 2:  16  16  10  16  32  32  32  32
> 3:  16  16  16  10  32  32  32  32
> 4:  32  32  32  32  10  16  16  16
> 5:  32  32  32  32  16  10  16  16
> 6:  32  32  32  32  16  16  10  16
> 7:  32  32  32  32  16  16  16  10
> 
> Emulating this on QEMU yields, before the patch:
>   [0.386745] CPU0 attaching sched-domain(s):
>   [0.386969]  domain-0: span=0-3 level=NUMA
>   [0.387708]   groups: 0:{ span=0 cap=1008 }, 1:{ span=1 cap=1007 }, 2:{ span=2 cap=1007 }, 3:{ span=3 cap=998 }
>   [0.388505]   domain-1: span=0-7 level=NUMA
>   [0.388700]    groups: 0:{ span=0-3 cap=4020 }, 4:{ span=4-7 cap=4014 }
>   [0.389861] CPU1 attaching sched-domain(s):
>   [0.390020]  domain-0: span=0-3 level=NUMA
>   [0.390200]   groups: 1:{ span=1 cap=1007 }, 2:{ span=2 cap=1007 }, 3:{ span=3 cap=998 }, 0:{ span=0 cap=1008 }
>   [0.390701]   domain-1: span=0-7 level=NUMA
>   [0.390874]    groups: 0:{ span=0-3 cap=4020 }, 4:{ span=4-7 cap=4014 }
>   [0.391460] CPU2 attaching sched-domain(s):
>   [0.391664]  domain-0: span=0-3 level=NUMA
>   [0.392750]   groups: 2:{ span=2 cap=1007 }, 3:{ span=3 cap=998 }, 0:{ span=0 cap=1008 }, 1:{ span=1 cap=1007 }
>   [0.393672]   domain-1: span=0-7 level=NUMA
>   [0.393961]    groups: 0:{ span=0-3 cap=4020 }, 4:{ span=4-7 cap=4014 }
>   [0.394645] CPU3 attaching sched-domain(s):
>   [0.394792]  domain-0: span=0-3 level=NUMA
>   [0.394961]   groups: 3:{ span=3 cap=998 }, 0:{ span=0 cap=1008 }, 1:{ span=1 cap=1007 }, 2:{ span=2 cap=1007 }
>   [0.395749]   domain-1: span=0-7 level=NUMA
>   [0.396098]    groups: 0:{ span=0-3 cap=4020 }, 4:{ span=4-7 cap=4014 }
>   [0.396455] CPU4 attaching sched-domain(s):
>   [0.396603]  domain-0: span=4-7 level=NUMA
>   [0.396771]   groups: 4:{ span=4 cap=1001 }, 5:{ span=5 cap=1004 }, 6:{ span=6 cap=1003 }, 7:{ span=7 cap=1006 }
>   [0.397274]   domain-1: span=0-7 level=NUMA
>   [0.397454]    groups: 4:{ span=4-7 cap=4014 }, 0:{ span=0-3 cap=4020 }
>   [0.397801] CPU5 attaching