Re: [PATCH 2/2] s390/topology: add drawer scheduling domain level

2016-06-13 Thread Martin Schwidefsky
On Mon, 13 Jun 2016 15:53:02 +0200
Peter Zijlstra  wrote:

> On Mon, Jun 13, 2016 at 03:19:42PM +0200, Martin Schwidefsky wrote:
> > On Mon, 13 Jun 2016 15:06:47 +0200
> > Peter Zijlstra  wrote:
> > 
> > > On Mon, Jun 13, 2016 at 01:22:30PM +0200, Heiko Carstens wrote:
> > > > Yes, and actually we are all virt/LPAR always, so this is unfortunately not
> > > > very easy to do. And yes, I do agree that for the 1:1 case it most likely
> > > > would make sense, however we don't have any run-time guarantee to stay 1:1.
> > > 
> > > One option would be to make it a boot option; such that the
> > > administrator has to set it. At that point, if the admin creates
> > multiple LPARs it's on him.
> > 
> > Unfortunately not good enough. The LPAR code tries to optimize the layout
> > at the time a partition is activated. The landscape of already running
> > partitions can change at this point.
> 
> Would not the admin _know_ this? It would be him activating partitions
> after all, no?

This is all fine and good in a static environment where you can afford to
stop all partitions to do a reconfiguration. There you could get away with
a kernel option that enables "real" NUMA.

But as a general solution this fails. Consider this scenario: you have several
partitions already running with a workload that you do *not* want to interrupt
right now, think stock exchange. And now another partition urgently needs more
memory. To do this you have to shut it down, deactivate it, update the profile
with more memory, re-activate it and restart the OS. End result: memory
landscape could have changed.

> > To get around this you would have to activate *all* partitions first and
> > then start the operating systems in a second step.
> 
> Arguably, you only care about the single partition covering the entire
> machine case, so I don't see that being a problem.
> 
> Again, admin _knows_ this.

The single partition case is boring; several large partitions, each too big
for a single node, are the hard part.

> > And then there is concurrent repair which will move things around if a
> > piece of memory goes bad. This happens rarely though.
> 
> That would be magic disturbance indeed, nothing much to do about that.

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.



Re: [PATCH 2/2] s390/topology: add drawer scheduling domain level

2016-06-13 Thread Peter Zijlstra
On Mon, Jun 13, 2016 at 03:19:42PM +0200, Martin Schwidefsky wrote:
> On Mon, 13 Jun 2016 15:06:47 +0200
> Peter Zijlstra  wrote:
> 
> > On Mon, Jun 13, 2016 at 01:22:30PM +0200, Heiko Carstens wrote:
> > > > Yes, and actually we are all virt/LPAR always, so this is unfortunately not
> > > > very easy to do. And yes, I do agree that for the 1:1 case it most likely
> > > > would make sense, however we don't have any run-time guarantee to stay 1:1.
> > 
> > One option would be to make it a boot option; such that the
> > administrator has to set it. At that point, if the admin creates
> > multiple LPARs it's on him.
> 
> Unfortunately not good enough. The LPAR code tries to optimize the layout
> at the time a partition is activated. The landscape of already running
> partitions can change at this point.

Would not the admin _know_ this? It would be him activating partitions
after all, no?

> To get around this you would have to activate *all* partitions first and
> then start the operating systems in a second step.

Arguably, you only care about the single partition covering the entire
machine case, so I don't see that being a problem.

Again, admin _knows_ this.

> And then there is concurrent repair which will move things around if a
> piece of memory goes bad. This happens rarely though.

That would be magic disturbance indeed, nothing much to do about that.



Re: [PATCH 2/2] s390/topology: add drawer scheduling domain level

2016-06-13 Thread Martin Schwidefsky
On Mon, 13 Jun 2016 15:06:47 +0200
Peter Zijlstra  wrote:

> On Mon, Jun 13, 2016 at 01:22:30PM +0200, Heiko Carstens wrote:
> > Yes, and actually we are all virt/LPAR always, so this is unfortunately not
> > very easy to do. And yes, I do agree that for the 1:1 case it most likely
> > would make sense, however we don't have any run-time guarantee to stay 1:1.
> 
> One option would be to make it a boot option; such that the
> administrator has to set it. At that point, if the admin creates
> multiple LPARs it's on him.

Unfortunately not good enough. The LPAR code tries to optimize the layout
at the time a partition is activated. The landscape of already running
partitions can change at this point.

To get around this you would have to activate *all* partitions first and
then start the operating systems in a second step.

And then there is concurrent repair which will move things around if a
piece of memory goes bad. This happens rarely though.

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.



Re: [PATCH 2/2] s390/topology: add drawer scheduling domain level

2016-06-13 Thread Peter Zijlstra
On Mon, Jun 13, 2016 at 01:22:30PM +0200, Heiko Carstens wrote:
> Yes, and actually we are all virt/LPAR always, so this is unfortunately not
> very easy to do. And yes, I do agree that for the 1:1 case it most likely
> would make sense, however we don't have any run-time guarantee to stay 1:1.

One option would be to make it a boot option; such that the
administrator has to set it. At that point, if the admin creates
multiple LPARs it's on him.
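
(Illustration only, not part of the thread: a minimal sketch of what such an
opt-in boot option could look like on the kernel side, built on the generic
early_param() hook. The parameter name, the flag it sets, and the setup
function are made up for this example and are not from any posted patch.)

#include <linux/init.h>
#include <linux/kernel.h>

/*
 * Hypothetical flag: the administrator asserts that this LPAR runs 1:1 on
 * the machine, so the kernel may expose the physical NUMA layout.
 */
static bool topology_real_numa __initdata;

static int __init topology_real_numa_setup(char *str)
{
	topology_real_numa = true;	/* trust the admin; no runtime guarantee */
	pr_info("topology: administrator requested physical NUMA exposure\n");
	return 0;
}
early_param("topology_real_numa", topology_real_numa_setup);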




Re: [PATCH 2/2] s390/topology: add drawer scheduling domain level

2016-06-13 Thread Heiko Carstens
On Mon, Jun 13, 2016 at 01:06:21PM +0200, Peter Zijlstra wrote:
> On Wed, Jun 08, 2016 at 11:09:16AM +0200, Heiko Carstens wrote:
> > The z13 machine added a fourth level to the cpu topology
> > information. The new top level is called drawer.
> > 
> > A drawer contains two books, which used to be the top level.
> > 
> > Adding this additional scheduling domain did show performance
> > improvements for some workloads of up to 8%, while there don't
> > seem to be any workloads impacted in a negative way.
> 
> Right; so no objection.
> 
> Acked-by: Peter Zijlstra (Intel) 

Thanks!

> You still don't want to make NUMA explicit on this thing? So while I
> suppose the SC 480M L4 cache does hide some of it, there can be up to 8
> nodes on this thing. Which seems to me there's win to be had by exposing
> it.
> 
> Of course, the moment you go all virt/LPAR on it, that all gets really
> interesting, but for those cases where you run 1:1 it might make sense.

Yes, and actually we are all virt/LPAR always, so this is unfortunately not
very easy to do. And yes, I do agree that for the 1:1 case it most likely
would make sense, however we don't have any run-time guarantee to stay 1:1.

> Also, are you sure you don't want some of the behaviour changed for the
> drawer domains? I could for example imagine you wouldn't want
> SD_WAKE_AFFINE set (we disable that for NUMA domains as well).

That's something we need to look into further as well. Thanks for pointing
this out!



Re: [PATCH 2/2] s390/topology: add drawer scheduling domain level

2016-06-13 Thread Peter Zijlstra
On Mon, Jun 13, 2016 at 01:25:53PM +0200, Heiko Carstens wrote:
> On Mon, Jun 13, 2016 at 01:06:21PM +0200, Peter Zijlstra wrote:
> > On Wed, Jun 08, 2016 at 11:09:16AM +0200, Heiko Carstens wrote:
> > > The z13 machine added a fourth level to the cpu topology
> > > information. The new top level is called drawer.
> > > 
> > > A drawer contains two books, which used to be the top level.
> > > 
> > > Adding this additional scheduling domain did show performance
> > > improvements for some workloads of up to 8%, while there don't
> > > seem to be any workloads impacted in a negative way.
> > 
> > Right; so no objection.
> > 
> > Acked-by: Peter Zijlstra (Intel) 
> 
> May I add your ACK also to the sysfs patch?

Not really my area, nor something I've ever looked hard at, but the
patch seems to have the right shape, so sure ;-)


Re: [PATCH 2/2] s390/topology: add drawer scheduling domain level

2016-06-13 Thread Heiko Carstens
On Mon, Jun 13, 2016 at 01:06:21PM +0200, Peter Zijlstra wrote:
> On Wed, Jun 08, 2016 at 11:09:16AM +0200, Heiko Carstens wrote:
> > The z13 machine added a fourth level to the cpu topology
> > information. The new top level is called drawer.
> > 
> > A drawer contains two books, which used to be the top level.
> > 
> > Adding this additional scheduling domain did show performance
> > improvements for some workloads of up to 8%, while there don't
> > seem to be any workloads impacted in a negative way.
> 
> Right; so no objection.
> 
> Acked-by: Peter Zijlstra (Intel) 

May I add your ACK also to the sysfs patch?



Re: [PATCH 2/2] s390/topology: add drawer scheduling domain level

2016-06-13 Thread Peter Zijlstra
On Wed, Jun 08, 2016 at 11:09:16AM +0200, Heiko Carstens wrote:
> The z13 machine added a fourth level to the cpu topology
> information. The new top level is called drawer.
> 
> A drawer contains two books, which used to be the top level.
> 
> Adding this additional scheduling domain did show performance
> improvements for some workloads of up to 8%, while there don't
> seem to be any workloads impacted in a negative way.

Right; so no objection.

Acked-by: Peter Zijlstra (Intel) 

You still don't want to make NUMA explicit on this thing? So while I
suppose the SC 480M L4 cache does hide some of it, there can be up to 8
nodes on this thing. Which seems to me there's win to be had by exposing
it.

Of course, the moment you go all virt/LPAR on it, that all gets really
interesting, but for those cases where you run 1:1 it might make sense.

Also, are you sure you don't want some of the behaviour changed for the
drawer domains? I could for example imagine you wouldn't want
SD_WAKE_AFFINE set (we disable that for NUMA domains as well).
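
(Sketch for context, not from the posted patches: on s390 the per-level
scheduler behaviour is declared through a sched_domain_topology_level table
in arch/s390/kernel/topology.c, so adding DRAWER amounts to one more entry
there. The table below is an assumption modelled on the existing SMT/MC/BOOK
levels; the optional per-level flags callback only carries topology flags, so
dropping SD_WAKE_AFFINE above the drawer level would need a tweak in the
generic sd_init() path, similar to what is already done for remote NUMA
levels.)

/* Assumed shape of the s390 topology table after the drawer patch. */
static struct sched_domain_topology_level s390_topology[] = {
	{ cpu_thread_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
	{ cpu_book_mask, SD_INIT_NAME(BOOK) },
	{ cpu_drawer_mask, SD_INIT_NAME(DRAWER) },	/* the new drawer level */
	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
	{ NULL, },
};

The table is handed to the scheduler with set_sched_topology(s390_topology)
during arch setup; cpu_drawer_mask() would simply return the new per-cpu
drawer_mask (see the patch further down).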



[PATCH 2/2] s390/topology: add drawer scheduling domain level

2016-06-08 Thread Heiko Carstens
The z13 machine added a fourth level to the cpu topology
information. The new top level is called drawer.

A drawer contains two books, which used to be the top level.

Adding this additional scheduling domain did show performance
improvements for some workloads of up to 8%, while there don't
seem to be any workloads impacted in a negative way.

Signed-off-by: Heiko Carstens 
---
 arch/s390/Kconfig                |  4
 arch/s390/include/asm/topology.h |  4
 arch/s390/kernel/topology.c      | 33
 arch/s390/numa/mode_emu.c        | 25
 4 files changed, 55 insertions(+), 11 deletions(-)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index a8c259059adf..9d35d6d084da 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -477,6 +477,9 @@ config SCHED_MC
 config SCHED_BOOK
def_bool n
 
+config SCHED_DRAWER
+   def_bool n
+
 config SCHED_TOPOLOGY
def_bool y
prompt "Topology scheduler support"
@@ -484,6 +487,7 @@ config SCHED_TOPOLOGY
select SCHED_SMT
select SCHED_MC
select SCHED_BOOK
+   select SCHED_DRAWER
help
  Topology scheduler support improves the CPU scheduler's decision
  making when dealing with machines that have multi-threading,
diff --git a/arch/s390/include/asm/topology.h b/arch/s390/include/asm/topology.h
index 6b53962e807e..f15f5571ca2b 100644
--- a/arch/s390/include/asm/topology.h
+++ b/arch/s390/include/asm/topology.h
@@ -14,10 +14,12 @@ struct cpu_topology_s390 {
unsigned short core_id;
unsigned short socket_id;
unsigned short book_id;
+   unsigned short drawer_id;
unsigned short node_id;
cpumask_t thread_mask;
cpumask_t core_mask;
cpumask_t book_mask;
+   cpumask_t drawer_mask;
 };
 
 DECLARE_PER_CPU(struct cpu_topology_s390, cpu_topology);
@@ -30,6 +32,8 @@ DECLARE_PER_CPU(struct cpu_topology_s390, cpu_topology);
 #define topology_core_cpumask(cpu)	(&per_cpu(cpu_topology, cpu).core_mask)
 #define topology_book_id(cpu)		(per_cpu(cpu_topology, cpu).book_id)
 #define topology_book_cpumask(cpu)	(&per_cpu(cpu_topology, cpu).book_mask)
+#define topology_drawer_id(cpu)	(per_cpu(cpu_topology, cpu).drawer_id)
+#define topology_drawer_cpumask(cpu)	(&per_cpu(cpu_topology, cpu).drawer_mask)
 
 #define mc_capable() 1
 
diff --git a/arch/s390/kernel/topology.c b/arch/s390/kernel/topology.c
index 64298a867589..44745e751c3a 100644
--- a/arch/s390/kernel/topology.c
+++ b/arch/s390/kernel/topology.c
@@ -46,6 +46,7 @@ static DECLARE_WORK(topology_work, topology_work_fn);
  */
 static struct mask_info socket_info;
 static struct mask_info book_info;
+static struct mask_info drawer_info;
 
 DEFINE_PER_CPU(struct cpu_topology_s390, cpu_topology);
 EXPORT_PER_CPU_SYMBOL_GPL(cpu_topology);
@@ -80,6 +81,7 @@ static cpumask_t cpu_thread_map(unsigned int cpu)
 }
 
 static struct mask_info *add_cpus_to_mask(struct topology_core *tl_core,
+ struct mask_info *drawer,
  struct mask_info *book,
  struct mask_info *socket,
  int one_socket_per_cpu)
@@ -97,9 +99,11 @@ static struct mask_info *add_cpus_to_mask(struct topology_core *tl_core,
 			continue;
 		for (i = 0; i <= smp_cpu_mtid; i++) {
 			topo = &per_cpu(cpu_topology, lcpu + i);
+			topo->drawer_id = drawer->id;
 			topo->book_id = book->id;
 			topo->core_id = rcore;
 			topo->thread_id = lcpu + i;
+			cpumask_set_cpu(lcpu + i, &drawer->mask);
 			cpumask_set_cpu(lcpu + i, &book->mask);
 			cpumask_set_cpu(lcpu + i, &socket->mask);
 			if (one_socket_per_cpu)
@@ -128,6 +132,11 @@ static void clear_masks(void)
 		cpumask_clear(&info->mask);
 		info = info->next;
 	}
+	info = &drawer_info;
+	while (info) {
+		cpumask_clear(&info->mask);
+		info = info->next;
+	}
 }
 
 static union topology_entry *next_tle(union topology_entry *tle)
@@ -141,12 +150,17 @@ static void __tl_to_masks_generic(struct sysinfo_15_1_x *info)
 {
 	struct mask_info *socket = &socket_info;
 	struct mask_info *book = &book_info;
+	struct mask_info *drawer = &drawer_info;
 	union topology_entry *tle, *end;
 
 	tle = info->tle;
 	end = (union topology_entry *)((unsigned long)info + info->length);
 	while (tle < end) {
 		switch (tle->nl) {
+		case 3:
+			drawer = drawer->next;
+			drawer->id = tle->container.id;
+			break;
 		case 2:
 			book = book->next;
 			book->id = tle->container.id;

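(The quoted diff breaks off here in the archive. For completeness, a hedged
sketch of the accessor a DRAWER topology-table entry would use; the function
name follows the existing cpu_book_mask() pattern and is an assumption, since
this part of the patch is not shown above.)

static const struct cpumask *cpu_drawer_mask(int cpu)
{
	/* Sketch: expose the per-cpu drawer mask added by this patch. */
	return &per_cpu(cpu_topology, cpu).drawer_mask;
}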