Re: [hwloc-users] Building hwloc for a Cray/KNL system

2017-01-27 Thread Gunter, David O
Samuel,

  That was the magic flag I needed. I had totally misunderstood what it meant.

Thanks,
david
--
David Gunter
HPC-ENV: Applications Readiness Team
Los Alamos National Laboratory




> On Jan 27, 2017, at 11:12 AM, Samuel Thibault wrote:
> 
> Hello,
> 
> Gunter, David O, on Fri 27 Jan 2017 18:05:44, wrote:
>> $ aprun -n 1 -L 193 ~hwloc-tt/bin/lstopo-no-graphics
> 
> Does aprun give you allocation of all cores?  By default lstopo only
> shows the allocated cores.  To see all of them, use the --whole-system
> option.
> 
> Samuel


Re: [hwloc-users] Building hwloc for a Cray/KNL system

2017-01-27 Thread Samuel Thibault
Hello,

Gunter, David O, on Fri 27 Jan 2017 18:05:44, wrote:
> $ aprun -n 1 -L 193 ~hwloc-tt/bin/lstopo-no-graphics

Does aprun give you allocation of all cores?  By default lstopo only
shows the allocated cores.  To see all of them, use the --whole-system
option.
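
For example, reusing the aprun invocation from your message, something along the lines of:

$ aprun -n 1 -L 193 ~hwloc-tt/bin/lstopo-no-graphics --whole-system

should then show every core and hyperthread on the node, not just the ones inside your allocation's cpuset.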

Samuel

[hwloc-users] Building hwloc for a Cray/KNL system

2017-01-27 Thread Gunter, David O
We have a Cray KNL system with hwloc 1.11.2 installed. When executing lstopo on a KNL node, I do not get any info on the cores and threads the way I do on other Intel CPUs.

I downloaded and built the latest git version, but it gives me the same output (shown below). Has anyone successfully built hwloc for this type of system?

$ ./configure --prefix=~/hwloc-tt --build=x86_64-unknown-linux-gnu
$ make
$ make install

$ aprun -n 1 -L 193 ~hwloc-tt/bin/lstopo-no-graphics
Machine (110GB total) + Package L#0
  Group0(Cluster) L#0
NUMANode L#0 (P#0 23GB) + L2 L#0 (1024KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
NUMANode(MCDRAM) L#1 (P#4 4040MB)
  Group0(Cluster) L#1
NUMANode L#2 (P#1 24GB)
NUMANode(MCDRAM) L#3 (P#5 4040MB)
  Group0(Cluster) L#2
NUMANode L#4 (P#2 24GB)
NUMANode(MCDRAM) L#5 (P#6 4040MB)
  Group0(Cluster) L#3
NUMANode L#6 (P#3 24GB)
NUMANode(MCDRAM) L#7 (P#7 4037MB)
Application 2727480 resources: utime ~0s, stime ~0s, Rss ~4548, inblocks ~0, outblocks ~0
(dog@tt-login1 30%) aprun -n 1 -L 193 /usr/bin/lstopo-no-graphics
Machine (110GB total) + Package L#0
  Group0(Cluster) L#0
NUMANode L#0 (P#0 23GB) + L2 L#0 (1024KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
NUMANode(MCDRAM) L#1 (P#4 4040MB)
  Group0(Cluster) L#1
NUMANode L#2 (P#1 24GB)
NUMANode(MCDRAM) L#3 (P#5 4040MB)
  Group0(Cluster) L#2
NUMANode L#4 (P#2 24GB)
NUMANode(MCDRAM) L#5 (P#6 4040MB)
  Group0(Cluster) L#3
NUMANode L#6 (P#3 24GB)
NUMANode(MCDRAM) L#7 (P#7 4037MB)
--
David Gunter
HPC-ENV: Applications Readiness Team
Los Alamos National Laboratory



_______________________________________________
hwloc-users mailing list
hwloc-users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/hwloc-users