For me, what slurmd -C reports doesn't match what's in slurm.conf:

[root@cn45 ~]# slurmd -C
slurmd: Considering each NUMA node as a socket
ClusterName=(null) NodeName=cn45 CPUs=64 Boards=1 SocketsPerBoard=4 CoresPerSocket=8 ThreadsPerCore=2 RealMemory=128832 TmpDisk=32768
UpTime=7-01:47:00
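
(The 4-socket count itself makes sense: with Cluster-on-Die, each physical Haswell socket is exposed as two NUMA nodes, and slurmd counts each NUMA node as a socket, per its log line above. Assuming lscpu is available on the node, a quick sanity check:

[root@cn45 ~]# lscpu | grep -iE 'socket|numa node'

On a CoD-enabled box that should report 2 physical sockets but 4 NUMA nodes.)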

[root@cn45 ~]# grep 'NodeName="cn\[45-49\]"' /etc/slurm/slurm.conf
NodeName="cn[45-49]" NodeHostname="cn[45-49]" Sockets="2"
CoresPerSocket="16" ThreadsPerCore="2" RealMemory="128832" Weight="1000"
Feature="intel,haswell,supemicro,nvme"
[root@cn45 ~]#
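
(If we keep CoD enabled, the obvious reconciliation, untested on my end, would be to make the node definition match what slurmd -C detects, something like:

NodeName=cn[45-49] NodeHostname=cn[45-49] Sockets=4 CoresPerSocket=8 ThreadsPerCore=2 RealMemory=128832 Weight=1000 Feature=intel,haswell,supemicro,nvme

so the controller and slurmd agree on the layout.)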

In any case, I found this thread that seems to answer my question:
https://bugs.schedmd.com/show_bug.cgi?id=838

Ryan

On Thu, May 19, 2016 at 2:36 PM Ryan Novosielski <[email protected]>
wrote:

> > On May 19, 2016, at 4:29 PM, Ryan Braithwaite <[email protected]> wrote:
> >
> > Hi,
> >
> > I just upgraded our cluster to 15.08.11 and started using node features
> to constrain jobs. We discovered that our Intel Haswell-based systems that
> have Cluster-on-Die enabled are showing up with the wrong number of sockets:
> > …
> >
> > I'm not sure if this is a known bug or not, so I thought I'd check here
> first.
>
> Bear in mind that unless I’m misunderstanding you, that stuff from
> scontrol show node is set in slurm.conf. If you want to check what SLURM
> thinks you ought to have in slurm.conf, you can do “slurmd -C” from a node.
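> Something like the following compares the two on a node (illustrative
> commands only; the field names are the ones scontrol prints):
>
> # What the controller believes, i.e. what slurm.conf says:
> scontrol show node cn45 | grep -E 'Sockets|CoresPerSocket|ThreadsPerCore'
> # What slurmd actually detects on the hardware:
> slurmd -C
>
> If the two disagree, slurm.conf is the one to change.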
>
> --
> ____
> || \\UTGERS,     |---------------------------*O*---------------------------
> ||_// the State  |         Ryan Novosielski - [email protected]
> || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus
> ||  \\    of NJ  | Office of Advanced Research Computing - MSB C630, Newark
>      `'
>
>
