Also, I recommend setting:

*CoreSpecCount*
   Number of cores reserved for system use. These cores will not be
   available for allocation to user jobs. Depending upon the
   *TaskPluginParam* option of *SlurmdOffSpec*, Slurm daemons (i.e.
   slurmd and slurmstepd) may either be confined to these resources
   (the default) or prevented from using these resources. Isolation of
   the Slurm daemons from user jobs may improve application
   performance. If this option and *CpuSpecList* are both designated
   for a node, an error is generated. For information on the algorithm
   used by Slurm to select the cores refer to the core specialization
   documentation (https://slurm.schedmd.com/core_spec.html).
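
For example, on the node line in slurm.conf (just a sketch; the node name and hardware counts here are invented, adjust them to your nodes):

   NodeName=bigmem01 Sockets=2 CoresPerSocket=64 ThreadsPerCore=2 RealMemory=512000 CoreSpecCount=4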
and

*MemSpecLimit*
   Amount of memory, in megabytes, reserved for system use and not
   available for user allocations. If the task/cgroup plugin is
   configured and that plugin constrains memory allocations (i.e.
   *TaskPlugin=task/cgroup* in slurm.conf, plus *ConstrainRAMSpace=yes*
   in cgroup.conf), then Slurm compute node daemons (slurmd plus
   slurmstepd) will be allocated the specified memory limit. Note that
   for this option to work, *SelectTypeParameters* must be set to one
   of the options that treats memory as a consumable resource. The
   daemons will not be killed if they exhaust the memory allocation
   (i.e. the Out-Of-Memory Killer is disabled for the daemon's memory
   cgroup). If the task/cgroup plugin is not configured, the specified
   memory will only be unavailable for user allocations.

These settings reserve specific cores and memory for system use. This is probably the best way to go rather than spoofing your config.
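
Putting the two together, the relevant pieces would look something like this (a sketch with an invented node name and sizes; it also assumes the cons_tres select plugin):

   # slurm.conf
   SelectType=select/cons_tres
   SelectTypeParameters=CR_Core_Memory    # memory must be a consumable resource
   TaskPlugin=task/cgroup
   NodeName=bigmem01 Sockets=2 CoresPerSocket=64 ThreadsPerCore=2 RealMemory=512000 CoreSpecCount=4 MemSpecLimit=16384

   # cgroup.conf
   ConstrainRAMSpace=yes

With that in place, jobs on bigmem01 could be allocated at most 124 of the 128 cores and 512000 - 16384 = 495616 MB of memory.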

-Paul Edmon-


On 1/7/2022 2:36 AM, Rémi Palancher wrote:
On Thursday, January 6, 2022 at 22:39, David Henkemeyer <david.henkeme...@gmail.com> 
wrote:

All,

When my team used PBS, we had several nodes that had a TON of CPUs, so many, in 
fact, that we ended up setting np to a smaller value, in order to not starve 
the system of memory.

What is the best way to do this with Slurm? I tried modifying # of CPUs in the slurm.conf file, but 
I noticed that Slurm enforces that "CPUs" is equal to Boards * SocketsPerBoard * 
CoresPerSocket * ThreadsPerCore. This left me with having to "fool" Slurm into thinking 
there were either fewer ThreadsPerCore, fewer CoresPerSocket, or fewer SocketsPerBoard. This is a 
less than ideal solution, it seems to me. At least, it left me feeling like there has to be a 
better way.
I'm not sure you can lie to Slurm about the real number of CPUs on the nodes.

If you want to prevent Slurm from allocating more than n CPUs on these nodes, 
where n is below their total number of CPUs, I guess one solution is to use 
MaxCPUsPerNode=n at the partition level.
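
For example (the partition and node names are made up):

   PartitionName=big Nodes=bignode[01-04] MaxCPUsPerNode=64 State=UP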

You can also mask "system" CPUs with CpuSpecList at the node level.

The latter is better if you need fine-grained control over the exact list of 
reserved CPUs with regard to NUMA topology or whatever.
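
For example, to reserve the first four Slurm abstract CPU IDs on a node (the node name, hardware counts and IDs here are only illustrative; pick IDs to match your own NUMA layout):

   NodeName=bignode01 Sockets=2 CoresPerSocket=32 ThreadsPerCore=2 RealMemory=256000 CpuSpecList=0,1,2,3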

--
Rémi Palancher
Rackslab: Open Source Solutions for HPC Operations
https://rackslab.io

