Hi!

If a user asks for more than the available memory per core (in our case 2500M/core) with -N1 --mem-per-cpu and also adds --exclusive, Slurm will allocate all the cores in the node but only account for the number of cores that satisfy the --mem-per-cpu requirement.

For example, if I ask for --mem-per-cpu=5000, only half of the available cores will be accounted for, but all of them will be blocked.
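
A minimal job script that should reproduce this (node parameters assumed to match ours: 48 cores, 128000M, default 2500M/core):

-----8<-----
#!/bin/bash
#SBATCH --mem-per-cpu=5000
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --exclusive

# With 5000M/CPU requested on a 2500M/core node, scontrol show job <jobid>
# should report NumCPUs at about half the core count while CPU_IDs lists
# all cores of the node.
srun hostname
-----8<-----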

This is the (relevant) output of scontrol show job for a real job on our system:

-----8<-----
JobId=69211 Name=memory
   NumNodes=1 NumCPUs=30 CPUs/Task=1 ReqS:C:T=*:*:*
     Nodes=t-cn0102 CPU_IDs=0-47 Mem=120000
   MinCPUsNode=1 MinMemoryCPU=4000M MinTmpDiskNode=0
   Shared=0 Contiguous=0 Licenses=(null) Network=(null)

#!/bin/bash
#SBATCH --mem-per-cpu=4000
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --exclusive
-----8<-----

Total memory/node=128000M, 48 cores, default 2500M/core
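
Doing the arithmetic: the node reports Mem=120000, so with MinMemoryCPU=4000M Slurm accounts for 120000/4000 = 30 CPUs (the NumCPUs=30 above), even though --exclusive blocks all 48 cores (CPU_IDs=0-47).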

As I understand it, this will give the wrong input to the fairshare scheduler and result in the wrong priority (too high) for the user.
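
If it helps to confirm, the same undercount should show up in the accounting records, e.g. something like:

-----8<-----
# If the above is right, AllocCPUS should come back as 30 here,
# not the 48 cores that were actually blocked by --exclusive.
sacct -j 69211 --format=JobID,AllocCPUS,ReqMem,NNodes
-----8<-----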

Best regards,
Magnus

--
Magnus Jonsson, Developer, HPC2N, UmeƄ Universitet
