Hi Chris,

One way is to use "scontrol":

scontrol --details show job <jobID>

You should then see something like:
...snip
 Nodes=clu[2-3] CPU_IDs=0-19 Mem=1024
 Nodes=clu4 CPU_IDs=0-3,10-13 Mem=1024
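
If you just want the per-node allocation lines, you can filter that
down, e.g.:

scontrol --details show job <jobID> | grep CPU_IDs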

Note this caveat from the documentation:
Note that the CPU ids reported by this command are Slurm abstract CPU ids, not Linux/hardware CPU ids (as reported by, for example, /proc/cpuinfo).

http://slurm.schedmd.com/cpu_management.html#Section2

So if you want the actual Linux core ids, I guess you will have to query the cgroup hierarchy.
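
For example, assuming cgroup v1 with Slurm's task/cgroup plugin (the
exact path depends on your cgroup mount and Slurm version), something
like this, run on a node allocated to the job:

# path assumes the default cgroup v1 cpuset hierarchy; adjust for your site
cat /sys/fs/cgroup/cpuset/slurm/uid_$(id -u)/job_<jobID>/cpuset.cpus

To gather that from every node in the allocation in one go, something
like this should work (untested sketch):

srun --jobid=<jobID> sh -c \
  'echo "$(hostname): $(cat /sys/fs/cgroup/cpuset/slurm/uid_$(id -u)/job_${SLURM_JOB_ID}/cpuset.cpus)"'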

hth
-k
--
Kaizaad Bilimorya
Systems Administrator - SHARCNET | http://www.sharcnet.ca
Compute Canada | http://www.computecanada.ca
ph: (519) 824-4120 x52700


On Tue, 25 Oct 2016, Christopher Samuel wrote:


Hi all,

I can't help but think I'm missing something blindingly obvious, but
does anyone know how to find out how Slurm has distributed a job in
terms of cores per node?

In other words, if I submit:

sbatch --ntasks=64 --wrap "sleep 60"

on a system with (say) 16-core nodes, where the nodes are already
running disparate numbers of jobs using varying numbers of cores, how
do I see which cores on which nodes Slurm has allocated to my running
job?

I know I can go and poke around with cgroups, but is there a way to get
that out of squeue, sstat or sacct?

All the best,
Chris
--
Christopher Samuel        Senior Systems Administrator
VLSCI - Victorian Life Sciences Computation Initiative
Email: sam...@unimelb.edu.au Phone: +61 (0)3 903 55545
http://www.vlsci.org.au/      http://twitter.com/vlsci
