Hi All,

I'd like to be able to see, for a given jobid, how many resources the job is
currently using on each node it is running on. Is there a way to do this?

So far it looks like I have to script it: get the list of the nodes involved
(using, for example, squeue or qstat), ssh to each node, and find all of the
user's processes there. That is not guaranteed to pick up only the processes
of the job I am interested in, though: is there a way to find the UNIX PIDs
corresponding to a Slurm jobid? A rough sketch of what I have in mind is
below.
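
This assumes passwordless ssh to the compute nodes and that scontrol
listpids works with our proctrack plugin; both are assumptions on my part,
and the listpids call is the part I am least sure about.

import subprocess
import sys

def run(cmd):
    # Run a command, raise on failure, return its stdout as text
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout

def job_nodes(jobid):
    # squeue's %N format prints the job's compressed nodelist,
    # e.g. node[01-04]
    nodelist = run(["squeue", "-h", "-j", jobid, "-o", "%N"]).strip()
    # scontrol expands the hostlist expression, one hostname per line
    return run(["scontrol", "show", "hostnames", nodelist]).split()

def node_usage(node, jobid):
    # 'scontrol listpids <jobid>' has to run on the node itself;
    # drop its header line, join the PIDs with commas for ps, and
    # exit 0 even if the job has no processes on this node
    remote = (
        "pids=$(scontrol listpids {0} | awk 'NR>1 {{print $1}}'"
        " | paste -sd, -); "
        '[ -n "$pids" ] && ps -o pid,pcpu,pmem,rss,comm -p "$pids"; true'
    ).format(jobid)
    return run(["ssh", node, remote])

if __name__ == "__main__":
    jobid = sys.argv[1]
    for node in job_nodes(jobid):
        print("=== {0} ===".format(node))
        print(node_usage(node, jobid), end="")

(The ssh loop could presumably be replaced with pdsh or clush on clusters
that have them.)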

Another question: is there a Python API to Slurm? I found pyslurm, but so
far it will not build against my version of Slurm.
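
If it did build, what I was hoping to write is roughly this (based on my
reading of the pyslurm examples; since I cannot run it yet, the exact class
and key names are my assumption):

import pyslurm

# Per the pyslurm examples, job().get() returns a dict of job
# descriptions keyed by jobid; the "job_state" and "nodes" keys
# are my assumption from those examples
for jobid, info in pyslurm.job().get().items():
    print(jobid, info.get("job_state"), info.get("nodes"))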

Thank you,
Igor
