how to monitor CPU/RAM usage on each node of a slurm job? python API?

   You should use HDF5
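   "Use HDF5" here presumably refers to Slurm's HDF5 job-profiling plugin
   (acct_gather_profile/hdf5), which samples per-task CPU and memory on each
   node and writes the samples to HDF5 files. A sketch of the relevant
   slurm.conf settings (verify names and values against the documentation for
   your Slurm version):

   ```
   # slurm.conf -- enable per-task profiling into HDF5 files
   AcctGatherProfileType=acct_gather_profile/hdf5
   JobAcctGatherType=jobacct_gather/linux
   JobAcctGatherFrequency=30
   ```

   With that in place, jobs submitted with `srun --profile=task` produce
   per-node profile files, which can be merged and inspected after the job
   with `sh5util -j <jobid>`. The output directory is typically configured
   via ProfileHDF5Dir in acct_gather.conf.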



   On 09/19/2016 03:41 AM, Igor Yakushin wrote:

   Hi All,
   I'd like to be able to see, for a given jobid, how many resources are
   being used by the job on each node it is running on at this moment. Is
   there a way to do it?
   So far it looks like I have to script it: get the list of the involved
   nodes using, for example, squeue or qstat, then ssh to each node and find
   all of the user's processes (it is not 100% guaranteed that they belong
   to the job I am interested in: is there a way to find the UNIX pids
   corresponding to a Slurm jobid?).
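   The scripted approach above can be sketched in Python. This is only a
   sketch, assuming passwordless ssh to the compute nodes and that
   `scontrol listpids` (which maps a jobid to local pids, depending on the
   configured proctrack plugin) works on your cluster; the `sample_node`
   helper is hypothetical, not part of any Slurm API:

   ```python
   import subprocess

   def parse_ps(ps_output):
       """Aggregate %CPU and RSS (KiB) from `ps -o pid=,pcpu=,rss=` output."""
       total_cpu = 0.0
       total_rss_kib = 0
       for line in ps_output.strip().splitlines():
           _pid, pcpu, rss = line.split()
           total_cpu += float(pcpu)
           total_rss_kib += int(rss)
       return total_cpu, total_rss_kib

   def sample_node(node, jobid):
       """Take one CPU/RSS sample for `jobid` on `node` via ssh (sketch)."""
       # scontrol listpids prints a header line, then one row per pid.
       listing = subprocess.check_output(
           ["ssh", node, "scontrol", "listpids", str(jobid)], text=True)
       pids = [row.split()[0] for row in listing.strip().splitlines()[1:]]
       ps_out = subprocess.check_output(
           ["ssh", node, "ps", "-o", "pid=,pcpu=,rss=", "-p", ",".join(pids)],
           text=True)
       return parse_ps(ps_out)
   ```

   The node list itself could come from `squeue -h -j <jobid> -o %N`,
   expanded with `scontrol show hostnames`, and the sampling loop repeated
   at whatever interval is useful.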
   Another question: is there a Python API for Slurm? I found pyslurm, but
   so far it will not build against my version of Slurm.
   Thank you,
   Igor


