Dear Lachlan,
This has been done. Please see below an example for one user:
   Cluster    Account       User     Share   MaxJobs  MaxSubmit        QOS
---------- ---------- ---------- --------- --------- ---------- ----------
   cluster     rennes   aboucekk         1         2          4     normal
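
For a narrower view of that association, something like the following should work (a sketch; it assumes the limits were applied through sacctmgr, and the field list is just one possible selection):

```shell
# List the association for one user, showing only the populated columns.
# "aboucekk" is the user from the table above; adjust as needed.
sacctmgr show assoc where user=aboucekk \
    format=cluster,account,user,fairshare,maxjobs,maxsubmitjobs,qos
```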
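
On the sacct question quoted below: the default output prints every accounting field, which is why it wraps so badly. Limiting the columns with --format keeps it readable (a sketch; pick whichever fields you need):

```shell
# Readable one-line summary of a job; replace 156 with your own job id.
sacct -j 156 --format=JobID,JobName,Partition,AllocCPUS,Elapsed,State,ExitCode
```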
Cheers,
Rémi
> On 2 Jun 2016, at 07:57, remi marchal <[email protected]> wrote:
>
> Dear slurm users,
>
> I am quite new to the community, and I would like to monitor my running jobs.
>
> Searching the internet, I found this command:
> sacct -j <jobid>
>
> However, here is the result for one of my jobs (submission script below):
>
>        JobID   JobIDRaw    JobName  Partition  AllocCPUS    Elapsed      State ExitCode ReqCPUFreqMin ReqCPUFreqMax ReqCPUFreqGov  ReqMem     ReqTRES   AllocTRES
> ------------ ---------- ---------- ---------- ---------- ---------- ---------- -------- ------------- ------------- ------------- ------- ----------- -----------
>          156        156       test      debug         36   00:00:15    RUNNING      0:0       Unknown       Unknown       Unknown      0n  cpu=36,no+  cpu=36,no+
>
> Submission script:
>
> #!/bin/bash
> #
> #SBATCH --job-name=test
> #SBATCH --output=res.txt
> #
> #SBATCH --ntasks=36
>
> #SBATCH --time=5-00:00
>
> SLURM_SUB=$(pwd)
> echo $SLURM_SUB > testres
> echo $SLURM_JOBID > res2
> mkdir /tmp/job_$SLURM_JOBID
> cp *psf /tmp/job_$SLURM_JOBID
> cp *fdf /tmp/job_$SLURM_JOBID
> ls /tmp/job_$SLURM_JOBID >> res2
> cd /tmp/job_$SLURM_JOBID
>
> source /opt/intel/bin/ifortvars.sh intel64
>
> /cluster_cti/utils/openmpi/openmpi-1.10.2/bin/mpirun -n 18 \
>     /cluster_cti/bin/SIESTA/bin/siesta-so2 < tecfai.fdf > tecfai.log
> cp *.log $SLURM_SUB
>
> Can anyone help me?
>
> Regards,
>
> Rémi