Dear Lachlan,

Yes, this was done: the cluster and its users are already registered in the accounting database.

Please see below the association record for one user (empty limit columns trimmed for readability):

   Cluster    Account       User  Share  MaxJobs  MaxSubmit      QOS
---------- ---------- ---------- ------ -------- ---------- --------
   cluster     rennes   aboucekk      1        2          4   normal
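
For reference, a sketch of the sacctmgr calls that produce such an association (names and limits taken from the output above; everything else left at defaults):

sacctmgr add cluster cluster
sacctmgr add account rennes Cluster=cluster
sacctmgr add user aboucekk Account=rennes MaxJobs=2 MaxSubmit=4
sacctmgr show associations where user=aboucekk
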
Cheers,

Rémi              

> On 2 June 2016 at 08:17, Lachlan Musicman <[email protected]> wrote:
> 
> Remi,
> 
> The obvious questions are:
> 
> Have you set up the accounting? Added a cluster, added some users, etc.?
> 
> i.e., on the link below, there's a section under "Tools" and "Database
> Configuration" that might apply? (A minimal sketch of those settings
> follows the link.)
> 
> http://slurm.schedmd.com/accounting.html
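> 
> A minimal sketch of the slurm.conf settings that section describes (the
> host name is an assumption; adjust for your site):
> 
> AccountingStorageType=accounting_storage/slurmdbd  # store job records via slurmdbd
> AccountingStorageHost=localhost                    # host where slurmdbd runs
> JobAcctGatherType=jobacct_gather/linux             # collect per-task usage data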
> 
> I think that this section is ripe for a how-to as well - it's a very dense
> wall of text and could do with a quick 5-minute overview.
> 
> Cheers
> L.
> 
> 
> 
> ------
> The most dangerous phrase in the language is, "We've always done it this way."
> 
> - Grace Hopper
> 
> On 2 June 2016 at 15:56, remi marchal <[email protected]> wrote:
> Dear slurm users,
> 
> I am quite new to the community, and I would like to monitor my running jobs.
> 
> Searching the internet, I found this command:
> sacct -l -j <jobid>
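> 
> (sacct also accepts a --format option to limit the columns, e.g.
> sacct -j <jobid> --format=JobID,JobName,Partition,AllocCPUS,State,Elapsed)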
> 
> However, here is the result for one of my jobs (submission script below;
> empty columns trimmed for readability):
> 
> JobID  JobIDRaw  JobName  Partition  AllocCPUS  Elapsed   State    ExitCode  ReqCPUFreq(Min/Max/Gov)  ReqMem  ReqTRES     AllocTRES
> -----  --------  -------  ---------  ---------  --------  -------  --------  -----------------------  ------  ----------  ----------
> 156    156       test     debug      36         00:00:15  RUNNING  0:0       Unknown/Unknown/Unknown  0n      cpu=36,no+  cpu=36,no+
> 
> (all of the usage columns - MaxVMSize, MaxRSS, AveCPU, the disk read/write
> fields, etc. - are empty)
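> 
> While a job is still RUNNING, sacct leaves most of the usage columns
> empty; they are filled in as the steps complete. The live numbers for a
> running batch step come from sstat instead, e.g. (job id from the output
> above; requires a jobacct_gather plugin to be configured):
> 
> sstat -j 156.batch --format=JobID,AveCPU,AveRSS,MaxRSS,MaxVMSize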
> 
> Submission script:
> 
> #!/bin/bash
> #
> #SBATCH --job-name=test
> #SBATCH --output=res.txt
> #SBATCH --ntasks=36
> #SBATCH --time=5-00:00
> 
> # Record the submission directory, then stage the input files to local /tmp
> SLURM_SUB=$(pwd)
> echo "$SLURM_SUB" > testres
> echo "$SLURM_JOBID" > res2
> mkdir /tmp/job_$SLURM_JOBID
> cp *.psf /tmp/job_$SLURM_JOBID
> cp *.fdf /tmp/job_$SLURM_JOBID
> ls /tmp/job_$SLURM_JOBID >> res2
> cd /tmp/job_$SLURM_JOBID
> 
> # Intel Fortran runtime environment
> source /opt/intel/bin/ifortvars.sh intel64
> 
> # Run SIESTA on 18 of the 36 allocated tasks, then copy the log back
> /cluster_cti/utils/openmpi/openmpi-1.10.2/bin/mpirun -n 18 \
>     /cluster_cti/bin/SIESTA/bin/siesta-so2 < tecfai.fdf > tecfai.log
> cp *.log $SLURM_SUB
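> 
> For completeness, submitting and checking on the job looks like this (the
> script file name is an assumption):
> 
> sbatch test.sh
> squeue -u $USER -o "%.8i %.10P %.8j %.8T %.10M %R"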
> 
> Can anyone help me?
> 
> Regards,
> 
> Rémi