Below is the whole thing:

# slurm.conf file generated by configurator easy.html.
# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
ControlMachine=fission
#ControlAddr=
#
#MailProg=/bin/mail
MpiDefault=none
#MpiParams=ports=#-#
ProctrackType=proctrack/pgid
ReturnToService=2
SlurmctldPidFile=/var/run/slurmctld.pid
#SlurmctldPort=6817
SlurmdPidFile=/var/run/slurmd.pid
#SlurmdPort=6818
SlurmdSpoolDir=/cm/local/apps/slurm/var/spool
SlurmUser=slurm
#SlurmdUser=root
StateSaveLocation=/cm/shared/apps/slurm/var/cm/statesave
SwitchType=switch/none
#TaskPlugin=task/none
TaskPlugin=task/affinity          # enable task affinity
#
#
# TIMERS
#KillWait=30
#MinJobAge=300
#SlurmctldTimeout=120
#SlurmdTimeout=300
#
#
# SCHEDULING
FastSchedule=0
SchedulerType=sched/backfill
#SchedulerPort=7321
#ap original: whole nodes as the consumable resource; use SelectType=select/linear only
#SelectType=select/linear
SelectType=select/cons_res
SelectTypeParameters=CR_Core
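# Editor's note: select/cons_res with CR_Core makes individual cores the
# consumable resource, so a job may request more than one CPU per task on a
# shared node (contrast with select/linear, which allocates whole nodes).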
#
#
# LOGGING AND ACCOUNTING
AccountingStorageType=accounting_storage/none
#AccountingStorageType=accounting_storage/filetxt
ClusterName=SLURM_CLUSTER
#JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
#JobAcctGatherType=jobacct_gather/linux
#ap inserted below
#JobCompType=jobcomp/filetxt
#JobCompLoc=/var/log/slurm/job_completions
#AccountingStorageLoc=/var/log/slurm/accounting
#ap
#SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurmctld
#SlurmdDebug=3
SlurmdLogFile=/var/log/slurmd
#
#
# COMPUTE NODES
NodeName=n0[01-10]
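# Editor's note: no CPUs/Sockets/CoresPerSocket are given on the NodeName line
# above; with FastSchedule=0 the hardware actually reported by slurmd on each
# node is used for scheduling decisions.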
PartitionName=debug Nodes=n[001-010] Default=YES MaxTime=00:01:00 State=UP Shared=YES AllowGroups=ALL DisableRootJobs=NO RootOnly=NO Hidden=YES Priority=1000
PartitionName=GPU Nodes=n0[01-05,08,10] Default=NO MaxTime=INFINITE State=UP Shared=YES:4 AllowGroups=ALL DisableRootJobs=NO RootOnly=NO Priority=50
PartitionName=DAY Nodes=n0[01-10] MaxNodes=1 Default=NO MaxTime=24:00:00 State=UP Shared=YES:4 AllowGroups=ALL DisableRootJobs=NO RootOnly=NO Priority=100
PartitionName=WEEK Nodes=n0[01-10] Default=NO MaxTime=5-00:00 State=UP Shared=YES:2 AllowGroups=ALL DisableRootJobs=NO RootOnly=NO Priority=10
PartitionName=UNLIM Nodes=n0[01-10] Default=NO MaxTime=INFINITE State=UP Shared=YES:2 AllowGroups=ALL DisableRootJobs=NO RootOnly=NO Priority=1
#FastSchedule=1
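
For completeness, here is a minimal test-script sketch that exercises the cons_res/CR_Core settings above; the job name, partition choice, and command are illustrative assumptions, not taken from the original thread:

#!/bin/bash
#SBATCH --job-name=cpt-test        # illustrative job name
#SBATCH --partition=debug          # assumes the debug partition defined above
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2          # the request from the original question
srun hostname

Submitted with sbatch, this should be allocated two cores on one node, provided the node's reported core count is at least two.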

On Mon, Sep 21, 2015 at 9:37 PM, Christopher Samuel <[email protected]>
wrote:

>
> On 22/09/15 06:37, Andrew Petersen wrote:
>
> > However if I use 2,
> > #SBATCH --cpus-per-task=2
> > I get the error "sbatch: error: Batch job submission failed: Requested
> > node configuration is not available"
>
> What does your slurm.conf look like?
>
> --
>  Christopher Samuel        Senior Systems Administrator
>  VLSCI - Victorian Life Sciences Computation Initiative
>  Email: [email protected] Phone: +61 (0)3 903 55545
>  http://www.vlsci.org.au/      http://twitter.com/vlsci
>
