Hi Jette,

Thanks for your ultra-fast answer. Unfortunately (again), it does not
affect the number of jobs running; I had previously tried it without
success.
NodeName=node[69-71] RealMemory=23000 Procs=16 Sockets=2
CoresPerSocket=4 ThreadsPerCore=2 State=UNKNOWN

Could there be another limit somewhere? Here is my complete configuration file:
https://gist.github.com/882507
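
One parameter I have not tried yet, and I am only guessing from the
cons_res documentation here, is SelectTypeParameters. If the default
allocation unit is a core, then each of my hyperthreaded cores would
count as a single schedulable unit and cap me at 8 jobs. Something
like this (untested on my cluster):

```
# slurm.conf sketch -- CR_CPU is my assumption, not verified here
SelectType=select/cons_res
SelectTypeParameters=CR_CPU   # allocate at the CPU/thread level instead of per core
NodeName=node[69-71] Procs=16 RealMemory=23000 Sockets=2 CoresPerSocket=4 ThreadsPerCore=2 State=UNKNOWN
```

Does that sound like it could be the limit I am hitting?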

Thanks

Nicolas

On Tue, Mar 22, 2011 at 10:25 PM,  <[email protected]> wrote:
> Try adding "Procs=16" to the NodeName line.
> By default, SLURM schedules one task per core.
>
> Quoting Nicolas Bigaouette <[email protected]>:
>
>> Hi all,
>>
>> I want to be able to submit 16 serial jobs to each of my compute
>> nodes at the same time, since each node has two sockets, four cores
>> per socket, and hyperthreading. We see a speedup when saturating a
>> node with 16 different serial jobs (launched manually), so I want to
>> take advantage of this with SLURM.
>>
>> I thought it would be easy...
>>
>> Unfortunately, I always get at most 8 jobs running on nodes.
>>
>> Here is the relevant (I think) part of /etc/slurm.conf:
>> # SCHEDULING
>> #DefMemPerCPU=0
>> FastSchedule=1
>> #MaxMemPerCPU=0
>> #SchedulerRootFilter=1
>> #SchedulerTimeSlice=30
>> SchedulerType=sched/backfill
>> SchedulerPort=7321
>> SelectType=select/cons_res
>> NodeName=node[69-71] RealMemory=23000 Sockets=2 CoresPerSocket=4
>> ThreadsPerCore=2 State=UNKNOWN
>> PartitionName=test         Nodes=node[69-71]
>> MaxTime=INFINITE State=UP
>>
>> The logs don't show anything interesting. For example, setting
>> ThreadsPerCore to 1 makes the compute nodes log a warning that the
>> hardware CPU count does not match the configuration, so the compute
>> nodes are correctly detecting the number of available threads.
>>
>> How can I achieve this?
>>
>> Thanks!
>>
