Rather than maximizing fragmentation cluster-wide, you probably want to do this on a per-job basis. If you want one core per node, submit with sbatch -N $numnodes -n $numnodes. Anything else would require the -m/--distribution flag; I haven't played with it recently, but I think you would want -m cyclic.
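
For example, something like this (untested, and the 4-node count is just illustrative) should land one task on each of four nodes, placed round-robin:

    # 4 nodes, 4 tasks total -> one task per node,
    # distributed cyclically (round-robin) across the nodes
    sbatch -N 4 -n 4 -m cyclic --wrap="srun hostname"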

Ryan

On 05/08/2014 11:49 AM, Atom Powers wrote:
How to spread jobs among nodes?

It appears that my Slurm cluster is scheduling jobs to load up nodes as much as possible before putting jobs on other nodes. I understand the reasons for doing this; however, I foresee my users wanting to spread jobs out among as many nodes as possible for various reasons, some of which are even valid.

How would I configure the scheduler to distribute jobs in something like a round-robin fashion to many nodes instead of loading jobs onto just a few nodes?

I currently have:
    'SchedulerType'         => 'sched/builtin',
    'SelectTypeParameters'  => 'CR_Core_Memory',
    'SelectType'            => 'select/cons_res',

--
Perfection is just a word I use occasionally with mustard.
--Atom Powers--

--
Ryan Cox
Operations Director
Fulton Supercomputing Lab
Brigham Young University
