Trevor, 

It does depend somewhat on how your cluster is configured, but it sounds
like what you need is a job submission script that requests enough
resources for a single one of your jobs, which you then submit as an array.
Read the man page for sbatch to determine which switches to use in the
batch script to request what you need. I would also check whether your site
publishes a user guide for the machine; there is usually at least a quick
start guide for a specific resource.
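As a rough sketch, a minimal array-friendly batch script might look like the
following. All of the values below (job name, memory, walltime, the
"simulate" command) are made-up placeholders; your site's guide will have
the real partitions and limits.

```shell
#!/bin/bash
# Sketch of a script that requests resources for ONE simulation task.
# Every value here is a placeholder -- adjust for your cluster.
#SBATCH --job-name=fault-inject
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=2G                 # memory per task
#SBATCH --time=01:00:00          # walltime limit per task
#SBATCH --output=slurm-%A_%a.out # %A = array job ID, %a = array task ID

./simulate   # your simulation command (placeholder)
```

You would then submit, say, 100 copies of it with:
sbatch --array=1-100 jobscript.sh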

Once your script is written, you can pass the --array=##-## switch to
sbatch and it will submit as many jobs as the array defines, setting the
environment variable SLURM_ARRAY_TASK_ID in each one to that task's index
in the array.

If the tasks need to run different commands or take different inputs, you
can use this variable to do things like read the corresponding line of an
input file, or execute different commands inside the submission script
itself with if statements on the value of the variable.
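Both of those patterns can be sketched like this (the file name
"inputs.txt", the threshold, and the echoed commands are all hypothetical;
the input file is created in the sketch only so it runs standalone):

```shell
#!/bin/bash
# Sketch: using SLURM_ARRAY_TASK_ID to vary inputs per task.

# Stand-in input file, one parameter set per line. In practice this
# file would already exist; it is created here only for the demo.
printf 'seed=1\nseed=2\nseed=3\n' > inputs.txt

# Outside of Slurm (e.g. a dry run) fall back to task 2.
TASK_ID=${SLURM_ARRAY_TASK_ID:-2}

# With --array=1-N, task IDs line up with line numbers 1..N, so each
# task can read its own line of the input file.
PARAMS=$(sed -n "${TASK_ID}p" inputs.txt)

# Or branch on the ID to run different commands entirely.
if [ "$TASK_ID" -le 2 ]; then
    echo "would run: simulate --quick $PARAMS"
else
    echo "would run: simulate --full $PARAMS"
fi
```

Run outside Slurm, the fallback ID of 2 picks the second input line and
takes the first branch.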

Good luck,
Buddy.


-----Original Message-----
From: Trevor Gale [mailto:[email protected]] 
Sent: Friday, May 08, 2015 1:07 PM
To: slurm-dev
Subject: [slurm-dev] launching a variable number of jobs in slurm


Hello everyone,

I’m developing a piece of software that runs fault injection simulations on a
cluster running slurm, and I'm trying to figure out the ideal method for
launching a potentially massive number of jobs. I’m not very familiar with
slurm, and had a question about how slurm allocates resources:
        -if I simply call sbatch without specifying any resource requirements,
does slurm automatically allocate sufficient resources? or does it depend upon
the configuration of the cluster?

Thanks,
Trevor
