The dynamic process allocation is only integrated with Hadoop at this point.
What you probably want is this: http://slurm.schedmd.com/faq.html#job_size
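A rough sketch of the resize workflow that FAQ entry describes (the job IDs and node counts below are placeholders; see the FAQ for the exact steps, including the scripts Slurm writes out to update the job's environment variables):

  # Shrink job 1234 to 2 nodes, releasing the rest for other work:
  scontrol update JobId=1234 NumNodes=2

  # Grow job 1234: first request extra nodes in a second job that
  # depends on it with the "expand" dependency type...
  salloc -N2 --dependency=expand:1234 bash
  # ...then, from within that new allocation, hand its resources
  # back to the original job:
  scontrol update JobId=$SLURM_JOB_ID NumNodes=0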
Quoting Andrew Petersen <[email protected]>:

Hello, I see that Slurm can do dynamic process allocation: http://slurm.schedmd.com/dynalloc.html However, the description is brief and I don't know if this can do what I want. I want to be able to:
1) run an MPI job;
2) if cores are available, spawn more jobs (to take up the whole cluster, for example);
3) if another job starts, have processes/spawns terminated to make room for the other job, if it has equal or higher priority.
Can I get Slurm to do this?

Regards,
Andrew Petersen
--
Morris "Moe" Jette
CTO, SchedMD LLC

Slurm User Group Meeting
September 23-24, Lugano, Switzerland
Find out more: http://slurm.schedmd.com/slurm_ug_agenda.html
