Take a look at the node "Weight" option described in the slurm.conf man page. The backfill scheduling plugin also takes job memory requirements into account.
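For illustration, a minimal slurm.conf sketch (node names, CPU counts, and memory sizes here are hypothetical): nodes with a lower Weight are selected first, so small-memory nodes fill up before the large ones, and the backfill scheduler can still slot short jobs onto partially used large nodes.

```
# Prefer small-memory nodes: lower Weight = allocated first.
NodeName=small[01-10] CPUs=16 RealMemory=32768  Weight=10
NodeName=large[01-04] CPUs=16 RealMemory=131072 Weight=50

# Backfill scheduling considers both time limits and memory.
SchedulerType=sched/backfill
```

Backfill only works well if jobs set reasonable time limits, since the scheduler must know a small job will finish before the large job's resources are needed.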

Quoting Ulf Markwardt <[email protected]>:

Dear Slurm developers,

where can I find the rules Slurm uses for the allocation of CPUs and memory? How can I fine-tune them?

We have nodes with 2, 4, and 8 GB RAM per core. Jobs with low memory requirements should normally run only on the small nodes. But it would be good to fill the larger nodes too, if the run time of a smaller job allows it to be "backfilled" there.

Example: A job allocates 4 cores and 100 GB on a large node, leaving 12 cores and 20 GB free. Now I could fill the node with smaller jobs as long as they finish before the large one.

How does Slurm handle this? What options do I have for fine-tuning?

Thank you,
Ulf

--
___________________________________________________________________
Dr. Ulf Markwardt

Dresden University of Technology
Center for Information Services and High Performance Computing (ZIH)
01062 Dresden, Germany

Phone: (+49) 351/463-33640      WWW:  http://www.tu-dresden.de/zih


