Dear Ulf,

You can try defining the parameter DefMemPerCPU=400 in your slurm.conf
file. This sets a default memory limit of 400MB per requested core for
each job.
This should be enough to fulfil your requirement.
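
For reference, a minimal sketch of the relevant slurm.conf lines. This
assumes the cons_res select plugin; the SelectType lines are only needed
if memory is not already configured as a consumable resource (without
that, DefMemPerCPU sets a default but memory is not enforced per node):

    # slurm.conf (sketch, values illustrative)
    SelectType=select/cons_res
    SelectTypeParameters=CR_Core_Memory   # track both cores and memory
    DefMemPerCPU=400                      # default: 400MB per allocated core

Jobs that need more can still request it explicitly with --mem-per-cpu,
subject to any MaxMemPerCPU limit you configure.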

Regards,
Carles Fenoy
Barcelona Supercomputing Center


On Thu, Dec 12, 2013 at 12:24 PM, Ulf Markwardt <[email protected]> wrote:

> Dear list,
>
> we run our Slurm in a shared mode, our nodes have different sizes of
> memory installed.
>
> How can I make sure that a job can only be scheduled if the remaining
> cores and the remaining memory on each node have a ratio of 400MB/core?
>
> We have managed this on our older systems without a problem, and would
> like to set up this functionality on our Slurm system as well.
>
> Thank you,
> Ulf
>
>
> --
> ___________________________________________________________________
> Dr. Ulf Markwardt
>
> Technische Universität Dresden
> Center for Information Services and High Performance Computing (ZIH)
> 01062 Dresden, Germany
>
> Phone: (+49) 351/463-33640      WWW:  http://www.tu-dresden.de/zih
>
>


-- 
Carles Fenoy
