Matt Hohmeister <hohmeis...@psy.fsu.edu> writes:

> Relatively new to Slurm here; I have someone who has asked if the
> following is possible:
>
> Allow Slurm to use as much memory on a node as exists on the node
> itself. If someone is running a process outside of Slurm, decrease
> Slurm’s memory usage to make way for the non-Slurm process.
>
> Is such a thing possible?

Hi Matt,

I don't think this is possible. Once Slurm has allocated memory to a
job, the allocation cannot currently be changed.

However, from my point of view, you really don't want to do this
anyway. The whole idea of a resource manager like Slurm is to maximise
the utilisation of your resources. If you have people running jobs
outside of Slurm as well, you will generate major headaches for
yourself.

You can increase the responsiveness for some users within Slurm by
tweaking the priorities (via accounts, partitions, QOS, ...).

You could also look at interactive jobs, whereby a user can just use
'srun' to start a shell on a node, e.g.

  srun --ntasks=1 --time=00:30:00 --mem=1000 bash

Of course, this will only work well if your cluster is not full or you
have a dedicated partition and, ultimately, it also leads to resources
being wasted.

HTH,

Loris

-- 
Dr. Loris Bennett (Mr.)
ZEDAT, Freie Universität Berlin         Email loris.benn...@fu-berlin.de
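P.S. If you do go the interactive route, a dedicated partition keeps
such jobs from competing with your batch work. A minimal sketch of what
that might look like in slurm.conf (the partition name, node list and
limits here are just made-up examples, not a recommendation for your
site):

```
# slurm.conf sketch: a small partition reserved for interactive jobs.
# 'interactive' and 'node[01-02]' are hypothetical; adjust to your site.
# Short MaxTime and a default memory limit keep the jobs lightweight.
PartitionName=interactive Nodes=node[01-02] MaxTime=02:00:00 DefMemPerCPU=1000 PriorityTier=10 State=UP
```

Users would then request a shell with something like

  srun --partition=interactive --ntasks=1 bash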