It sounds like you really want/need OverSubscribe to be off.
That said, perhaps you could combine --exclusive with --mem=0.
The --mem=0 should still allocate all of the memory (and thus the entire node),
and --exclusive should let the job access all of the CPUs/cores/GPUs.
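A minimal sbatch sketch of that combination (the partition name and the
program below are just placeholders):

    #!/bin/bash
    #SBATCH --partition=shared   # hypothetical OverSubscribe=FORCE partition
    #SBATCH --nodes=1
    #SBATCH --exclusive          # request the node's CPUs/cores/GPUs exclusively
    #SBATCH --mem=0              # request all of the node's memory

    srun ./my_program            # placeholder command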
Brian Andrus
On 3/30/2026 8:46 AM, Dustin Lang via slurm-users wrote:
Hi,
No, unfortunately -- "the partition's *OverSubscribe* option takes
precedence over the job's option"
Thanks for the suggestion, though!
-dustin
On Mon, Mar 30, 2026 at 10:33 AM Guillaume COCHARD
<[email protected]> wrote:
Hi,
Could --exclusive (
https://slurm.schedmd.com/sbatch.html#OPT_exclusive ) do the trick?
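For instance, something like (the script name is just a placeholder):

    sbatch --nodes=1 --exclusive job.sh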
Guillaume
------------------------------------------------------------------------
*De: *"Dustin Lang via slurm-users" <[email protected]>
*À: *"Slurm User Community List" <[email protected]>
*Envoyé: *Lundi 30 Mars 2026 16:12:11
*Objet: *[slurm-users] Shared queue: how to request full node with
all resources?
Hi,
On a partition with "OverSubscribe=FORCE" set, is there a way to
request all of a node's resources? I see that "--mem=0" does this for
memory, but I do not see an option to request all of the CPUs and
GRES/TRES such as GPUs. I tried "--nodes=1 --ntasks=1
--cpus-per-task=0", but "--cpus-per-task=0" does not behave the same
way as "--mem=0".
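For reference, a partition of this sort would typically be defined
along these lines in slurm.conf (the node names and sizes below are
made up, not our actual config):

    NodeName=gpu[01-08] CPUs=64 RealMemory=512000 Gres=gpu:4
    PartitionName=shared Nodes=gpu[01-08] OverSubscribe=FORCE Default=YES State=UP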
In other words, is it possible to have a queue where nodes can be
shared between jobs, but still give an sbatch script a simple way to
request the full node with all of its memory, CPUs, and other
resources? We have a heterogeneous cluster, so telling users (or
JupyterHub scripts) to list exactly the resources they want doesn't
really work.
thanks,
dustin
--
slurm-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]