Hi, 

Could --exclusive (https://slurm.schedmd.com/sbatch.html#OPT_exclusive) do the trick?
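A minimal sketch of what that could look like in an sbatch script, assuming the partition honors --exclusive under OverSubscribe=FORCE (partition name and program are placeholders, not taken from your config):

```shell
#!/bin/bash
#SBATCH --partition=shared   # hypothetical partition with OverSubscribe=FORCE
#SBATCH --nodes=1
#SBATCH --exclusive          # ask for the whole node (all CPUs and GRES)
#SBATCH --mem=0              # ask for all of the node's memory

srun ./my_app                # placeholder for the actual workload
```

On a heterogeneous cluster this avoids listing per-node resources explicitly, which sounds like what you want, but it is worth checking how your Slurm version treats --exclusive on a FORCE-oversubscribed partition.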

Guillaume 


From: "Dustin Lang via slurm-users" <[email protected]> 
To: "Slurm User Community List" <[email protected]> 
Sent: Monday, 30 March 2026 16:12:11 
Subject: [slurm-users] Shared queue: how to request full node with all resources? 

Hi, 

With a partition with "OverSubscribe=FORCE" set, is there a way to request all 
the node resources? I see "--mem=0" does that for memory. But I do not see an 
option to request all the CPUs and GRES/TRES such as GPUs. I tried "--nodes=1 
--ntasks=1 --cpus-per-task=0", but "--cpus-per-task=0" does not do the same 
thing as "--mem=0". 

In other words, is it possible to have a queue where nodes can be shared 
between jobs, but with a simple way for an sbatch script to request the full 
node with all its memory, CPUs, and other resources? We have a heterogeneous 
cluster, so telling users (or jupyterhub scripts) to list exactly the resources 
they want doesn't really work. 

thanks, 
dustin 




-- 
slurm-users mailing list -- [email protected] 
To unsubscribe send an email to [email protected] 