Thank you, Michael!

I've tried the following example:

    NodeName=gpunode01 Gres=gpu:1 Sockets=2 CoresPerSocket=28 ThreadsPerCore=2 State=UNKNOWN RealMemory=380000
    PartitionName=gpu MaxCPUsPerNode=56 MaxMemPerNode=190000 Nodes=gpunode01 Default=NO MaxTime=1-0 State=UP
    PartitionName=cpu MaxCPUsPerNode=56 MaxMemPerNode=190000 Nodes=gpunode01 Default=YES MaxTime=1-0 State=UP
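
(For reference, this is roughly how I reload and verify the settings afterwards, just the standard scontrol commands:)

    scontrol reconfigure
    scontrol show partition gpu
    scontrol show partition cpu
    scontrol show node gpunode01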

1) When the system is idle, the following "gpu" job will start immediately ("gpu" partition, 1 GPU, 20 CPUs):

    srun -p gpu --gpus=1 -c 20 --pty bash -i

2) If I run the same command again, it will be queued, which is expected since the single GPU is already in use ("gpu" partition, 1 GPU, 20 CPUs):

    srun -p gpu --gpus=1 -c 20 --pty bash -i

3) Then the following "cpu" job will be queued too ("cpu" partition, 0 GPUs, 20 CPUs):

    srun -p cpu --gpus=0 -c 20 --pty bash -i
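
(In case it helps, the pending state and reason for the queued jobs can be inspected with something like the following, where <jobid> is the queued job's ID:)

    squeue -j <jobid> -o "%i %P %T %r"
    scontrol show job <jobid> | grep -E "JobState|Reason"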

Is there a way to let the "cpu" job run instead of waiting?

Any suggestions?

Thanks again,

Weijun

On 12/16/2020 2:54 PM, Renfro, Michael wrote:

We have overlapping partitions for GPU work and some kinds of non-GPU work (both large-memory and regular-memory jobs).

For 28-core nodes with 2 GPUs, we have:

PartitionName=gpu MaxCPUsPerNode=16 … Nodes=gpunode[001-004]

PartitionName=any-interactive MaxCPUsPerNode=12 … Nodes=node[001-040],gpunode[001-004]

PartitionName=bigmem MaxCPUsPerNode=12 … Nodes=gpunode[001-003]

PartitionName=hugemem MaxCPUsPerNode=12 … Nodes=gpunode004

Worst case, non-GPU jobs could reserve up to 24 of the 28 cores on a GPU node (12 from any-interactive plus 12 from bigmem or hugemem), but only for a limited time (our any-interactive partition has a 2-hour time limit). In practice, it's let us use a lot of otherwise idle CPU capacity in the GPU nodes for short test runs.
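
(For illustration only, the node side is just the normal node definition, with the GPU nodes listed in each of the overlapping partitions; the values below are placeholders rather than our actual config:)

NodeName=gpunode[001-004] Sockets=2 CoresPerSocket=14 Gres=gpu:2 RealMemory=190000 State=UNKNOWN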

*From: *slurm-users <[email protected]>
*Date: *Wednesday, December 16, 2020 at 1:04 PM
*To: *Slurm User Community List <[email protected]>
*Subject: *[slurm-users] using resources effectively?

Hi,

Say I have a Slurm node with 1 x GPU and 112 x CPU cores, and:

     1) there is a job running on the node using the GPU and 20 x CPU cores

     2) there is a job waiting in the queue asking for 1 x GPU and 20 x
CPU cores

Is it possible to:

     a) let a new job asking for 0 x GPU and 20 x CPU cores (safe for
the queued GPU job) start immediately, and

     b) let a new job asking for 0 x GPU and 100 x CPU cores (not safe
for the queued GPU job) wait in the queue?

Or c) is it doable to put the node into two Slurm partitions, 56 CPU
cores to a "cpu" partition and 56 CPU cores to a "gpu" partition, for
example?

Thank you in advance for any suggestions / tips.

Best,

Weijun

===========
Weijun Gao
Computational Research Support Specialist
Department of Psychology, University of Toronto Scarborough
1265 Military Trail, Room SW416
Toronto, ON M1C 1M2
E-mail: [email protected]
