What would be, let's say, the most efficient way to configure Slurm to
manage a very small cluster (around 20 nodes) whose nodes have the
following characteristics (a rough slurm.conf sketch follows the list):
- A few nodes with:
        Intel Core i7
        NVIDIA GeForce GTX 680
        8 GB RAM
- Some other nodes with:
        Intel Core i7
        8 GB RAM
- And some other nodes with:
        Intel QuadCore
        4 GB RAM

My main goal is to set up priorities like this (a sketch of what I'm
imagining follows this list):
- The nodes with the best configuration are the first to get jobs, but
it's VERY important that the GPUs in these nodes remain usable for jobs
that require GPUs.
  Example:
    1. Let's say node01 has CPUs=4 and Gres=gpu:1, and node02 has
       CPUs=4 and no GPU.
    2. A user submits a job that does not use a GPU but needs 5 CPUs.
    3. My idea in that case is to have the job use only 3 CPUs on
       node01 and 2 CPUs on node02, because node01 has a GPU and the
       job does not use it, so one CPU and the GPU on node01 should
       stay reserved for jobs that require a GPU.
    4. Another user then submits a job that requires a GPU, and it
       gets executed on node01 because the GPU and (at least) 1 CPU
       are reserved there.
- Nodes with slower performance are the last to receive jobs
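
To make that concrete, here is the kind of configuration I'm wondering
about: node Weight to control the ordering (lower weight means a node
is allocated first), plus overlapping partitions where the default,
non-GPU partition uses MaxCPUsPerNode to leave one core free on the GPU
nodes. I'm not sure this actually gives the behaviour described above;
hostnames and values are placeholders again:

    # slurm.conf (excerpt) -- placeholder values
    # Lower Weight = allocated first: GPU nodes first, QuadCores last
    NodeName=node[01-05] CPUs=4 RealMemory=7800 Gres=gpu:1 Weight=1
    NodeName=node[06-12] CPUs=4 RealMemory=7800 Weight=2
    NodeName=node[13-20] CPUs=4 RealMemory=3800 Weight=3

    # "cpu" is the default partition for non-GPU jobs; MaxCPUsPerNode=3
    # keeps one core per node free, so the GPU plus one core on node01
    # stay available for jobs submitted to the "gpu" partition
    PartitionName=cpu Nodes=node[01-20] Default=YES MaxCPUsPerNode=3 State=UP
    PartitionName=gpu Nodes=node[01-05] Default=NO State=UP

GPU jobs would then be submitted with something like
"sbatch -p gpu --gres=gpu:1 job.sh". One thing I'm unsure about is that
MaxCPUsPerNode would also cap non-GPU jobs at 3 cores per node on the
nodes that have no GPU at all, which is not really what I want.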

Note: MPI support would be great.

I don't know if that's possible, and if it isn't, how would you suggest
configuring this cluster so that the GPUs are allocated properly?
Thanks.
