We have a shared gres.conf that includes node names, which gives us the 
flexibility to specify node-specific settings for GPUs:

=====

NodeName=gpunode00[1-4] Name=gpu Type=k80 File=/dev/nvidia0 COREs=0-7
NodeName=gpunode00[1-4] Name=gpu Type=k80 File=/dev/nvidia1 COREs=8-15

=====

See the third example configuration at https://slurm.schedmd.com/gres.conf.html 
for reference.
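
Since the question mentions nodes with different GPU configurations: because 
Slurm only applies the gres.conf lines whose NodeName matches the local 
hostname, a single shared file can describe heterogeneous nodes. A minimal 
sketch (the extra node name, GPU type, and device paths below are assumptions 
for illustration):

=====

# Only lines matching the local hostname take effect on each node,
# so one shared file can cover nodes with different GPU hardware.
NodeName=gpunode00[1-4] Name=gpu Type=k80 File=/dev/nvidia[0-1]
NodeName=gpunode005 Name=gpu Type=v100 File=/dev/nvidia[0-3]

=====

Note that the corresponding Gres=... counts still have to be declared on each 
node's NodeName line in slurm.conf.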

> On Mar 5, 2020, at 9:24 AM, Durai Arasan <arasan.du...@gmail.com> wrote:
> 
> When configuring a Slurm cluster, you need to have a copy of the 
> configuration file slurm.conf on all nodes, and these copies are identical. 
> If you use GPUs in your cluster, there is an additional configuration file 
> that you need on all nodes: gres.conf. My question is - will this file be 
> different on each node depending on the configuration of that node, or will 
> it be identical on all nodes (like slurm.conf)? Assume that the slave nodes 
> have different configurations of GPUs and are not identical.
> 
> 
> Thank you,
> Durai

