On Mon, 2014-06-02 at 17:59 -0700, Franco Broi wrote: 
> 
> You can also use sbatch --gres=bandwidth ....

rather than adding a plugin. I should add that you still need to define
the bandwidth gres in slurm.conf:

GresTypes=bandwidth
NodeName=... Gres=bandwidth

...and in gres.conf:

Name=bandwidth Count=10
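
With Count=10 a node holds at most 10 allocated units of the gres, so
if every high-bandwidth job asks for one unit per node, e.g. (myjob.sh
just standing in for whatever batch script you actually run)

sbatch --gres=bandwidth:1 myjob.sh

Slurm won't start more than 10 of them on the same node at once.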

> 
> On Fri, 2014-05-30 at 10:34 -0700, [email protected] wrote: 
> > Define a GRES called "bandwidth" and create a job_submit plugin that  
> > sets a GRES value of "bandwidth:1" for jobs. It's relatively simple.  
> > See:
> > http://slurm.schedmd.com/gres.html
> > http://slurm.schedmd.com/job_submit_plugins.html
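
In case it helps, a bare-bones C version of that plugin could look
roughly like the sketch below. It's untested, "highbw" is a made-up
partition name, and the exact function signatures and boilerplate
differ between Slurm releases, so the job_submit/partition plugin
shipped in the Slurm source tree is the thing to copy from:

/* job_submit_bandwidth.c - sketch only: add gres "bandwidth:1" to any
 * job submitted to the (hypothetical) "highbw" partition that did not
 * already request a gres.  Built inside the Slurm source tree. */
#include <string.h>

#include <slurm/slurm.h>
#include <slurm/slurm_errno.h>

#include "src/common/xstring.h"
#include "src/slurmctld/slurmctld.h"

const char plugin_name[] = "Bandwidth gres job_submit plugin";
const char plugin_type[] = "job_submit/bandwidth";
const uint32_t plugin_version = 100;   /* match your Slurm release */

/* Called for every newly submitted job; some releases also pass a
 * char **err_msg argument here - check your version's headers. */
extern int job_submit(struct job_descriptor *job_desc, uint32_t submit_uid)
{
	if (job_desc->partition &&
	    (strcmp(job_desc->partition, "highbw") == 0) &&
	    (job_desc->gres == NULL))
		job_desc->gres = xstrdup("bandwidth:1");
	return SLURM_SUCCESS;
}

/* Called when a pending job is modified; nothing to do here. */
extern int job_modify(struct job_descriptor *job_desc,
		      struct job_record *job_ptr, uint32_t submit_uid)
{
	return SLURM_SUCCESS;
}

...and then something like JobSubmitPlugins=bandwidth in slurm.conf to
load it.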
> > 
> > Moe Jette
> > SchedMD
> > 
> > 
> > Quoting Brian Baughman <[email protected]>:
> > 
> > > Greetings,
> > >
> > > We run multiple types of jobs on our cluster and many of our nodes
> > > have 48 cores or more. We have found that some jobs sit idle on such
> > > nodes when their aggregate data access exceeds the available
> > > bandwidth. I was thinking of trying to create a “high-bandwidth”
> > > queue which would only allow, say, 10 such processes to run on each
> > > node so the bandwidth wouldn’t become a problem. Is such a thing
> > > possible with the slurm scheduler? If not, any suggestions on how to
> > > solve such a problem?
> > >
> > > Regards,
> > > Brian
> > 
