You are right. Many thanks for the correction.

On Tuesday, March 21, 2017, Benjamin Redling <benjamin.ra...@uni-jena.de>
wrote:

>
> re hi,
>
> your script will occasionally fail because the number of fields in the
> output of "uptime" is variable.
> I was reminded by this one:
> http://stackoverflow.com/questions/11735211/get-last-five-minutes-load-average-using-ksh-with-uptime
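
For illustration (these sample lines are made up, but the field-count problem is real): once the machine has been up for more than a day, "uptime" prints an extra "N days," chunk, so the load averages land in different awk fields than on a freshly booted box.

     21:15:01 up 10 days,  2:03,  3 users,  load average: 0.52, 0.58, 0.59
     21:15:01 up 23 min,  3 users,  load average: 0.52, 0.58, 0.59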
>
> All the more reason to use /proc...
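
A minimal sketch of the /proc-based variant (untested; "your_node_name" is a placeholder, and the 100x scaling and the "/1" truncation simply mirror the scripts quoted below):

    #!/bin/sh
    # The first field of /proc/loadavg is the 1-minute load average,
    # and its layout is fixed, unlike uptime's output.
    load1=$(awk '{print $1}' /proc/loadavg)
    # Scale to an integer, since node weights must be whole numbers.
    scontrol update NodeName=your_node_name Weight=$(echo "$load1 * 100 / 1" | bc)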
>
> Regards,
> Benjamin
>
> On 21.03.2017 at 21:15, kesim wrote:
> > There is an error in the script. It could be:
> >
> > scontrol update node=your_node_name WEIGHT=`echo 100*$(uptime | awk
> > '{print $12}')/1 | bc`
> >
> >
> > On Tue, Mar 21, 2017 at 8:41 PM, kesim <ketiw...@gmail.com> wrote:
> >
> >     Dear SLURM Users,
> >
> >     My response here is for those who are trying to solve the simple
> >     problem of ordering nodes according to their CPU load. Actually,
> >     Markus was right and he gave me the idea (THANKS!!!)
> >     The solution is not pretty, but it works and it has a lot of
> >     flexibility. Just put a script into cron:
> >
> >     #!/bin/sh
> >     scontrol update node=your_node_name WEIGHT=`echo 100*$(uptime | awk
> >     -F'[, ]' '{print $21}')/1 | bc`
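
For completeness, one way to run such a script periodically (the path and the five-minute interval are assumptions, not from the thread) is a root crontab entry:

    */5 * * * * /usr/local/sbin/update_node_weight.sh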
> >
> >     Best Regards,
> >
> >     Ketiw
> >
> >
> >
> >
> >     On Mon, Mar 20, 2017 at 3:31 PM, Markus Koeberl
> >     <markus.koeb...@tugraz.at> wrote:
> >
> >
> >         On Monday 20 March 2017 05:38:29 Christopher Samuel wrote:
> >         >
> >         > On 19/03/17 23:25, kesim wrote:
> >         >
> >         > > I have 11 nodes and declared 7 CPUs per node. My setup is
> >         > > such that all desktops belong to group members who use them
> >         > > mainly as graphics stations. Therefore, from time to time an
> >         > > application requests heavy CPU usage.
> >         >
> >         > In this case I would suggest you carve off 3 cores via cgroups
> >         > for interactive users and give Slurm the other 7 to parcel out
> >         > to jobs by ensuring that Slurm starts within a cgroup dedicated
> >         > to those 7 cores.
> >         >
> >         > This is similar to the "boot CPU set" concept that SGI came up
> >         > with (at least I've not come across people doing that before them).
> >         >
> >         > To be fair this is not really Slurm's problem to solve; Linux
> >         > gives you the tools to do this already, it's just that people
> >         > don't realise that you can use cgroups to do this.
> >         >
> >         > Your use case is valid, but it isn't really HPC, and you can't
> >         > really blame Slurm for not catering to this. It can use cgroups
> >         > to partition cores to jobs precisely so it doesn't need to care
> >         > what the load average is - it knows the kernel is ensuring the
> >         > cores the jobs want are not being stomped on by other tasks.
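
A rough sketch of that core-carving idea with a cgroup v1 cpuset (the mount point, group names, and a 0-9 core numbering with a 3/7 split are assumptions, not a tested recipe):

    # 3 cores reserved for interactive desktop sessions.
    mkdir -p /sys/fs/cgroup/cpuset/interactive
    echo 0-2 > /sys/fs/cgroup/cpuset/interactive/cpuset.cpus
    echo 0   > /sys/fs/cgroup/cpuset/interactive/cpuset.mems
    # 7 cores for Slurm; putting slurmd in this cpuset means every job
    # it launches inherits the restriction.
    mkdir -p /sys/fs/cgroup/cpuset/slurm
    echo 3-9 > /sys/fs/cgroup/cpuset/slurm/cpuset.cpus
    echo 0   > /sys/fs/cgroup/cpuset/slurm/cpuset.mems
    echo $(pidof slurmd) > /sys/fs/cgroup/cpuset/slurm/tasks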
> >
> >         You could additionally define a higher "Weight" value for a host
> >         if you know that the load is usually higher on it than on the
> >         others.
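
For example (node names and values are made up), slurm.conf lets you give a usually-busier desktop a higher Weight so that Slurm fills the lower-weight nodes first:

    NodeName=desktop[01-10] CPUs=7 Weight=10
    NodeName=desktop11      CPUs=7 Weight=50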
> >
> >
> >         regards
> >         Markus Köberl
> >         --
> >         Markus Koeberl
> >         Graz University of Technology
> >         Signal Processing and Speech Communication Laboratory
> >         E-mail: markus.koeb...@tugraz.at
> >
> >
> >
>
>
> --
> FSU Jena | JULIELab.de/Staff/Benjamin+Redling.html
> vox: +49 3641 9 44323 | fax: +49 3641 9 44321
>
