On 22/03/17 08:35, kesim wrote:
> You are right. Many thanks for correcting.
Just note that load average is not necessarily the same as CPU load.
If you have tasks blocked for I/O they will contribute to load average
but will not be using much CPU at all.
So, for instance, on one of our
You are right. Many thanks for correcting.
On Tuesday, March 21, 2017, Benjamin Redling wrote:
>
> re hi,
>
> your script will occasionally fail because the number of fields in the
> output of "uptime" is variable.
> I was reminded by this one:
>
re hi,
your script will occasionally fail because the number of fields in the
output of "uptime" is variable.
I was reminded by this one:
http://stackoverflow.com/questions/11735211/get-last-five-minutes-load-average-using-ksh-with-uptime
Even more of a reason to use /proc...
Regards,
Benjamin
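To illustrate the problem: the field positions in "uptime" output shift with
the locale and with how long the machine has been up, so any fixed field index
is only an example, whereas /proc/loadavg has a fixed layout (the sample lines
below are illustrative, not from a real machine):

    #  08:35:01 up 2 days,  3:14,  5 users,  load average: 0.42, 0.37, 0.31
    #  08:35:01 up 14 min,  1 user,  load average: 0.42, 0.37, 0.31
    # in the first line the 15-min average is field 12, in the second field 11,
    # so parsing by field number breaks; /proc/loadavg is always five fields:
    cat /proc/loadavg     # e.g. "0.42 0.37 0.31 1/123 4567"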
Hi,
if you don't want to depend on the whitespace in the output of "uptime"
(the number of fields depends on the locale) you can improve that via "awk
'{print $3}' /proc/loadavg" (for the 15-min avg) -- it's always better to
avoid programmatically accessing output made for humans as long as
There is an error in the script. It could be:
scontrol update node=your_node_name WEIGHT=`echo 100*$(uptime | awk '{print $12}')/1 | bc`
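Putting the /proc/loadavg suggestion together with that corrected call, a
minimal cron-able sketch could look like the following (the node name and the
factor of 100 are placeholders; bc with its default scale of 0 truncates the
result to the integer weight that Slurm expects):

    #!/bin/bash
    # illustrative only: recompute this node's weight from the 15-min load
    node=your_node_name                        # placeholder node name
    load15=$(awk '{print $3}' /proc/loadavg)   # 15-minute load average
    weight=$(echo "$load15 * 100 / 1" | bc)    # truncate to an integer
    scontrol update NodeName="$node" Weight="$weight"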
On Tue, Mar 21, 2017 at 8:41 PM, kesim wrote:
> Dear SLURM Users,
>
> My response here is for those who are trying to solve the simple
Dear SLURM Users,
My response here is for those who are trying to solve the simple problem of
ordering nodes according to CPU load. Actually, Markus was right and he
gave me the idea (THANKS!!!)
The solution is not pretty but it works and it has a lot of flexibility.
Just put into cron a
We are faced with the problem that one of the tasks to be used in a Slurm
multiprog setup needs several CPUs, while all the other tasks barely use one.
The tasks do have to run in parallel on the same node though, so splitting
the thing up is not an option.
Now we could always just increase the
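For context, a hypothetical sketch of the kind of setup being described (the
program names and task counts are made up): an srun --multi-prog configuration
maps task ranks to commands and launches them together on one node, but every
task in the step gets the same per-task CPU allocation.

    # multi.conf (illustrative only)
    0     ./heavy_task        # this one needs several CPUs
    1-4   ./light_task %t     # these barely use one CPU each

    # launch all five tasks in parallel on a single node
    srun -N 1 -n 5 --multi-prog multi.conf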