Aaron,

You have discovered a case where the design/implementation outpaced the 
documentation.  Originally, usage was normalized to the theoretical maximum 
usage, subject to half-life decay, as documented.  But we found that, on 
under-utilized systems, each user's normalized usage was very low, which gave 
every user a high fair-share factor.  The result was that the fair-share 
factors for all users crowded toward 1.0, and one had to increase the 
fair-share weight to resolve (differentiate) them.

By normalizing usage to the total actual usage instead, we prevented this 
artifact.  It also made the code a little simpler.  With this change, we still 
maintain "fairness".  However, users may notice minor variations in their 
reported normalized usage based on usage outside their own accounts.  This 
variation diminishes the more fully the cluster is utilized, or as the 
PriorityDecayHalfLife configuration value is increased.
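For illustration, here is a toy sketch (not Slurm source code; all numbers and
names are made up) contrasting the two normalizations, using the simplified
fair-share form F = 2^(-normalized_usage / normalized_shares):

```python
def fair_share(norm_shares, norm_usage):
    # Simplified fair-share factor: F = 2^(-normalized_usage / normalized_shares).
    return 2 ** (-norm_usage / norm_shares)

# Hypothetical decayed usage (CPU-seconds) on a lightly used cluster.
raw_usage = {"alice": 800.0, "bob": 200.0}
shares = {"alice": 0.5, "bob": 0.5}          # equal shares

theoretical_max = 100_000.0                  # cluster is only ~1% utilized
total_actual = sum(raw_usage.values())

for user, usage in raw_usage.items():
    old_norm = usage / theoretical_max       # old: vs. theoretical maximum
    new_norm = usage / total_actual          # new: vs. total actual usage
    # Old scheme: both factors land very close to 1.0 (crowded).
    # New scheme: the factors spread out and reflect relative usage.
    print(user,
          round(fair_share(shares[user], old_norm), 3),
          round(fair_share(shares[user], new_norm), 3))
```

With the old normalization both users' factors sit just below 1.0 despite a
4:1 usage ratio; with the new normalization the heavier user's factor drops
well below the lighter user's, which is the crowding effect described above.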

I will update the HTML page to reflect the newer formula.

Don

From: [email protected] [mailto:[email protected]] On 
Behalf Of Aaron Knister
Sent: Wednesday, May 25, 2011 7:22 PM
To: slurm-dev
Subject: [slurm-dev] Normalized usage question

The multifactor priority documentation seems to suggest that normalized usage 
is calculated from the cluster's available CPU time.  In practice, however, it 
seems to be based on the sum of all raw usage.  I'm getting ready to implement 
fair-share priorities and am wondering which is the case.  The former would be 
ideal, as the fair-share values in the output of sshare would not flip-flop 
based on usage outside of a given account.  Of course, it's always possible 
something is borked in my testing setup :)
