Hi,

Am 31.10.2011 um 22:16 schrieb Bill Hoover:

> I've been trying to figure out a way to get the queue sorting to work like 
> I'd prefer, but so far haven't come up with any good ideas.  I'd appreciate 
> any suggestions (GE 6.2u5).
> 
> Currently I have:
> queue_sort_method                 load
> job_load_adjustments              NONE
> load_adjustment_decay_time        0:0:00
> load_formula                      -slots

what about using the value of a custom load sensor here? It would check the 
number of used slots on a machine (`qhost -F slots -h node01`), and half-loaded 
machines should report 0 again instead of the usual -12. You could even 
return a (fake) 1, so that the 8-core machines are definitely filled first.

$ qhost -F slots -h node01 | sed -n '/^    Host Resource(s):/s/.*=\([0-9]*\)\..*/\1/p'

But `slots` must also be attached to each exechost (and use $HOSTNAME instead 
of the example node01), like it's done to prevent oversubscription. The other 
option is to get it out of `qstat -F slots`.
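Such a sensor could look like the sketch below (the complex name `adj_slots`, 
the `--daemon` flag and the hard-coded total of 24 slots are just placeholders 
for this example, not anything SGE defines):

```shell
#!/bin/sh
# Sketch of a custom load sensor. Assumptions: `slots` is attached to every
# exechost as a consumable, and the qhost -F output matches the format above.

# Extract the remaining slot count from `qhost -F slots` output on stdin,
# e.g. "    Host Resource(s):   hc:slots=4.000000" -> 4
used_slots() {
    sed -n '/^    Host Resource(s):/s/.*=\([0-9]*\)\..*/\1/p'
}

# Turn the remaining slots into the reported value: behave like the plain
# "-slots" load_formula while the host is less than half full, but report
# a (fake) 1 once half of the slots are in use.
sensor_value() {
    free=$1     # free slots as reported by qhost
    total=$2    # total slots configured on this host
    used=$((total - free))
    if [ "$used" -ge $((total / 2)) ]; then
        echo 1          # looks busier than any empty 8-core node
    else
        echo "-$free"   # same value the default -slots formula yields
    fi
}

# Standard load sensor protocol loop (begin/end around host:complex:value);
# only entered with --daemon so the functions can also be run standalone.
if [ "${1:-}" = "--daemon" ]; then
    TOTAL=24
    while read -r cmd; do
        [ "$cmd" = "quit" ] && exit 0
        free=$(qhost -F slots -h "$HOSTNAME" | used_slots)
        echo "begin"
        echo "$HOSTNAME:adj_slots:$(sensor_value "$free" "$TOTAL")"
        echo "end"
    done
fi

# Example: 4 of 24 slots free -> more than half full -> prints 1
sensor_value 4 24
```

The scheduler then sorts on `adj_slots` instead of `-slots`, so a Westmere 
node with 12 of its 24 slots taken no longer looks emptier than an idle 
8-core node.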

-- Reuti


> On a system with uniform nodes this would operate just as I want, and with 
> minimal scheduling load (we bang lots of jobs through this).
> 
> An example of the problem is one of my systems.  It has 2 sets of nodes.
> set 1 has 14 machines with older Xeon processors, 8 cores per node.
> set 2 has 14 machines with Westmere processors, 12 cores per node, but for 
> our application HT gives a 15-20% overall throughput boost so they look like 
> 24 cores per node.
> 
> Until you get about 12 jobs assigned to one of the Westmere nodes, they are 
> faster than the older ones.  Above that, they are slower.
> 
> So, the ideal way to distribute jobs would be to assign up to 12 jobs per 
> node on the set 2 machines, then assign up to 8 jobs per node to the set 1 
> ones, then finish filling up the 12 HT slots on the set 2 machines.
> 
> Much of the time this isn't a real problem, since for large jobs everything 
> is fully saturated, and we get the full throughput.  But if we have fewer 
> than a full load I would prefer it to be better than the current scheme.
> 
> Any suggestions would be appreciated.
> 
> Bill
> _______________________________________________
> users mailing list
> [email protected]
> https://gridengine.org/mailman/listinfo/users

