Hi,

On 29.09.2011 at 15:02, Fabio Martinelli wrote:

Dear Sun Grid Engine colleagues

is there a way to replay a past workflow by using information like that stored inside the ARCO DB, the reporting file or the accounting file?

obviously with a shorter time scale; we don't care about memory consumption and I/O, just slot assignment.

for this you will need to set up a share-tree policy. If you don't want the actually consumed CPU usage to be taken into account, you can set ACCT_RESERVED_USAGE and SHARETREE_RESERVED_USAGE in SGE's configuration (man sge_conf). The recorded usage then reflects the granted CPU time (and not the actually used one), so you can divide that CPU time by the wallclock time and get the slot count.
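As an (untested) sketch of that computation: the script below walks the plain-text accounting file and divides the recorded cpu value by ru_wallclock for each job. The field positions and the file path are assumptions on my side, based on my reading of accounting(5) - please check the man page of your version before relying on them.

    #!/usr/bin/env python
    # Minimal sketch: with ACCT_RESERVED_USAGE set, the "cpu" value in the
    # accounting file reflects granted CPU time (wallclock * slots), so
    # dividing it by ru_wallclock recovers an approximate slot count per job.
    #
    # Assumed field positions (0-based), to be verified against accounting(5):
    #   3 = owner, 5 = job_number, 13 = ru_wallclock, 36 = cpu

    ACCOUNTING_FILE = "/usr/sge/default/common/accounting"  # example path

    with open(ACCOUNTING_FILE) as f:
        for line in f:
            if line.startswith("#"):              # skip comment header lines
                continue
            fields = line.rstrip("\n").split(":")
            if len(fields) < 37:
                continue
            owner, job_number = fields[3], fields[5]
            wallclock = float(fields[13])
            cpu = float(fields[36])
            if wallclock <= 0:
                continue
            print("job %s (%s): approx. %.1f slots"
                  % (job_number, owner, cpu / wallclock))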


basically we need to understand whether the current scheduler policy is "fair", at least according to our personal concept of "fairness";

Well, this is only your personal view. What do you judge as "fair"? The same slot count in the cluster for currently running jobs? The same slot count over the last 7 days, independent of how long the jobs ran?
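If you mean the latter, something like the following (again untested, with field positions assumed from my reading of accounting(5)) would sum slot-hours per user for jobs that finished in the last 7 days, which you could then compare across users:

    #!/usr/bin/env python
    # Minimal sketch: sum slot-seconds per user over the last 7 days from the
    # accounting file, as one possible way to quantify per-user usage.
    # Assumed field positions (0-based), to be verified against accounting(5):
    #   3 = owner, 10 = end_time, 13 = ru_wallclock, 34 = slots

    import time
    from collections import defaultdict

    ACCOUNTING_FILE = "/usr/sge/default/common/accounting"  # example path
    WINDOW = 7 * 24 * 3600
    now = time.time()

    slot_seconds = defaultdict(float)

    with open(ACCOUNTING_FILE) as f:
        for line in f:
            if line.startswith("#"):
                continue
            fields = line.rstrip("\n").split(":")
            if len(fields) < 35:
                continue
            end_time = float(fields[10])
            if end_time < now - WINDOW:       # only jobs finished in the window
                continue
            wallclock = float(fields[13])
            slots = float(fields[34])
            slot_seconds[fields[3]] += slots * wallclock

    for owner, usage in sorted(slot_seconds.items(), key=lambda x: -x[1]):
        print("%-12s %.0f slot-hours" % (owner, usage / 3600.0))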

-- Reuti



so far it's unfair, and as a reaction we should tune some scheduler parameters and observe the Sun Grid Engine behavior, but as jobs take hours or days to complete, this tuning process simply takes too long to manage.

just to cite a concrete solution: I have never tried this Moab Simulator
http://www.adaptivecomputing.com/resources/docs/mwm/6-1/Content/topics/analyzing/simulations.html
but the 'simulation' concept and the related tools seem to be there, so I wonder what the situation is for {Sun} Grid Engine.

I'm using 6.2-5 on 64-bit Linux.

many thanks,
best regards
Fabio Martinelli




_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users
