Dear Colleagues,

Yes, I have SHARETREE_RESERVED_USAGE=true, and on my cluster users may or may not request h_vmem at qsub time.

So far I'm using these RPMs on the master:
# rpm -qa | grep sge
sun-sge-common-6.2-5
sun-sge-bin-linux24-x64-6.2-5
sun-sge-arco-6.2-5
sun-sge-inspect-6.2-5

and these on the compute nodes:
# rpm -qa | grep sge
sun-sge-bin-linux24-x64-6.2-5
sun-sge-common-6.2-5

So, is it safe and recommended to upgrade them to the RPMs available at http://arc.liv.ac.uk/downloads/SGE/packages/RH5/ ?

Thanks again for your valuable answers.
Regards,
Fabio


Joachim Gabler <jgabler at univa.com> writes:

> Hi Fabio,
>
> do you have global config, qmaster_params, SHARETREE_RESERVED_USAGE
> configured?
> In this case sge_execd doesn't report the actual values, but
> - wallclock * number of slots as cpu and
> - requested memory * number of slots * wallclock as mem.
>
>If a job has no memory request I would expect it to report 0 as mem.
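The accounting rule described above can be sketched as follows. This is only an illustration of the stated formulas, not SGE's actual implementation; the function name and the choice of units (seconds for wallclock, the h_vmem request taken at face value) are assumptions for the example:

```python
# Sketch of SHARETREE_RESERVED_USAGE accounting as described above:
#   cpu = wallclock * slots
#   mem = requested memory * slots * wallclock (0 if nothing was requested)

def reserved_usage(wallclock_s, slots, h_vmem=None):
    """Return (cpu, mem) as sge_execd would report them when
    SHARETREE_RESERVED_USAGE is set; h_vmem=None models a job
    with no memory request, which reports 0 as mem."""
    cpu = wallclock_s * slots
    mem = (h_vmem or 0) * slots * wallclock_s
    return cpu, mem

# A 4-slot job that ran 3600 s with h_vmem=2 (GB, say):
print(reserved_usage(3600, 4, 2))   # (14400, 28800)
# The same job with no h_vmem request reports mem = 0:
print(reserved_usage(3600, 4))      # (14400, 0)
```

This matches the observation in the quoted mail: a job without a memory request shows 0 as mem regardless of what it actually used.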

Ah, I should have thought of that.  Other effects of the reserved_usage
settings were on my list of things to check, as they're undocumented (for
the next hour or so).

The next issue may be 6.2u5 reporting vmem incorrectly on 64-bit
GNU/Linux.  There's a kludge for it in the rpms at
http://arc.liv.ac.uk/downloads/SGE/packages/RH5/ or a proper fix in the
v8 sources.

_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users