No sir,

These lines were added only to the front end; I have checked again
and none of the compute nodes had them.

Best regards,
g

--
Gowtham
Information Technology Services
Michigan Technological University

(906) 487/3593
http://www.it.mtu.edu/


On Wed, 12 Dec 2012, Reuti wrote:

| On 12.12.2012 at 14:04, Gowtham wrote:
| 
| > From my cluster notes, I noticed that during ArcGIS 10.1 server
| > installation, it had prompted us to include the following lines
| > in '/etc/security/limits.conf'
| > 
| >  root soft nofile 65535
| >  root hard nofile 65535
| >  root soft nproc 25059
| >  root hard nproc 25059
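| > 
| > For reference, a quick way to confirm what limits a fresh root login
| > actually picks up after such an edit (assuming a bash shell) is:
| > 
| >  ulimit -Sn ; ulimit -Hn   # soft/hard open-files limit (nofile)
| >  ulimit -Su ; ulimit -Hu   # soft/hard process limit (nproc)
| > 
| > which should report 65535 and 25059 respectively while the entries
| > are active.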
| > 
| > The '-nan' error in qstat started showing only after the ArcGIS
| > installation. So, I commented out the above lines and restarted
| > the front end. 
| > 
| > '-nan' does not show up any more and the priorities are displayed
| > as before.
| 
| Nasty.
| 
| The entries were added to the front end in addition to the nodes? I wonder
| how this could produce a "-nan" output, which usually refers to an illegal
| bit combination in the referenced place in memory.
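| 
| For what it's worth, a quick illustration of how an overflow can end up as
| NaN (assuming any Python on the front end):
| 
|  python -c 'print(float("inf") - float("inf"))'   # inf - inf evaluates to nan
| 
| glibc's printf() then renders such a value as "nan", or as "-nan" when the
| sign bit happens to be set.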
| 
| -- Reuti
| 
| 
| > Best regards,
| > g
| > 
| > --
| > Gowtham
| > Information Technology Services
| > Michigan Technological University
| > 
| > (906) 487/3593
| > http://www.it.mtu.edu/
| > 
| > 
| > On Tue, 11 Dec 2012, Reuti wrote:
| > 
| > | On 10.12.2012 at 16:07, Gowtham wrote:
| > | 
| > | > 
| > | > While performing routine weekly checks on our test cluster (it runs
| > | > Rocks 6.0 with CentOS 6.2), I noticed that every job's priority is
| > | > currently displayed as '-nan' (both running and waiting).
| > | > 
| > | > Uncertain of what caused this issue, I submitted a test job
| > | > 'hello_world_serial.sh'. It seems to start out by indicating the correct
| > | > priority but soon starts reporting '-nan'. The screenshots for the
| > | > test job are here:
| > | > 
| > | >  http://sgowtham.net/misc/sge/qstat_JobPriority_Correct.png
| > | >  http://sgowtham.net/misc/sge/qstat_JobPriority_NaN.png
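| > | > 
| > | > In case the screenshots are unavailable, roughly the same information
| > | > can be captured in plain text with standard qstat options, e.g.:
| > | > 
| > | >  qstat -u '*' -pri   # per-job priority, urgency and ticket columns
| > | >  qstat -u '*' -urg   # urgency contributions (resources, wait time, deadline)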
| > | 
| > | What values do you usually expect here, i.e. could it be an overflow due
| > | to the set weights in the scheduler configuration?
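| > | 
| > | The weights in question can be dumped with, e.g.:
| > | 
| > |  qconf -ssconf | grep weight_
| > | 
| > | If I remember correctly, the displayed priority is roughly
| > | weight_priority * npprior + weight_urgency * nurg + weight_ticket * ntckts,
| > | so a single extreme weight (or a broken normalization) would be enough
| > | to poison the whole column.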
| > | 
| > | -- Reuti
| > | 
| > | 
| > | > I have restarted the sgemaster daemon on the front end via
| > | > 
| > | >  /etc/init.d/sgemaster.wigner stop
| > | >  /etc/init.d/sgemaster.wigner start
| > | > 
| > | > 
| > | > It doesn't seem to have helped.
| > | > 
| > | > Please advise.
| > | > 
| > | > Thank you for your time and help.
| > | > g
| > | > 
| > | > --
| > | > Gowtham
| > | > Information Technology Services
| > | > Michigan Technological University
| > | > 
| > | > (906) 487/3593
| > | > http://www.it.mtu.edu/
| > | > 
| > | > _______________________________________________
| > | > users mailing list
| > | > [email protected]
| > | > https://gridengine.org/mailman/listinfo/users
| > | 
| > | 
| 
| 
_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users
