Re: [OMPI users] OpenMPI with SGE: "-np N" for mpirun needed?

2012-05-09 Thread Ricardo Reis
On Wed, 9 May 2012, Jiri Polach wrote: You might want to use a smaller number of processors than those made available by SGE. Thanks for replying. I can imagine that in some special cases it might be useful to request N processors from SGE and then use M
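A minimal sketch of that case, assuming a tight SGE integration; the parallel environment name "orte", the slot counts, and the executable name are illustrative only, not taken from the thread:

    #!/bin/sh
    #$ -pe orte 16        # request 16 slots (N) from SGE
    #$ -cwd
    # launch only 8 ranks (M < N); the remaining slots stay idle
    mpirun -np 8 ./my_app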

Re: [OMPI users] OpenMPI with SGE: "-np N" for mpirun needed?

2012-05-09 Thread Jiri Polach
Dear all, is "-np N" parameter needed for mpirun when running jobs under SGE environment? All examples in http://www.open-mpi.org/faq/?category=running#run-n1ge-or-sge show that "-np N" is used, but in my opinion it should be redundant: mpirun should determine all parameters from SGE

[OMPI users] OpenMPI with SGE: "-np N" for mpirun needed?

2012-05-09 Thread Jiri Polach
Dear all, is "-np N" parameter needed for mpirun when running jobs under SGE environment? All examples in http://www.open-mpi.org/faq/?category=running#run-n1ge-or-sge show that "-np N" is used, but in my opinion it should be redundant: mpirun should determine all parameters from SGE

Re: [OMPI users] OpenMPI and SGE

2009-06-25 Thread Ray Muno
As a follow-up, the problem was with host name resolution. The error was introduced with a change to the Rocks environment, which broke reverse lookups for host names. -- Ray Muno
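For illustration only (the host name and address below are made up), forward and reverse resolution can be compared on a node to confirm this kind of problem:

    getent hosts compute-0-0        # forward lookup: name -> address
    getent hosts 10.1.255.254      # reverse lookup: address -> name, should match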

Re: [OMPI users] OpenMPI and SGE

2009-06-23 Thread Ray Muno
Rolf Vandevaart wrote: >> >> PMGR_COLLECTIVE ERROR: unitialized MPI task: Missing required >> environment variable: MPIRUN_RANK >> PMGR_COLLECTIVE ERROR: PMGR_COLLECTIVE ERROR: unitialized MPI task: >> Missing required environment variable: MPIRUN_RANK >> > I do not recognize these errors as

Re: [OMPI users] OpenMPI and SGE

2009-06-23 Thread Rolf Vandevaart
Ray Muno wrote: Rolf Vandevaart wrote: Ray Muno wrote: Ray Muno wrote: We are running a cluster using Rocks 5.0 and OpenMPI 1.2 (primarily). Scheduling is done through SGE. MPI communication is over InfiniBand. We also have OpenMPI 1.3 installed and receive

Re: [OMPI users] OpenMPI and SGE

2009-06-23 Thread Ray Muno
Ray Muno wrote: > Tha give me How about "That gives me" > > PMGR_COLLECTIVE ERROR: unitialized MPI task: Missing required > environment variable: MPIRUN_RANK > PMGR_COLLECTIVE ERROR: PMGR_COLLECTIVE ERROR: unitialized MPI task: > Missing required environment variable: MPIRUN_RANK > >

Re: [OMPI users] OpenMPI and SGE

2009-06-23 Thread Ray Muno
Rolf Vandevaart wrote: > Ray Muno wrote: >> Ray Muno wrote: >> >>> We are running a cluster using Rocks 5.0 and OpenMPI 1.2 (primarily). >>> Scheduling is done through SGE. MPI communication is over InfiniBand. >>> >>> >> >> We also have OpenMPI 1.3 installed and receive similar errors.-

Re: [OMPI users] OpenMPI and SGE

2009-06-23 Thread Rolf Vandevaart
Ray Muno wrote: Ray Muno wrote: We are running a cluster using Rocks 5.0 and OpenMPI 1.2 (primarily). Scheduling is done through SGE. MPI communication is over InfiniBand. We also have OpenMPI 1.3 installed and receive similar errors.- This does sound like a problem with SGE.

[OMPI users] OpenMPI and SGE

2009-06-23 Thread Ray Muno
We are running a cluster using Rocks 5.0 and OpenMPI 1.2 (primarily). Scheduling is done through SGE. MPI communication is over InfiniBand. We have been running with this setup for over 9 months. Last week, all user jobs stopped executing (cluster load dropped to zero). Users can schedule jobs

Re: [OMPI users] OpenMPI-1.2.7 + SGE

2008-11-04 Thread Reuti
Hi, On 04.11.2008 at 16:54, Sangamesh B wrote: Hi all, In a Rocks-5.0 cluster, OpenMPI-1.2.6 comes by default. I guess it gets installed through rpm. # /opt/openmpi/bin/ompi_info | grep gridengine MCA ras: gridengine (MCA v1.0, API v1.3, Component v1.2.6)

[OMPI users] OpenMPI-1.2.7 + SGE

2008-11-04 Thread Sangamesh B
Hi all, In a Rocks-5.0 cluster, OpenMPI-1.2.6 comes by default. I guess it gets installed through rpm. # /opt/openmpi/bin/ompi_info | grep gridengine MCA ras: gridengine (MCA v1.0, API v1.3, Component v1.2.6) MCA pls: gridengine (MCA v1.0, API v1.3,
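If ompi_info does not list the gridengine components, a hedged rebuild sketch (relevant for the 1.3 series, where SGE support is no longer enabled by default; the install prefix is only an example):

    ./configure --prefix=/opt/openmpi --with-sge
    make all install
    /opt/openmpi/bin/ompi_info | grep gridengine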

Re: [OMPI users] Openmpi with SGE

2008-03-01 Thread Reuti
Hi, On 19.02.2008 at 12:49, Neeraj Chourasia wrote: I am facing a problem while calling mpirun in a loop when using it with SGE. My SGE version is SGE6.1AR_snapshot3. The script I am submitting via SGE is xx xxx

Re: [OMPI users] Openmpi with SGE

2008-02-21 Thread Pak Lui
I am not quite sure. It seems that your AR (advance reservation) snapshot3 build is a bit new, and it may be a problem coming from it. I am not quite familiar with this new SGE feature. I'd ping the gridengine list to check on that error message coming from execd. Neeraj Chourasia wrote:

[OMPI users] Openmpi with SGE

2008-02-20 Thread Neeraj Chourasia
Hello everyone, I am facing a problem while calling mpirun in a loop when using it with SGE. My SGE version is SGE6.1AR_snapshot3. The script I am submitting via SGE is: let i=0; while [ $i -lt 100 ]; do echo
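A cleaned-up sketch of what that loop likely looked like; the PE name, header directives, iteration count, and executable name are assumptions added here for illustration:

    #!/bin/bash
    #$ -pe orte 8
    #$ -cwd
    i=0
    while [ $i -lt 100 ]; do
        echo "iteration $i"
        mpirun ./my_app        # one mpirun launch per loop iteration
        i=$((i+1))
    done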