Re: [OMPI users] How to check OMPI is using IB or not?

2010-01-27 Thread Sangamesh B
Thanks Brett for the useful information. On Wed, Jan 27, 2010 at 12:40 PM, Brett Pemberton <br...@vpac.org> wrote: > > - "Sangamesh B" <forum@gmail.com> wrote: > > > Hi all, > > > > If an infiniband network is configured successfully, how

[OMPI users] How to check OMPI is using IB or not?

2010-01-27 Thread Sangamesh B
Hi all, If an infiniband network is configured successfully, how to confirm that Open MPI is using infiniband, not the other ethernet network available? In earlier versions, I've seen that if OMPI is running on ethernet, it gives a warning that it's running on a slower network. Is this available in
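
A common way to confirm the transport at run time (a sketch, assuming OMPI 1.3-era MCA parameters; process count and binary name are illustrative): force the openib BTL so the job fails outright if IB is unusable, or raise BTL verbosity to log which components are selected:

    # Fail instead of silently falling back to ethernet:
    mpirun --mca btl openib,self,sm -np 8 ./a.out
    # Log BTL component selection:
    mpirun --mca btl_base_verbose 30 -np 8 ./a.out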

Re: [OMPI users] problem with progress thread and orte

2010-01-12 Thread Sangamesh B
Hi, What are the advantages with progress-threads feature? Thanks, Sangamesh On Fri, Jan 8, 2010 at 10:13 PM, Ralph Castain wrote: > Yeah, the system doesn't currently support enable-progress-threads. It is a > two-fold problem: ORTE won't work that way, and some parts

[OMPI users] Is OpenMPI's orted = MPICH2's smpd?

2009-12-21 Thread Sangamesh B
Hi, MPICH2 has different process managers: MPD, SMPD, GFORKER etc. Is Open MPI's startup daemon orted similar to MPICH2's smpd? Or something else? Thanks, Sangamesh

Re: [OMPI users] Job fails after hours of running on a specific node

2009-12-07 Thread Sangamesh B
ug I propose you use ibdiagnet, it is an open source IB > network diagnostic tool : > http://linux.die.net/man/1/ibdiagnet > The tool is part of the OFED distribution. > > Pasha. > > > Sangamesh B wrote: > >> Dear all, >> The CPMD application which is compiled wit
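
For reference, a minimal ibdiagnet invocation (a sketch; see the man page linked above for options):

    # Run from any host with OFED installed; scans the whole fabric:
    ibdiagnet
    # Reports and logs are written under /tmp/ibdiagnet* by default.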

[OMPI users] With IMPI works fine, with OMPI fails

2009-10-28 Thread Sangamesh B
Hi all, The compilation of a fortran application - CPMD-3.13.2 - with OpenMP + OpenMPI-1.3.3 + ifort-10.1 + MKL-10.0 is failing with the following error on a Rocks-5.1 Linux cluster: /lib/cpp -P -C -traditional -D__Linux -D__PGI -DFFT_DEFAULT -DPOINTER8 -DLINUX_IFC -DPARALLEL -DMYRINET

[OMPI users] OMPI-1.2.0 is not getting installed

2009-10-20 Thread Sangamesh B
Hi, It's required here to install Open MPI 1.2 on an HPC cluster with CentOS 5.2 Linux, a Mellanox IB card and switch, and OFED-1.4. But the configure is failing with: [root@master openmpi-1.2]# ./configure --prefix=/opt/mpi/openmpi/1.2/intel --with-openib=/usr .. ... --- MCA component
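
When configure rejects the openib component, config.log usually names the missing header or library; a sketch of the usual checks, with paths taken from the post:

    ./configure --prefix=/opt/mpi/openmpi/1.2/intel --with-openib=/usr
    # On failure, inspect the verbs probe:
    grep -i openib config.log | tail
    # The openib BTL needs the verbs development headers, e.g.:
    ls /usr/include/infiniband/verbs.h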

Re: [OMPI users] Openmpi not using IB and no warning message

2009-10-15 Thread Sangamesh B
tests to validate the IB network? > > george. > > > On Oct 12, 2009, at 03:38 , Sangamesh B wrote: > > Any hint for the previous mail? >> >> Does Open MPI-1.3.3 support only a limited versions of OFED? >> Or any version is ok? >> On Sun, Oct 11, 2009 at 3:5

Re: [OMPI users] Openmpi not using IB and no warning message

2009-10-12 Thread Sangamesh B
Any hint for the previous mail? Does Open MPI-1.3.3 support only a limited set of OFED versions? Or is any version OK? On Sun, Oct 11, 2009 at 3:55 PM, Sangamesh B <forum@gmail.com> wrote: > Hi, > > A fortran application is installed with Intel Fortran 10.1, MKL-10 and > Openmp

[OMPI users] Openmpi not using IB and no warning message

2009-10-11 Thread Sangamesh B
Hi, A fortran application is installed with Intel Fortran 10.1, MKL-10 and Openmpi-1.3.3 on a Rocks-5.1 HPC Linux cluster. The jobs do not scale when more than one node is used. The cluster has dual quad-core Intel Xeon (E5472) @ 3.00 GHz processors (total 8 cores per node, 16 GB RAM) and
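
Before suspecting the MPI layer, it is worth confirming that the IB stack is up on every node; a sketch using standard OFED utilities (hostnames hypothetical):

    # Port state should be ACTIVE, not DOWN or INIT:
    ibv_devinfo | grep -E 'hca_id|state'
    # Quick RC point-to-point test between two nodes:
    ibv_rc_pingpong              # on node0
    ibv_rc_pingpong node0        # on node1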

[OMPI users] job fails with "Signal: Bus error (7)"

2009-10-01 Thread Sangamesh B
Hi, A fortran application which is compiled with ifort-10.1 and open mpi 1.3.1 on CentOS 5.2 fails after running for 4 days with the following error message: [compute-0-7:25430] *** Process received signal *** [compute-0-7:25433] *** Process received signal *** [compute-0-7:25433] Signal: Bus
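
When a long run dies with SIGBUS, a core file plus gdb is usually the quickest route to the faulting line (a sketch; the binary name is hypothetical):

    ulimit -c unlimited            # in the job script, before mpirun
    mpirun -np 8 ./app.x
    # After the crash, on the node that produced the core:
    gdb ./app.x core               # core file name depends on core_pattern
    (gdb) bt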

[OMPI users] Job fails after hours of running on a specific node

2009-09-20 Thread Sangamesh B
Dear all, The CPMD application which is compiled with OpenMPI-1.3 (Intel 10.1 compilers) on CentOS-4.5 fails only when a specific node, i.e. node-0-2, is involved, but runs well on other nodes. Initially the job failed after 5-10 mins (on node-0-2 + some other nodes). After googling

Re: [OMPI users] Lower performance on a Gigabit node compared to infiniband node

2009-03-12 Thread Sangamesh B
> running on a node could be the differentiating factors. > > The standard wat32 benchmark is a good test for a single node. You can find > our benchmarking results here if you want to compare yours: > http://www.cse.scitech.ac.uk/disco/dbd/index.html > > Regards,

[OMPI users] Lower performance on a Gigabit node compared to infiniband node

2009-03-09 Thread Sangamesh B
Dear Open MPI team, With Open MPI-1.3, the fortran application CPMD is installed on a Rocks-4.3 cluster - dual-processor quad-core Xeon @ 3 GHz (8 cores per node). Two jobs (4-process jobs) are run on two nodes separately - one node has an IB connection (4 GB RAM) and the other node has
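
To make the comparison explicit, the same binary can be pinned to each transport in turn (a sketch; BTL names per OMPI 1.3, input file name hypothetical):

    # Gigabit node, TCP only:
    mpirun --mca btl tcp,self,sm -np 4 ./cpmd.x input.in
    # IB node, openib only:
    mpirun --mca btl openib,self,sm -np 4 ./cpmd.x input.in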

Re: [OMPI users] Low performance of Open MPI-1.3 over Gigabit

2009-03-04 Thread Sangamesh B
2.23 SECONDS. No of nodes: 6, cores used per node: 4, total cores: 6*4 = 24. CPU TIME: 0 HOURS 51 MINUTES 50.41 SECONDS. ELAPSED TIME: 6 HOURS 6 MINUTES 38.67 SECONDS. Any help/suggestions to diagnose this problem? Thanks, Sangamesh On Wed, Feb 25, 2009 at 12:51 PM, Sangamesh B

Re: [OMPI users] Ompi runs thru cmd line but fails when run thru SGE

2009-02-26 Thread Sangamesh B
Hello Reuti, I'm sorry for the late response. On Mon, Jan 26, 2009 at 7:11 PM, Reuti <re...@staff.uni-marburg.de> wrote: > Am 25.01.2009 um 06:16 schrieb Sangamesh B: > >> Thanks Reuti for the reply. >> >> On Sun, Jan 25, 2009 at 2:22 AM, Reuti <re...@staff.

Re: [OMPI users] Fwd: [GE users] Open MPI job fails when run thru SGE

2009-02-01 Thread Sangamesh B
On Sun, Feb 1, 2009 at 10:37 PM, Reuti <re...@staff.uni-marburg.de> wrote: > Am 01.02.2009 um 16:00 schrieb Sangamesh B: > >> On Sat, Jan 31, 2009 at 6:27 PM, Reuti <re...@staff.uni-marburg.de> wrote: >>> >>> Am 31.01.2009 um 08:49 schrieb Sangamesh B:

Re: [OMPI users] Fwd: [GE users] Open MPI job fails when run thru SGE

2009-02-01 Thread Sangamesh B
On Sat, Jan 31, 2009 at 6:27 PM, Reuti <re...@staff.uni-marburg.de> wrote: > Am 31.01.2009 um 08:49 schrieb Sangamesh B: > >> On Fri, Jan 30, 2009 at 10:20 PM, Reuti <re...@staff.uni-marburg.de> >> wrote: >>> >>> Am 30.01.2009 um 15:02 schrieb Sanga

Re: [OMPI users] Ompi runs thru cmd line but fails when run thru SGE

2009-01-25 Thread Sangamesh B
ence causing the problem. ssh issues: between master & node: works fine but with some delay. between nodes: works fine, no delay. From the command line the open mpi jobs run with no error, even when the master node is not used in the hostfile. Thanks, Sangamesh > -- Reuti > > >> Jeremy Stou

Re: [OMPI users] Ompi runs thru cmd line but fails when run thru SGE

2009-01-24 Thread Sangamesh B
resolve this problem by adding "ulimit -l unlimited" near > the top of the SGE startup script on the computation nodes and > restarting SGE on every node. > > Jeremy Stout > > On Sat, Jan 24, 2009 at 6:06 AM, Sangamesh B <forum@gmail.com> wrote: >> Hello all
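
The fix described amounts to something like the following (a sketch; the startup script name and path vary between SGE installations):

    # Near the top of the SGE execd startup script on every compute node:
    ulimit -l unlimited    # unlimited locked memory for IB registration
    # Then restart the execution daemon, e.g. (init script name varies):
    /etc/init.d/sgeexecd restart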

[OMPI users] Ompi runs thru cmd line but fails when run thru SGE

2009-01-24 Thread Sangamesh B
Hello all, Open MPI 1.3 is installed on a Rocks 4.3 Linux cluster with SGE support, i.e. using --with-sge. But ompi_info shows only one component: # /opt/mpi/openmpi/1.3/intel/bin/ompi_info | grep gridengine MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.3) Is this
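
For context, a minimal tight-integration job under the orte PE looks like this (a sketch; PE name and install path as in the thread, slot count illustrative). With tight integration, mpirun takes the host list from SGE, so no -machinefile is needed:

    #!/bin/bash
    #$ -N ompi-test
    #$ -pe orte 8
    #$ -cwd
    /opt/mpi/openmpi/1.3/intel/bin/mpirun -np $NSLOTS ./hellompi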

Re: [OMPI users] Cluster with IB hosts and Ethernet hosts

2009-01-23 Thread Sangamesh B
Any solution for the following problem? On Fri, Jan 23, 2009 at 7:58 PM, Sangamesh B <forum@gmail.com> wrote: > On Fri, Jan 23, 2009 at 5:41 PM, Jeff Squyres <jsquy...@cisco.com> wrote: >> On Jan 22, 2009, at 11:26 PM, Sangamesh B wrote: >> >>> We've a c

Re: [OMPI users] Cluster with IB hosts and Ethernet hosts

2009-01-23 Thread Sangamesh B
On Fri, Jan 23, 2009 at 5:41 PM, Jeff Squyres <jsquy...@cisco.com> wrote: > On Jan 22, 2009, at 11:26 PM, Sangamesh B wrote: > >> We've a cluster with 23 nodes connected to IB switch and 8 nodes >> connected to ethernet switch. Master node is also connected to IB

[OMPI users] Cluster with IB hosts and Ethernet hosts

2009-01-22 Thread Sangamesh B
Hello all, We've a cluster with 23 nodes connected to an IB switch and 8 nodes connected to an ethernet switch. The master node is also connected to the IB switch. SGE (with tight integration, -pe orte) is used for parallel/serial job submission. Open MPI-1.3 is installed on the master node with IB
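
One hedged approach for such a mixed fabric: list both BTLs and let each pair of peers negotiate the fastest common transport, constraining TCP to the right interface if several exist (a sketch; OMPI 1.3 syntax, interface name hypothetical):

    # IB between IB hosts, TCP elsewhere:
    mpirun --mca btl openib,tcp,self,sm -np 16 ./app
    # Pin TCP traffic to a specific interface:
    mpirun --mca btl_tcp_if_include eth0 --mca btl openib,tcp,self,sm -np 16 ./app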

[OMPI users] HP CQ with status LOCAL LENGTH ERROR

2008-12-29 Thread Sangamesh B
Hello all, MPI-Blast-PIO-1.5.0 is installed with Open MPI 1.2.8 + intel 10 compilers on Rocks-4.3 + Voltaire Infiniband + Voltaire Grid stack OFA roll. The 8-process parallel job is submitted through SGE: $ cat sge_submit.sh #!/bin/bash #$ -N OMPI-Blast-Job #$ -S /bin/bash #$ -cwd #$ -e

Re: [OMPI users] mpiblast + openmpi + gridengine job fails to run

2008-12-24 Thread Sangamesh B
23, 2008 at 4:45 PM, Reuti <re...@staff.uni-marburg.de> wrote: > Hi, > > Am 23.12.2008 um 12:03 schrieb Sangamesh B: > >> Hello, >> >> I've compiled MPIBLAST-1.5.0-pio app on Rocks 4.3,Voltaire >> infiniband based Linux cluster using Open MPI-

[OMPI users] mpiblast + openmpi + gridengine job fails to run

2008-12-23 Thread Sangamesh B
Hello, I've compiled the MPIBLAST-1.5.0-pio app on a Rocks 4.3, Voltaire infiniband based Linux cluster using Open MPI-1.2.8 + intel 10 compilers. The job is not running. Let me explain the configs: SGE job script: $ cat sge_submit.sh #!/bin/bash #$ -N OMPI-Blast-Job #$ -S /bin/bash #$ -cwd
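
The truncated script presumably ends in an mpirun line; a generic sketch of an mpiBLAST launch under SGE (database, query, and output names hypothetical):

    #$ -pe orte 8
    mpirun -np $NSLOTS mpiblast -p blastp -d mydb -i query.fa -o result.txt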

Re: [OMPI users] Problem with feupdateenv

2008-12-10 Thread Sangamesh B
ags or > command line and that should get rid of that, if it bugs you. Someone else > can, I'm sure, explain in far more detail what the issue there is. > > Hope that helps.. if not, post the output of 'ldd hellompi' here, as well > as an 'ls /opt/openmpi_intel/1.2.8/'

[OMPI users] Problem with feupdateenv

2008-12-07 Thread Sangamesh B
Hello all, Installed Open MPI 1.2.8 with Intel C++ compilers on a CentOS 4.5 based Rocks 4.3 Linux cluster (& Voltaire infiniband). Installation was smooth. The following error occurred during compilation: # mpicc hellompi.c -o hellompi /opt/intel/cce/10.1.018/lib/libimf.so: warning: warning:
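
The warning comes from Intel's libimf being linked in statically; a commonly suggested workaround for icc/ifort 10.x (hedged; it does not affect program results) is to link the Intel runtime dynamically:

    mpicc hellompi.c -o hellompi -shared-intel
    # On older Intel 9.x/10.0 compilers the equivalent flag was -i-dynamic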

[OMPI users] OpenMPI-1.2.7 + SGE

2008-11-04 Thread Sangamesh B
Hi all, In a Rocks-5.0 cluster, OpenMPI-1.2.6 comes by default. I guess it gets installed through RPM. # /opt/openmpi/bin/ompi_info | grep gridengine MCA ras: gridengine (MCA v1.0, API v1.3, Component v1.2.6) MCA pls: gridengine (MCA v1.0, API v1.3,
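
For reference, SGE tight integration hinges on the parallel environment definition; a typical orte PE, viewable with qconf, looks roughly like this (a sketch of commonly used settings):

    $ qconf -sp orte
    pe_name            orte
    slots              999
    allocation_rule    $fill_up
    control_slaves     TRUE
    job_is_first_task  FALSE
    start_proc_args    /bin/true
    stop_proc_args     /bin/true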

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-25 Thread Sangamesh B
On Fri, Oct 24, 2008 at 11:26 PM, Eugene Loh <eugene@sun.com> wrote: > Sangamesh B wrote: > >> I reinstalled all software with -O3 optimization. Following are the >> performance numbers for a 4-process job on a single node: >> >> MPICH2: 26 m 54 s >
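
Rebuilding "with -O3" typically means passing the flags at configure time (a sketch; the prefix and make parallelism are illustrative):

    ./configure CFLAGS=-O3 CXXFLAGS=-O3 FFLAGS=-O3 FCFLAGS=-O3 \
                --prefix=/opt/openmpi-1.2.7
    make -j4 && make install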

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-15 Thread Sangamesh B
On Fri, Oct 10, 2008 at 10:40 PM, Brian Dobbins wrote: > > Hi guys, > > On Fri, Oct 10, 2008 at 12:57 PM, Brock Palen wrote: > >> Actually I had much different results, >> >> gromacs-3.3.1 one node dual core dual socket opt2218 openmpi-1.2.7 >> pgi/7.2

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Sangamesh B
On Thu, Oct 9, 2008 at 5:40 AM, Jeff Squyres wrote: > On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote: > > Make sure you don't use a "debug" build of Open MPI. If you use trunk, the >> build system detects it and turns on debug by default. It really kills >>
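
A quick way to check whether an installation is a debug build, since ompi_info reports it directly (a sketch; the exact label varies slightly across versions):

    ompi_info | grep -i debug
    #   Internal debug support: no    <- "no" is what you want for benchmarks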

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Sangamesh B
On Thu, Oct 9, 2008 at 2:39 AM, Brian Dobbins wrote: > > Hi guys, > > [From Eugene Loh:] > >> OpenMPI - 25 m 39 s. >>> MPICH2 - 15 m 53 s. >>> >> With regards to your issue, do you have any indication when you get that >> 25m39s timing if there is a grotesque amount of time

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
FYI, attached here are the OpenMPI install details. On Wed, Oct 8, 2008 at 7:56 PM, Sangamesh B <forum@gmail.com> wrote: > > > On Wed, Oct 8, 2008 at 7:16 PM, Jeff Squyres <jsquy...@cisco.com> wrote: > >> On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote: >> >

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
On Wed, Oct 8, 2008 at 7:16 PM, Jeff Squyres <jsquy...@cisco.com> wrote: > On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote: > >> I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI >> supports both ethernet and infiniband. Before doing that I tested an

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
@umich.edu > (734)936-1985 > > > > > On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote: > > Hi All, >> >> I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI >> supports both ethernet and infiniband. Before doing that I tested an >>

[OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
Hi All, I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI supports both ethernet and infiniband. Before doing that I tested an application 'GROMACS' to compare the performance of MPICH2 & OpenMPI. Both have been compiled with GNU compilers. After this benchmark, I came to