Thanks Brett for the useful information.
On Wed, Jan 27, 2010 at 12:40 PM, Brett Pemberton <br...@vpac.org> wrote:
>
> - "Sangamesh B" <forum@gmail.com> wrote:
>
> > Hi all,
> >
> > If an infiniband network is configured successfully, how
Hi all,
If an InfiniBand network is configured successfully, how does one confirm
that Open MPI is using InfiniBand, not another available Ethernet network?
In earlier versions, I've seen that if OMPI was running on Ethernet, it gave a
warning that it was running on a slower network. Is this available in
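One common way to confirm the transport is to restrict it via MCA parameters; a sketch, assuming the openib BTL of that era, a hostfile named `hosts`, and a placeholder executable:

```shell
# Fail at startup instead of silently falling back to TCP:
mpirun --mca btl openib,self -np 8 -hostfile hosts ./a.out

# Or exclude TCP explicitly and ask the BTL layer to report what it selects:
mpirun --mca btl ^tcp --mca btl_base_verbose 30 -np 8 -hostfile hosts ./a.out
```

If the openib BTL is unavailable, the first command aborts with an error, which is itself confirmation that IB was not being used.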
Hi,
What are the advantages with progress-threads feature?
Thanks,
Sangamesh
On Fri, Jan 8, 2010 at 10:13 PM, Ralph Castain wrote:
> Yeah, the system doesn't currently support enable-progress-threads. It is a
> two-fold problem: ORTE won't work that way, and some parts
Hi,
MPICH2 has different process managers: MPD, SMPD, GFORKER, etc. Is
Open MPI's startup daemon orted similar to MPICH2's smpd, or something
else?
Thanks,
Sangamesh
I propose you use ibdiagnet; it is an open-source IB
> network diagnostic tool:
> http://linux.die.net/man/1/ibdiagnet
> The tool is part of OFED distribution.
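A minimal check with the OFED tools might look like this (a sketch; ibdiagnet and ibstat are assumed to be installed as part of the OFED distribution):

```shell
ibstat       # shows port state, rate, and link width per HCA
ibdiagnet    # scans the whole fabric; errors are summarized at the end
```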
>
> Pasha.
>
>
> Sangamesh B wrote:
>
>> Dear all,
>> The CPMD application which is compiled with
Hi all,
The compilation of a Fortran application - CPMD-3.13.2 - with OpenMP +
OpenMPI-1.3.3 + ifort-10.1 + MKL-10.0 is failing with the following error on a
Rocks-5.1 Linux cluster:
/lib/cpp -P -C -traditional -D__Linux -D__PGI -DFFT_DEFAULT -DPOINTER8
-DLINUX_IFC -DPARALLEL -DMYRINET
Hi,
It's required here to install Open MPI 1.2 on an HPC cluster with CentOS
5.2 Linux, a Mellanox IB card, switch and OFED-1.4.
But the configure is failing with:
[root@master openmpi-1.2]# ./configure --prefix=/opt/mpi/openmpi/1.2/intel
--with-openib=/usr
..
...
--- MCA component
tests to validate the IB network?
>
> george.
>
>
> On Oct 12, 2009, at 03:38 , Sangamesh B wrote:
>
> Any hint for the previous mail?
>>
>> Does Open MPI-1.3.3 support only a limited set of OFED versions?
>> Or is any version OK?
>> On Sun, Oct 11, 2009 at 3:5
Any hint for the previous mail?
Does Open MPI-1.3.3 support only a limited set of OFED versions?
Or is any version OK?
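The installed OFED release can be checked directly on a node (a sketch; `ofed_info` ships with OFED installations):

```shell
ofed_info | head -1    # the first line carries the OFED version string
```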
On Sun, Oct 11, 2009 at 3:55 PM, Sangamesh B <forum@gmail.com> wrote:
> Hi,
>
> A fortran application is installed with Intel Fortran 10.1, MKL-10 and
> Openmp
Hi,
A Fortran application is installed with Intel Fortran 10.1, MKL-10 and
Open MPI-1.3.3 on a Rocks-5.1 HPC Linux cluster. The jobs are not scaling
when more than one node is used. The cluster has Intel quad-core Xeon
(E5472) @ 3.00GHz dual-processor nodes (total 8 cores per node, 16GB RAM) and
Hi,
A Fortran application which is compiled with ifort-10.1 and Open MPI
1.3.1 on CentOS 5.2 fails after running for 4 days with the following error
message:
[compute-0-7:25430] *** Process received signal ***
[compute-0-7:25433] *** Process received signal ***
[compute-0-7:25433] Signal: Bus
Dear all,
The CPMD application which is compiled with OpenMPI-1.3 (Intel 10.1
compilers) on CentOS-4.5 fails only when a specific node, i.e. node-0-2, is
involved, but runs well on other nodes.
Initially job failed after 5-10 mins (on node-0-2 + some other
nodes). After googling
> running on a node could be the differentiating factors.
>
> The standard wat32 benchmark is a good test for a single node. You can find
> our benchmarking results here if you want to compare yours
> http://www.cse.scitech.ac.uk/disco/dbd/index.html
>
> Regards,
&g
Dear Open MPI team,
With Open MPI-1.3, the Fortran application CPMD is installed on a
Rocks-4.3 cluster - dual-processor quad-core Xeon @ 3 GHz (8 cores
per node).
Two jobs (4-process jobs) are run on two nodes separately - one node
has an IB connection (4 GB RAM) and the other node has
2.23 SECONDS
No. of nodes: 6, cores used per node: 4, total cores: 6*4 = 24
CPU TIME : 0 HOURS 51 MINUTES 50.41 SECONDS
ELAPSED TIME : 6 HOURS 6 MINUTES 38.67 SECONDS
Any help/suggestions to diagnose this problem?
Thanks,
Sangamesh
On Wed, Feb 25, 2009 at 12:51 PM, Sangamesh B
Hello Reuti,
I'm sorry for the late response.
On Mon, Jan 26, 2009 at 7:11 PM, Reuti <re...@staff.uni-marburg.de> wrote:
> Am 25.01.2009 um 06:16 schrieb Sangamesh B:
>
>> Thanks Reuti for the reply.
>>
>> On Sun, Jan 25, 2009 at 2:22 AM, Reuti <re...@staff.
On Sun, Feb 1, 2009 at 10:37 PM, Reuti <re...@staff.uni-marburg.de> wrote:
> Am 01.02.2009 um 16:00 schrieb Sangamesh B:
>
>> On Sat, Jan 31, 2009 at 6:27 PM, Reuti <re...@staff.uni-marburg.de> wrote:
>>>
>>> Am 31.01.2009 um 08:49 schrieb Sangamesh B:
On Sat, Jan 31, 2009 at 6:27 PM, Reuti <re...@staff.uni-marburg.de> wrote:
> Am 31.01.2009 um 08:49 schrieb Sangamesh B:
>
>> On Fri, Jan 30, 2009 at 10:20 PM, Reuti <re...@staff.uni-marburg.de>
>> wrote:
>>>
>>> Am 30.01.2009 um 15:02 schrieb Sanga
ence causing the problem.
ssh issues:
between master & node: works fine, but with some delay.
between nodes: works fine, no delay.
From the command line the Open MPI jobs were run with no error, even when the
master node is not used in the hostfile.
Thanks,
Sangamesh
> -- Reuti
>
>
>> Jeremy Stou
resolve this problem by adding "ulimit -l unlimited" near
> the top of the SGE startup script on the computation nodes and
> restarting SGE on every node.
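The fix described above amounts to raising the locked-memory limit for the SGE execution daemon, since InfiniBand needs to register (pin) memory; a sketch, with an assumed init-script path that varies by SGE install:

```shell
# Near the top of the SGE execd startup script on each compute node,
# e.g. /etc/init.d/sgeexecd (path is an assumption):
ulimit -l unlimited    # let IB register as much memory as it needs
```

The daemons must then be restarted on every node so that running execds pick up the new limit.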
>
> Jeremy Stout
>
> On Sat, Jan 24, 2009 at 6:06 AM, Sangamesh B <forum@gmail.com> wrote:
>> Hello all
Hello all,
Open MPI 1.3 is installed on a Rocks 4.3 Linux cluster with support for
SGE, i.e. using --with-sge.
But ompi_info shows only one gridengine component:
# /opt/mpi/openmpi/1.3/intel/bin/ompi_info | grep gridengine
MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.3)
Is this
Any solution for the following problem?
On Fri, Jan 23, 2009 at 7:58 PM, Sangamesh B <forum@gmail.com> wrote:
> On Fri, Jan 23, 2009 at 5:41 PM, Jeff Squyres <jsquy...@cisco.com> wrote:
>> On Jan 22, 2009, at 11:26 PM, Sangamesh B wrote:
>>
>>> We've a c
On Fri, Jan 23, 2009 at 5:41 PM, Jeff Squyres <jsquy...@cisco.com> wrote:
> On Jan 22, 2009, at 11:26 PM, Sangamesh B wrote:
>
>> We've a cluster with 23 nodes connected to an IB switch and 8 nodes
>> connected to an ethernet switch. The master node is also connected to the IB
&
Hello all,
We've a cluster with 23 nodes connected to an IB switch and 8 nodes
connected to an ethernet switch. The master node is also connected to the IB
switch. SGE (with tight integration, -pe orte) is used for
parallel/serial job submission.
Open MPI-1.3 is installed on master node with IB
Hello all,
The MPI-Blast-PIO-1.5.0 is installed with Open MPI 1.2.8 + Intel 10
compilers on Rocks-4.3 + Voltaire Infiniband + Voltaire Grid Stack OFA
roll.
The 8-process parallel job is submitted through SGE:
$ cat sge_submit.sh
#!/bin/bash
#$ -N OMPI-Blast-Job
#$ -S /bin/bash
#$ -cwd
#$ -e
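For comparison, a complete submit script of this shape would typically look like the following (a sketch; the error/output file names, the PE name `orte`, the slot count, and the binary path are all assumptions):

```shell
#!/bin/bash
#$ -N OMPI-Blast-Job
#$ -S /bin/bash
#$ -cwd
#$ -e blast.err        # stderr file (assumed name)
#$ -o blast.out        # stdout file (assumed name)
#$ -pe orte 8          # request 8 slots from the 'orte' parallel environment

# With SGE tight integration, mpirun obtains the host list from SGE itself:
mpirun -np $NSLOTS ./mpiblast_binary
```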
23, 2008 at 4:45 PM, Reuti <re...@staff.uni-marburg.de> wrote:
> Hi,
>
> Am 23.12.2008 um 12:03 schrieb Sangamesh B:
>
>> Hello,
>>
>> I've compiled MPIBLAST-1.5.0-pio app on Rocks 4.3,Voltaire
>> infiniband based Linux cluster using Open MPI-
Hello,
I've compiled the MPIBLAST-1.5.0-pio app on a Rocks 4.3, Voltaire
infiniband based Linux cluster using Open MPI-1.2.8 + Intel 10
compilers.
The job is not running. Let me explain the configs:
SGE job script:
$ cat sge_submit.sh
#!/bin/bash
#$ -N OMPI-Blast-Job
#$ -S /bin/bash
#$ -cwd
ags or
> command line and that should get rid of that, if it bugs you. Someone else
> can, I'm sure, explain in far more detail what the issue there is.
>
> Hope that helps.. if not, post the output of 'ldd hellompi' here, as well
> as an 'ls /opt/openmpi_intel/1.2.8/'
>
&
Hello all,
Installed Open MPI 1.2.8 with the Intel C++ compilers on a CentOS 4.5 based
Rocks 4.3 Linux cluster (& Voltaire infiniband). Installation was
smooth.
The following error occurred during compilation:
# mpicc hellompi.c -o hellompi
/opt/intel/cce/10.1.018/lib/libimf.so: warning: warning:
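One commonly suggested check for that libimf.so warning is to make sure the Intel runtime directory is on the runtime library path and to inspect the linkage; a sketch, assuming the install path shown above (the `-i_dynamic` flag mentioned in comments is an assumption about Intel 10.x compilers):

```shell
export LD_LIBRARY_PATH=/opt/intel/cce/10.1.018/lib:$LD_LIBRARY_PATH
mpicc hellompi.c -o hellompi   # adding -i_dynamic is often suggested here
ldd hellompi                   # confirm libimf.so resolves to the Intel dir
```

The warning itself is generally harmless; checking `ldd` output is mainly a sanity test that the right Intel libraries are picked up at run time.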
Hi all,
In a Rocks-5.0 cluster, OpenMPI-1.2.6 comes by default. I guess it
gets installed through RPM.
# /opt/openmpi/bin/ompi_info | grep gridengine
MCA ras: gridengine (MCA v1.0, API v1.3, Component v1.2.6)
MCA pls: gridengine (MCA v1.0, API v1.3,
On Fri, Oct 24, 2008 at 11:26 PM, Eugene Loh <eugene@sun.com> wrote:
> Sangamesh B wrote:
>
>> I reinstalled all the software with -O3 optimization. The following are the
>> performance numbers for a 4-process job on a single node:
>>
>> MPICH2: 26 m 54 s
>
On Fri, Oct 10, 2008 at 10:40 PM, Brian Dobbins wrote:
>
> Hi guys,
>
> On Fri, Oct 10, 2008 at 12:57 PM, Brock Palen wrote:
>
>> Actually I had much different results,
>>
>> gromacs-3.3.1 one node dual core dual socket opt2218 openmpi-1.2.7
>> pgi/7.2
On Thu, Oct 9, 2008 at 5:40 AM, Jeff Squyres wrote:
> On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote:
>
> Make sure you don't use a "debug" build of Open MPI. If you use trunk, the
>> build system detects it and turns on debug by default. It really kills
>>
On Thu, Oct 9, 2008 at 2:39 AM, Brian Dobbins wrote:
>
> Hi guys,
>
> [From Eugene Loh:]
>
>> OpenMPI - 25 m 39 s.
>>> MPICH2 - 15 m 53 s.
>>>
>> With regards to your issue, do you have any indication when you get that
>> 25m39s timing if there is a grotesque amount of time
FYI, attached here are the OpenMPI install details.
On Wed, Oct 8, 2008 at 7:56 PM, Sangamesh B <forum@gmail.com> wrote:
>
>
> On Wed, Oct 8, 2008 at 7:16 PM, Jeff Squyres <jsquy...@cisco.com> wrote:
>
>> On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
>>
>
On Wed, Oct 8, 2008 at 7:16 PM, Jeff Squyres <jsquy...@cisco.com> wrote:
> On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
>
>> I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI
>> supports both ethernet and infiniband. Before doing that I tested an
&g
@umich.edu
> (734)936-1985
>
>
>
>
> On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
>
> Hi All,
>>
>> I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI
>> supports both ethernet and infiniband. Before doing that I tested an
>>
Hi All,
I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI
supports both ethernet and infiniband. Before doing that, I tested an
application, GROMACS, to compare the performance of MPICH2 & OpenMPI. Both
have been compiled with GNU compilers.
After this benchmark, I came to