To: Open MPI Users <us...@open-mpi.org>
Subject: Re: [OMPI users] Performance question about OpenMPI and MVAPICH2 on IB
Hi Neeraj,
Were there specific collectives that were slower? Also, what kind of
cluster were you running on? How many nodes and cores per node?
thanks,
--td
From: Craig Tierney <craig.tier...@noaa.gov>
Subject: Re: [OMPI users] Performance question about OpenMPI and MVAPICH2 on IB
To: Open MPI Users <us...@open-mpi.org>
Message-ID: <4a7b612c.8070...@noaa.gov>
Content-Type: text/plain; charset=ISO-8859-1
From: Craig Tierney <craig.tier...@noaa.gov>
Sent by: users-boun...@open-mpi.org
Date: 08/07/2009 04:43 AM
Please respond to: Open MPI Users <us...@open-mpi.org>
To: Open MPI Users <us...@open-mpi.org>
Subject: Re: [OMPI users] Performance question about OpenMPI and MVAPICH2 on IB
cases.
I will try the above options and get back to you.
Craig
thanks,
--td
Message: 4
Date: Thu, 06 Aug 2009 17:03:08 -0600
From: Craig Tierney <craig.tier...@noaa.gov>
Subject: Re: [OMPI users] Performance question about OpenMPI and MVAPICH2 on IB
To: Open MPI Users <us...@open-mpi.org>
Craig,
Did your affinity script bind the processes per socket or linearly to
cores? If the former, you'll want to look at using rankfiles and place the
ranks base
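For reference, an OpenMPI rankfile for the per-socket placement suggested above might look like the sketch below. The hostname (node01) and topology (two sockets, two cores each) are assumptions for illustration; the slot syntax is socket:core.

```
# Hypothetical rankfile "wrf.rf": spread ranks across sockets first.
rank 0=node01 slot=0:0   # socket 0, core 0
rank 1=node01 slot=1:0   # socket 1, core 0
rank 2=node01 slot=0:1   # socket 0, core 1
rank 3=node01 slot=1:1   # socket 1, core 1
```

It would be passed to the launcher with something like: mpirun -np 4 -rf wrf.rf ./wrf.exe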
To: Open MPI Users <us...@open-mpi.org>
Subject: Re: [OMPI users] Performance question about OpenMPI and MVAPICH2 on IB
Craig,
Let me look at your script, if you'd like... I may be able to help
there. I've also been seeing some "interesting" results for WRF on
OpenMPI, and we may want to see if we're taking complementary approaches...
gerry
Craig Tierney wrote:
Gus Correa wrote:
> Hi Craig, list
>
> I suppose WRF uses MPI collective calls (MPI_Reduce,
> MPI_Bcast, MPI_Alltoall etc),
> just like the climate models we run here do.
> A recursive grep on the source code will tell.
>
I will check this out. I am not the WRF expert, but
I was under the
A followup:
Part of the problem was affinity. I had written a script to do processor
and memory affinity (which works fine with MVAPICH2). It is an
idea that I got from TACC. However, the script didn't seem to
work correctly with OpenMPI (or I still have bugs).
Setting --mca
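For context, a TACC-style affinity wrapper of the kind described above typically looks like the sketch below. This is illustrative, not the actual script from the thread; one plausible failure mode it highlights is that MVAPICH2 and OpenMPI export different local-rank environment variables, so a wrapper written against one launcher can silently fail to bind under the other.

```shell
#!/bin/sh
# Hypothetical affinity wrapper (not the script from this thread).
# Pick up the per-node rank index from whichever MPI launched us:
#   MV2_COMM_WORLD_LOCAL_RANK  - set by MVAPICH2
#   OMPI_COMM_WORLD_LOCAL_RANK - set by OpenMPI
lrank=${MV2_COMM_WORLD_LOCAL_RANK:-${OMPI_COMM_WORLD_LOCAL_RANK:-0}}
# Pin the rank's CPUs and memory to the matching NUMA node.
# Printed as a dry run here; a real wrapper would exec this command:
cmd="numactl --cpunodebind=$lrank --membind=$lrank"
echo "$cmd $*"
```

A real version would be launched as, e.g., mpirun -np 8 ./wrapper.sh ./wrf.exe, with numactl exec'ing the application.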
Hi Craig, list
I suppose WRF uses MPI collective calls (MPI_Reduce,
MPI_Bcast, MPI_Alltoall etc),
just like the climate models we run here do.
A recursive grep on the source code will tell.
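The suggested check is a one-liner; for example (the path WRFV3/ and file extensions are assumptions about the WRF source layout):

```shell
# List source files in the (assumed) WRF tree that call collectives:
grep -rilE 'MPI_(REDUCE|ALLREDUCE|BCAST|ALLTOALL|GATHER)' WRFV3/ \
    --include='*.F' --include='*.f90'
```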
If that is the case, you may need to tune the collectives dynamically.
We are experimenting with tuned
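For context, OpenMPI's "tuned" collective component can be steered from the mpirun command line; a sketch follows (parameter names are from the OpenMPI 1.3 series; the algorithm number is illustrative only, not a recommendation):

```shell
# Inspect the tunables exposed by the "tuned" collective component:
ompi_info --param coll tuned
# Enable dynamic rules and force a particular alltoall algorithm:
mpirun --mca coll_tuned_use_dynamic_rules 1 \
       --mca coll_tuned_alltoall_algorithm 3 \
       -np 16 ./wrf.exe
```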
I am running openmpi-1.3.3 on my cluster which is using
OFED-1.4.1 for Infiniband support. I am comparing performance
between this version of OpenMPI and MVAPICH2, and seeing a
very large difference in performance.
The code I am testing is WRF v3.0.1. I am running the
12km benchmark.
The two