Hello All,
This may not be something related to the forum, so sorry for asking this
first of all :). Currently I have been working on an implementation of
parallel Quicksort using MPI and now I need some standard parallel Quicksort
implementation(s) for a performance evaluation. So can someone reco
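(Not any of the standard implementations being asked for, just a minimal sketch of one common MPI sorting pattern: scatter the input, quicksort each chunk locally, then gather and merge on the root. The element count, the RNG seed and the assumption that N divides evenly by the number of ranks are illustrative choices, not part of the original post.)

/* Minimal MPI "parallel quicksort" sketch: scatter, local qsort, gather,
 * then a simple pairwise merge on the root.  N is assumed divisible by
 * the number of ranks to keep the example short. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int N = 1 << 20;                 /* total elements (assumed divisible by size) */
    int chunk = N / size;
    int *local = malloc(chunk * sizeof(int));
    int *data = NULL;

    if (rank == 0) {                       /* root generates the unsorted input */
        data = malloc(N * sizeof(int));
        srand(42);
        for (int i = 0; i < N; i++) data[i] = rand();
    }

    /* distribute equal chunks, sort each chunk locally with quicksort */
    MPI_Scatter(data, chunk, MPI_INT, local, chunk, MPI_INT, 0, MPI_COMM_WORLD);
    qsort(local, chunk, sizeof(int), cmp_int);

    /* gather the sorted chunks back and merge them pairwise on the root */
    MPI_Gather(local, chunk, MPI_INT, data, chunk, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        int *tmp = malloc(N * sizeof(int));
        for (int width = chunk; width < N; width *= 2) {
            for (int lo = 0; lo < N; lo += 2 * width) {
                int mid = lo + width, hi = lo + 2 * width;
                if (mid > N) mid = N;
                if (hi > N) hi = N;
                int i = lo, j = mid, k = lo;
                while (i < mid && j < hi)
                    tmp[k++] = (data[i] <= data[j]) ? data[i++] : data[j++];
                while (i < mid) tmp[k++] = data[i++];
                while (j < hi)  tmp[k++] = data[j++];
            }
            int *swap = data; data = tmp; tmp = swap;   /* merged result is in data */
        }
        for (int i = 1; i < N; i++)
            if (data[i - 1] > data[i]) { printf("not sorted!\n"); break; }
        free(tmp);
        free(data);
    }
    free(local);
    MPI_Finalize();
    return 0;
}

Compile with mpicc and launch with mpirun -np <ranks> as usual.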
On Thursday 06 August 2009 10:17:36 Prasadcse Perera wrote:
> Hello All,
> This may not be something related to the forum, so sorry for asking this
> first of all :). Currently I have been working on an implementation of
> parallel Quicksort using MPI and now I need some standard parallel
> Quickso
Pasha,
see attached file.
I have traced how MPI_IPROBE is called and also managed to significantly
reduce the number of calls to MPI_IPROBE. Unfortunately this only
resulted in the program spending time in other routines. Basically the
code runs through a number of timesteps and after each time
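(The code in question isn't shown on this page, so the following is only a hypothetical illustration of the kind of polling loop that inflates MPI_Iprobe counts, next to one common alternative, a pre-posted MPI_Irecv tested with MPI_Test. The tag, buffer length and do_local_work() stub are assumptions, not the application being traced.)

/* Hypothetical illustration only: a probe-per-work-unit loop versus a
 * pre-posted nonblocking receive. */
#include <mpi.h>

#define WORK_TAG 1
#define BUF_LEN  1024

static int work_left = 1000;                   /* stand-in for per-timestep work */
static int do_local_work(void) { return --work_left > 0; }

/* Pattern 1: probe once per work unit -> many MPI_Iprobe calls */
static void poll_with_iprobe(MPI_Comm comm)
{
    double buf[BUF_LEN];
    int flag;
    MPI_Status st;
    while (do_local_work()) {
        MPI_Iprobe(MPI_ANY_SOURCE, WORK_TAG, comm, &flag, &st);
        if (flag)
            MPI_Recv(buf, BUF_LEN, MPI_DOUBLE, st.MPI_SOURCE, WORK_TAG,
                     comm, MPI_STATUS_IGNORE);
    }
}

/* Pattern 2: post the receive once, then cheaply test it */
static void poll_with_irecv(MPI_Comm comm)
{
    double buf[BUF_LEN];
    int flag;
    MPI_Request req;
    MPI_Irecv(buf, BUF_LEN, MPI_DOUBLE, MPI_ANY_SOURCE, WORK_TAG, comm, &req);
    while (do_local_work()) {
        MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
        if (flag)                              /* message arrived; re-post */
            MPI_Irecv(buf, BUF_LEN, MPI_DOUBLE, MPI_ANY_SOURCE, WORK_TAG,
                      comm, &req);
    }
    MPI_Cancel(&req);                          /* clean up the outstanding receive */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    poll_with_iprobe(MPI_COMM_WORLD);          /* pattern that drives up the call count */
    work_left = 1000;
    poll_with_irecv(MPI_COMM_WORLD);           /* far fewer probe-style calls */
    MPI_Finalize();
    return 0;
}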
Here's an interesting data point. I installed the RHEL rpm version of
OpenMPI 1.2.7-6 for ia64
mpirun -np 2 -mca btl self,sm -mca mpi_paffinity_alone 1 -mca
mpi_leave_pinned 1 $PWD/IMB-MPI1 pingpong
With v1.3 and -mca btl self,sm I get ~150MB/sec
With v1.3 and -mca btl self,tcp I get ~550MB/sec
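(For anyone wanting to reproduce this outside of IMB: a stripped-down ping-pong kernel in the same spirit is sketched below. It is not IMB; the 4 MB message size and the iteration count are arbitrary, and the reported MB/sec counts bytes moved in both directions over the elapsed time.)

/* Minimal two-rank ping-pong bandwidth check (not IMB; message size and
 * iteration count are arbitrary).  Run e.g. with:
 *   mpirun -np 2 -mca btl self,sm ./pingpong   (flags as in the post above) */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int len = 4 * 1024 * 1024;       /* 4 MB message */
    const int iters = 100;
    char *buf = malloc(len);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        /* each iteration moves the message there and back: 2 * len bytes */
        double mb = 2.0 * len * iters / (1024.0 * 1024.0);
        printf("%.1f MB/sec\n", mb / (t1 - t0));
    }
    free(buf);
    MPI_Finalize();
    return 0;
}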
Thanks a lot, I really appreciate it! Now I'm on my way to install
OpenFOAM and try it out.
On Wed, Aug 5, 2009 at 11:59 PM, Mattijs Janssens
wrote:
> On Thursday 06 August 2009 10:17:36 Prasadcse Perera wrote:
> > Hello All,
> > This may not be something related to the forum, so sorry for askin
I'm not sure what you're asking -- mpicxx just fork/exec's the
underlying compiler (you can "mpicxx --showme" to see what it does).
What do you need it to do with LD_RUN_PATH?
On Aug 3, 2009, at 4:21 PM, John R. Cary wrote:
In the latest versions of libtool, the runtime library path is
enc
Any chance you could re-try the experiment with Open MPI 1.3.3?
On Aug 4, 2009, at 11:10 AM, Hoelzlwimmer Andreas - S0810595005 wrote:
Hello,
I’ve wanted to run MPI on a couple of PS3s here. According to a
colleague who set it up, I had to set several HugePages. As the PS3
RAM is limited I
On Aug 4, 2009, at 5:15 PM, Jean-Christophe Ducom wrote:
When I try
dqcneh001$ mpirun -np 1 -H dqcneh002 -mca plm_rsh_agent
"/usr/kerberos/bin/rsh -F" klist
klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_p3651)
Kerberos 4 ticket cache: /tmp/tkt82784
klist: You have no tic
Sorry for not replying earlier -- travel to the MPI Forum last week
put me way behind on my INBOX. :-(
I don't think you want to "printenv > ~/.ssh/environment" -- you don't/
can't know for sure that the remote environment should be exactly the
same as your local environment.
Instead,
Hi Jeff,
thank you very much for your reply! :)
the problem wasn't only in the OMPI libs and bins; it was in other
binaries as well: the OpenFOAM simulation suite is also installed locally, so
a short PATH couldn't be informative enough.
Actually, I know that the environment is exactly the same, bec
I am running openmpi-1.3.3 on my cluster, which is using
OFED-1.4.1 for Infiniband support. I am comparing performance
between this version of OpenMPI and Mvapich2, and seeing a
very large difference in performance.
The code I am testing is WRF v3.0.1. I am running the
12km benchmark.
The two bu
On Mon, Jul 13, 2009 at 01:24:54PM -0400, Mark Borgerding wrote:
>
> Here's my advice: Don't trust anyone's advice. Benchmark it yourself and
> see.
>
> The problems vary so wildly that only you can tell if your problem will
> benefit from over-subscription. It really depends on too many facto
Hi Craig, list
I suppose WRF uses MPI collective calls (MPI_Reduce,
MPI_Bcast, MPI_Alltoall etc),
just like the climate models we run here do.
A recursive grep on the source code will tell.
If that is the case, you may need to tune the collectives dynamically.
We are experimenting with tuned col
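(A small, self-contained timing program for the collectives named above can help when experimenting with collective tuning, since it isolates them from the rest of the model. The sketch below is such a program; the per-rank buffer size and repetition count are arbitrary assumptions and have nothing to do with WRF.)

/* Times MPI_Bcast, MPI_Reduce and MPI_Alltoall in isolation.  Buffer
 * sizes and repeat count are arbitrary; they are not taken from WRF. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 64 * 1024;               /* doubles per rank, arbitrary */
    const int reps = 50;
    double *snd = calloc((size_t)n * size, sizeof(double));
    double *rcv = calloc((size_t)n * size, sizeof(double));

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++)
        MPI_Bcast(snd, n, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    double t_bcast = (MPI_Wtime() - t0) / reps;

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++)
        MPI_Reduce(snd, rcv, n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    double t_reduce = (MPI_Wtime() - t0) / reps;

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++)
        MPI_Alltoall(snd, n, MPI_DOUBLE, rcv, n, MPI_DOUBLE, MPI_COMM_WORLD);
    double t_alltoall = (MPI_Wtime() - t0) / reps;

    if (rank == 0)
        printf("bcast %.3e s  reduce %.3e s  alltoall %.3e s (avg per call)\n",
               t_bcast, t_reduce, t_alltoall);

    free(snd);
    free(rcv);
    MPI_Finalize();
    return 0;
}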
A followup
Part of the problem was affinity. I had written a script to do processor
and memory affinity (which works fine with MVAPICH2). It is an
idea that I got from TACC. However, the script didn't seem to
work correctly with OpenMPI (or I still have bugs).
Setting --mca mpi_paffinity_alone
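(One way to confirm what an affinity script, or mpi_paffinity_alone, actually did is to have every rank report the cores it ended up bound to. The Linux-specific sketch below does that with sched_getaffinity; it is only an illustration, not the TACC-style script discussed here.)

/* Linux-specific check: each rank reports the cores it is currently bound
 * to, so an external affinity script can be verified. */
#define _GNU_SOURCE
#include <mpi.h>
#include <sched.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char host[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(host, &len);

    cpu_set_t mask;
    CPU_ZERO(&mask);
    if (sched_getaffinity(0, sizeof(mask), &mask) == 0) {
        char cpus[1024] = "";
        int pos = 0;
        /* list every CPU present in this process's affinity mask */
        for (int c = 0; c < CPU_SETSIZE && pos < (int)sizeof(cpus) - 8; c++)
            if (CPU_ISSET(c, &mask))
                pos += snprintf(cpus + pos, sizeof(cpus) - pos, "%d ", c);
        printf("rank %d on %s bound to cpus: %s\n", rank, host, cpus);
    }

    MPI_Finalize();
    return 0;
}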
Gus Correa wrote:
> Hi Craig, list
>
> I suppose WRF uses MPI collective calls (MPI_Reduce,
> MPI_Bcast, MPI_Alltoall etc),
> just like the climate models we run here do.
> A recursive grep on the source code will tell.
>
I will check this out. I am not the WRF expert, but
I was under the impre
Craig,
Let me look at your script, if you'd like... I may be able to help
there. I've also been seeing some "interesting" results for WRF on
OpenMPI, and we may want to see if we're taking complementary approaches...
gerry
Craig Tierney wrote:
A followup
Part of the problem was affinity.
On Aug 6, 2009, at 2:43 PM, Tomislav Maric wrote:
the problem wasn't only in the OMPI libs and bins; it was in other
binaries as well: the OpenFOAM simulation suite is also installed
locally, so
a short PATH couldn't be informative enough.
Actually, I know that the environment is exactly the sam