It is worth mentioning that there is a FAQ entry that describes the algorithm:
http://www.open-mpi.org/faq/?category=tcp#tcp-routability-1.3
Sorry for the delay in replying; this mail slipped by me in my inbox.
On Apr 26, 2009, at 11:50 PM, Rangesh Gupta wrote:
Hi all,
I am facing a problem while running OpenFOAM 1.5: the sonicTurbFoam
executable, launched with Open MPI, hangs after some time, and
every time it hangs at
Jeff Squyres wrote:
Sorry for the delay in replying; I kept starting to look into this and
then getting distracted by shiny objects. :-(
OMPI v1.3 actually has a fairly sophisticated TCP address/network
matching algorithm. The hostname resolution shouldn't really be the
issue; OMPI directly queries the
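The FAQ linked above covers the interface-pairing details; when the automatic matching still picks an unwanted network, the TCP BTL can be pinned explicitly. A hedged sketch of the documented MCA parameters (interface names and the hostfile are placeholders):

```shell
# Restrict the TCP BTL to a specific interface (eth0 is a placeholder):
mpirun --mca btl_tcp_if_include eth0 -np 4 -hostfile hosts ./a.out

# Or exclude known-bad interfaces (e.g. loopback or a virtual bridge):
mpirun --mca btl_tcp_if_exclude lo,virbr0 -np 4 -hostfile hosts ./a.out
```

Either parameter overrides the automatic matching for the TCP BTL only; other BTLs are unaffected.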
Jon Mason wrote:
On Wed, May 06, 2009 at 01:20:48PM -0400, Ken Cain wrote:
Thanks Jon. I have some responses inline.
Jon Mason wrote:
On Wed, May 06, 2009 at 12:15:19PM -0400, Ken Cain wrote:
I am trying to run NetPIPE-3.7.1 NPmpi using Open MPI version 1.3.2
with the openib btl in an
On May 5, 2009, at 10:01 AM, Matthieu Brucher wrote:
> What Terry said is correct. It means that "mpirun" will use, under the
> covers, the "native" launching mechanism of LSF to launch jobs (vs.,
> say, rsh or ssh). It'll also discover the hosts to use for this job
> without the use of
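The behavior described above means the job script needs neither a hostfile nor an explicit host list; an LSF-aware mpirun reads the allocation from LSF itself. A sketch of such a submission script (queue name, slot count, and application name are placeholders):

```shell
#!/bin/sh
# Hypothetical LSF submission: bsub allocates the slots, and an
# LSF-aware Open MPI launches through LSF's native mechanism instead
# of rsh/ssh. No -hostfile or -np is needed; mpirun reads the
# allocation that LSF granted to this job.
#BSUB -q normal
#BSUB -n 16
mpirun ./my_mpi_app
```

Submitted with `bsub < job.sh`, mpirun would start one process per allocated slot.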
On May 4, 2009, at 10:54 AM, Ricardo Fernández-Perea wrote:
I finally have the opportunity to run the imb-3.2 benchmark over Myrinet.
I am running on a cluster of 16 Xserve nodes connected with Myrinet;
15 of them have 8 cores and the last one has 4, for a limit of 124
processes
Thanks Jon. I have some responses inline.
Jon Mason wrote:
On Wed, May 06, 2009 at 12:15:19PM -0400, Ken Cain wrote:
I am trying to run NetPIPE-3.7.1 NPmpi using Open MPI version 1.3.2 with
the openib btl in an OFED-1.4 environment. The system environment is two
Linux (2.6.27) ppc64 blades,
I am trying to run NetPIPE-3.7.1 NPmpi using Open MPI version 1.3.2 with
the openib btl in an OFED-1.4 environment. The system environment is two
Linux (2.6.27) ppc64 blades, each with one Chelsio RNIC device,
interconnected by a 10GbE switch. The problem is that I cannot (using
Open MPI)
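For reference, a two-rank invocation for this kind of point-to-point test might look as follows; the host names are placeholders, and forcing the openib BTL makes Open MPI fail loudly rather than silently fall back to TCP:

```shell
# Restrict the run to the openib BTL (plus self for process loopback)
# so a problem with the RNIC surfaces as an error instead of a quiet
# fallback to the TCP BTL over the same 10GbE switch:
mpirun -np 2 -H blade1,blade2 --mca btl openib,self ./NPmpi
```
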
Sorry, I don't understand; how can I try the Fortran from MacPorts?
2009/5/6 Luis Vitorio Cargnini
This problem is occurring because the Fortran was not compiled with
debug symbols:
warning: Could not find object file "/Users/admin/build/i386-apple-
darwin9.0.0/libgcc/_udiv_w_sdiv_s.o" - no debug information available
for "../../../gcc-4.3-20071026/libgcc/../gcc/libgcc2.c".
Is the same
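The warning above concerns libgcc itself having been built without debug info, which is mostly harmless; for one's own code the fix is simply compiling with `-g`. A runnable sketch of the general idea, using a throwaway C file since the same flag applies to gfortran/mpif90:

```shell
# Build a trivial program with and without -g and check whether the
# binary carries a .debug_info section (present only in the -g build).
cat > demo.c <<'EOF'
int main(void) { return 0; }
EOF
gcc -g -O0 -o demo_dbg demo.c
gcc -O0 -o demo_nodbg demo.c
readelf -S demo_dbg | grep -q '\.debug_info' && echo "debug build: has debug info"
readelf -S demo_nodbg | grep -q '\.debug_info' || echo "plain build: no debug info"
```
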
My $0.02 of contribution: try MacPorts.
On 2009-05-04, at 11:42, Jeff Squyres wrote:
FWIW, I don't use Xcode, but I use the precompiled gcc/gfortran from
here with good success:
http://hpc.sourceforge.net/
On May 4, 2009, at 11:38 AM, Warner Yuen wrote:
Have you installed a Fortran
I'm sorry if I didn't say it before; the tests were run with commands
like the following:
/opt/openmpi/bin/mpirun --bynode --mca pml cm --mca mtl mx -np 124 -hostfile
hostfile IMB-MPI1 [testname] 1>IMB1-[testname].results 2>&1
Ricardo
On Mon, May 4, 2009 at 5:36 PM, Bogdan Costescu <
On May 6, 2009, at 8:41 AM, MKondrin wrote:
I have some doubts about memory management in Open MPI. Are there
alternatives to the MCA memory component (currently ptmalloc2),
just for testing?
I'm not sure what you're asking. Are you asking how to disable the
Open MPI memory
Hello!
I have some doubts about memory management in Open MPI. Are there
alternatives to the MCA memory component (currently ptmalloc2),
just for testing?
M.Kondrin
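For completeness, one hedged way to take the ptmalloc2-based memory manager out of the picture for testing is at configure time; the flag and prefix below are a sketch based on the 1.3-series build options, not a recommendation:

```shell
# Rebuild Open MPI with no memory manager at all, into a separate
# prefix so the normal installation stays untouched:
./configure --without-memory-manager --prefix=/opt/openmpi-nomem
make all install
```

Registered-memory caching in the openib BTL depends on the memory manager, so a build like this is useful for isolating ptmalloc2 issues rather than for production.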