Re: [OMPI users] Are messages delivered in order in MPI?

2012-01-24 Thread Mateus Augusto
After reading http://blogs.cisco.com/performance/more_traffic/ I understood that if a large message is sent and then a short message is sent, the short message can arrive first. But what if the messages have the same size and are small enough that no fragmentation occurs: is the ordering
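
For context: the MPI standard's non-overtaking rule guarantees that two point-to-point messages from the same sender to the same receiver are matched in the order they were sent whenever they use the same communicator and can match the same receive, regardless of message size or fragmentation; the blog post above concerns wire-level traffic, not matching order. A minimal C sketch of the guarantee (the file name order.c is illustrative):

    /* order.c - non-overtaking: two same-tag sends from rank 0 to
     * rank 1 are matched in the order they were posted, whatever
     * their sizes. Build: mpicc order.c -o order; run with -np 2. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            int a = 1, b = 2;
            MPI_Send(&a, 1, MPI_INT, 1, 99, MPI_COMM_WORLD); /* message A */
            MPI_Send(&b, 1, MPI_INT, 1, 99, MPI_COMM_WORLD); /* message B */
        } else if (rank == 1) {
            int first, second;
            /* The first receive always matches A and the second B:
             * matching order is preserved per sender/receiver pair
             * on a given communicator and tag. */
            MPI_Recv(&first, 1, MPI_INT, 0, 99, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Recv(&second, 1, MPI_INT, 0, 99, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("first=%d second=%d\n", first, second); /* always 1 then 2 */
        }

        MPI_Finalize();
        return 0;
    }

What can differ between a large and a small message is the completion timing of the underlying transfer, not which receive each message matches.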

Re: [OMPI users] Are messages delivered in order in MPI?

2012-01-24 Thread Jeff Squyres
Have a read of this blog entry and see if it helps: http://blogs.cisco.com/performance/more_traffic/ On Jan 24, 2012, at 7:35 PM, Mateus Augusto wrote: > I would like to know whether MPI is a FIFO (first in, first out) channel, i.e., > if message A is sent before message B, then MP

[OMPI users] Are messages delivered in order in MPI?

2012-01-24 Thread Mateus Augusto
I would like to know whether MPI is a FIFO (first in, first out) channel, i.e., if message A is sent before message B, does MPI guarantee that A will be received before B at the recipient? Does MPI guarantee that A is always received first, or may B sometimes be received first? A

[OMPI users] OpenMPI: How many connections?

2012-01-24 Thread devendra rai
Hello All, I am trying to find out how many separate connections are opened by MPI as messages are sent. Basically, I have threaded MPI calls to a bunch of different MPI processes (which, in turn, make threaded MPI calls). The point is: with every thread added, are new ports opened (even if the
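
Whether extra ports are opened per thread is an implementation detail of Open MPI's transport components rather than something the MPI standard specifies; for the TCP BTL, connections are typically established per pair of communicating processes, not per calling thread, though this is worth verifying against your particular release. A sketch of the threaded-send pattern being described, assuming MPI_THREAD_MULTIPLE is available (thread count and tags are illustrative):

    /* threads.c - several threads in one MPI process sending
     * concurrently; requires MPI_THREAD_MULTIPLE support.
     * Build: mpicc -pthread threads.c -o threads */
    #include <mpi.h>
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4  /* illustrative thread count */

    static void *worker(void *arg)
    {
        int tag = (int)(long)arg;
        int payload = tag;
        /* All threads send to rank 0; whether this reuses one
         * connection or opens more is a transport-level detail,
         * not something MPI itself mandates. */
        MPI_Send(&payload, 1, MPI_INT, 0, tag, MPI_COMM_WORLD);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided, rank, size;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            fprintf(stderr, "MPI_THREAD_MULTIPLE not provided\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            int i, msg;
            for (i = 0; i < NTHREADS * (size - 1); i++)
                MPI_Recv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            pthread_t t[NTHREADS];
            long i;
            for (i = 0; i < NTHREADS; i++)
                pthread_create(&t[i], NULL, worker, (void *)i);
            for (i = 0; i < NTHREADS; i++)
                pthread_join(t[i], NULL);
        }

        MPI_Finalize();
        return 0;
    }

On Linux, inspecting a running rank's open sockets (for example via /proc/<pid>/fd) is one way to answer the ports question empirically for a given Open MPI version.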

[OMPI users] OpenMPI in MinGW

2012-01-24 Thread Temesghen Kahsai
Hello, I am having trouble compiling Open MPI (version 1.5.5rc2r25765 - nightly build) on MinGW. I am running Windows 7 and the latest version of MinGW. I keep getting the following error: In file included from ../../opal/include/opal_config_bottom.h:258:0, from ../../opal/i

Re: [OMPI users] Can't build OpenMPI!

2012-01-24 Thread Jeff Squyres
One more thing to check: are you building on a networked filesystem, with the client on which you are building not time-synchronized with the file server? If you are not building on a networked file system, or if you are building on an NFS and the time is NTP-synchronized between client and ser

Re: [OMPI users] Can't build OpenMPI!

2012-01-24 Thread devendra rai
Hello Jeff, No, I did not run autogen.sh. I just did the three steps that you showed. Will log files be of any help? (Also, if the log files are not generated by just piping or teeing, please let me know.) Thanks a lot. Best, Devendra From: Jeff Squy

Re: [OMPI users] Can't build OpenMPI!

2012-01-24 Thread Jeff Squyres
Did you try running autogen.sh? You should not need to -- you should only need to: ./configure ...; make all; make install. On Jan 24, 2012, at 1:38 PM, devendra rai wrote: > Hello All, > > I am trying to build Open MPI on a server (I do not have sudo on this server). > > When running make, I ge

[OMPI users] Can't build OpenMPI!

2012-01-24 Thread devendra rai
Hello All, I am trying to build Open MPI on a server (I do not have sudo on this server). When running make, I get this error: libtool: compile: gcc -DHAVE_CONFIG_H -I. -I../../opal/include -I../../orte/include -I../../ompi/include -I../../opal/mca/paffinity/linux/plpa/src/libplpa -I../.. -I/u

Re: [OMPI users] pure static "mpirun" launcher

2012-01-24 Thread Jeff Squyres
Ilias: Have you simply tried building Open MPI with flags that force static linking? E.g., something like this: ./configure --enable-static --disable-shared LDFLAGS=-Wl,-static I.e., put in LDFLAGS whatever flags your compiler/linker needs to force static linking. These LDFLAGS will be appl

[OMPI users] openib btl and MPI_THREAD_MULTIPLE

2012-01-24 Thread Ronald Heerema
I was wondering if anyone can comment on the current state of support for the openib btl when MPI_THREAD_MULTIPLE is enabled. regards, Ron Heerema
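
Independent of the openib-specific answer, a quick runtime probe shows which thread level an installed library actually provides; note that the returned value reflects how the library itself was configured at build time, and per-component (btl-level) thread safety is a separate question. A minimal sketch (the file name probe.c is illustrative):

    /* probe.c - report the thread level the MPI library provides.
     * Build: mpicc probe.c -o probe; a single rank is enough. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        printf("requested MPI_THREAD_MULTIPLE (%d), provided %d\n",
               MPI_THREAD_MULTIPLE, provided);
        MPI_Finalize();
        return 0;
    }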

Re: [OMPI users] pure static "mpirun" launcher

2012-01-24 Thread Ralph Castain
Good point! I'm traveling this week with limited resources, but will try to address when able. Sent from my iPad On Jan 24, 2012, at 7:07 AM, Reuti wrote: > Am 24.01.2012 um 15:49 schrieb Ralph Castain: > >> I'm a little confused. Building procs static makes sense as libraries may >> not be

Re: [OMPI users] Possible bug in finalize, OpenMPI v1.5, head revision

2012-01-24 Thread Jeff Squyres
Ralph's fix has now been committed to the v1.5 trunk (yesterday). Did that fix it? On Jan 22, 2012, at 3:40 PM, Mike Dubman wrote: > it was compiled with the same ompi. > We see it occasionally on different clusters with different ompi folders. > (all v1.5) > > On Thu, Jan 19, 2012 at 5:44 PM

Re: [OMPI users] pure static "mpirun" launcher

2012-01-24 Thread Reuti
Am 24.01.2012 um 15:49 schrieb Ralph Castain: > I'm a little confused. Building procs static makes sense as libraries may not > be available on compute nodes. However, mpirun is only executed in one place, > usually the head node where it was built. So there is less reason to build it > purely

Re: [OMPI users] pure static "mpirun" launcher

2012-01-24 Thread Ralph Castain
I'm a little confused. Building procs static makes sense as libraries may not be available on compute nodes. However, mpirun is only executed in one place, usually the head node where it was built. So there is less reason to build it purely static. Are you trying to move mpirun somewhere? Or is

[OMPI users] pure static "mpirun" launcher

2012-01-24 Thread Ilias Miroslav
Dear experts, following http://www.open-mpi.org/faq/?category=building#static-build I successfully built a static Open MPI library. Using this library I succeeded in building a parallel static executable, dirac.x (ldd dirac.x: not a dynamic executable). The problem remains, however, with

Re: [OMPI users] localhost only

2012-01-24 Thread Ralph Castain
Sorry - I didn't get to it prior to catching a plane. Will try to resolve it upon return later this week. Sent from my iPad On Jan 23, 2012, at 10:01 AM, "MM" wrote: >> -Original Message- >> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On >> Behalf Of Ralph Cast