After reading http://blogs.cisco.com/performance/more_traffic/
I understand that if a large message is sent and then a short message is sent,
the short message can arrive first. But what if the messages have the same
size and are small enough that no fragmentation occurs? Is the ordering
Have a read of this blog entry and see if it helps:
http://blogs.cisco.com/performance/more_traffic/
On Jan 24, 2012, at 7:35 PM, Mateus Augusto wrote:
> I would like to know whether MPI is a FIFO (first in, first out) channel, i.e.,
> if message A is sent before message B, does MP
I would like to know whether MPI is a FIFO (first in, first out) channel, i.e., if
message A is sent before message B, does MPI guarantee that A will
be received before B at the recipient?
Does MPI guarantee that A will always be received first?
Or may B sometimes be received first?
A
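In case it helps, here is a minimal sketch (mine, not from the original thread) of the guarantee the MPI standard does give: between a single sender and a single receiver on the same communicator, matching receives are non-overtaking, so the receives below always see A before B, whatever the message sizes or any fragmentation inside the library:

/* Minimal sketch, illustrative only: two ranks, same communicator, same tag.
 * The non-overtaking rule means the first receive matches A and the second
 * matches B, regardless of message size. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, a = 1, b = 2;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        MPI_Send(&a, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* message A */
        MPI_Send(&b, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* message B */
    } else if (rank == 1) {
        MPI_Recv(&a, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE); /* gets A */
        MPI_Recv(&b, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE); /* gets B */
        printf("received %d then %d\n", a, b);
    }
    MPI_Finalize();
    return 0;
}

Note that this ordering holds per sender/receiver pair on one communicator; nothing is guaranteed about ordering across different senders or different communicators.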
Hello All,
I am trying to find out how many separate connections are opened by MPI as
messages are sent. Basically, I have threaded MPI calls to a bunch of different
MPI processes (which, in turn, make threaded MPI calls).
The point is: with every thread added, are new ports opened (even if the
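As an aside (not an answer to the ports question): when making MPI calls from multiple threads like this, the usual first step is to request and verify MPI_THREAD_MULTIPLE at initialization, independent of how many connections the library actually opens underneath. A minimal sketch:

/* Minimal sketch, illustrative only: request full thread support and check
 * what the library actually granted before letting threads call MPI. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available (got level %d)\n", provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    /* ... threads may now make concurrent MPI calls ... */
    MPI_Finalize();
    return 0;
}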
Hello,
I am having trouble compiling Open MPI (version 1.5.5rc2r25765, a nightly
build) on MinGW. I am running Windows 7 and the latest version of MinGW.
I keep on getting the following error:
In file included from ../../opal/include/opal_config_bottom.h:258:0,
from ../../opal/i
One more thing to check: are you building on a networked filesystem, and is the
client on which you are building not time-synchronized with the file server?
If you are not building on a networked file system, or if you are building on
NFS and the time is NTP-synchronized between client and ser
Hello Jeff,
No. I did not run autogen.sh. I just did the three steps that you showed.
Will log files be of any help?
(Also, if the log files are not generated by just piping or tee'ing the output,
please let me know.)
Thanks a lot.
Best
Devendra
From: Jeff Squy
Did you try running autogen.sh?
You should not need to -- you should only need to:
./configure ...
make all
make install
On Jan 24, 2012, at 1:38 PM, devendra rai wrote:
> Hello All,
>
> I am trying to build Open MPI on a server (I do not have sudo on this server).
>
> When running make, I ge
Hello All,
I am trying to build Open MPI on a server (I do not have sudo on this server).
When running make, I get this error:
libtool: compile: gcc -DHAVE_CONFIG_H -I. -I../../opal/include
-I../../orte/include -I../../ompi/include
-I../../opal/mca/paffinity/linux/plpa/src/libplpa -I../.. -I/u
Ilias: Have you simply tried building Open MPI with flags that force static
linking? E.g., something like this:
./configure --enable-static --disable-shared LDFLAGS=-Wl,-static
I.e., put in LDFLAGS whatever flags your compiler/linker needs to force static
linking. These LDFLAGS will be appl
I was wondering if anyone can comment on the current state of support for
the openib btl when MPI_THREAD_MULTIPLE is enabled.
regards,
Ron Heerema
Good point! I'm traveling this week with limited resources, but will try to
address when able.
Sent from my iPad
On Jan 24, 2012, at 7:07 AM, Reuti wrote:
> On 24.01.2012 at 15:49, Ralph Castain wrote:
>
>> I'm a little confused. Building the procs statically makes sense, as libraries may
>> not be
Ralph's fix has now been committed to the v1.5 trunk (yesterday).
Did that fix it?
On Jan 22, 2012, at 3:40 PM, Mike Dubman wrote:
> it was compiled with the same ompi.
> We see it occasionally on different clusters with different ompi folders.
> (all v1.5)
>
> On Thu, Jan 19, 2012 at 5:44 PM
On 24.01.2012 at 15:49, Ralph Castain wrote:
> I'm a little confused. Building the procs statically makes sense, as libraries may not
> be available on the compute nodes. However, mpirun is only executed in one place,
> usually the head node where it was built. So there is less reason to build it
> purely
I'm a little confused. Building the procs statically makes sense, as libraries may not
be available on the compute nodes. However, mpirun is only executed in one place,
usually the head node where it was built. So there is less reason to build it
purely statically.
Are you trying to move mpirun somewhere? Or is
Dear experts,
following http://www.open-mpi.org/faq/?category=building#static-build I
successfully built a static Open MPI library.
Using this library I succeeded in building a parallel static executable,
dirac.x (ldd dirac.x: not a dynamic executable).
The problem remains, however, with
Sorry - I didn't get to it prior to catching a plane. Will try to resolve it
upon return later this week.
Sent from my iPad
On Jan 23, 2012, at 10:01 AM, "MM" wrote:
>> -Original Message-
>> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
>> Behalf Of Ralph Cast