On 10/20/2010 8:30 PM, Scott Atchley wrote:
Are you building OMPI with support for both MX and IB? If not and you only want
MX support, try configuring OMPI using --disable-memory-manager (check
configure for the exact option).
We have fixed this bug in the most recent 1.4.x and 1.5.x
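For reference, the configure line could look something like this (the exact memory-manager flag name should be verified against ./configure --help, and the prefix and MX install paths are only placeholders):

  ./configure --prefix=/opt/openmpi-1.4.2 \
              --with-mx=/opt/mx \
              --disable-memory-manager
  make all install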
On 10/20/2010 8:30 PM, Scott Atchley wrote:
On Oct 20, 2010, at 9:22 PM, Raymond Muno wrote:
On 10/20/2010 7:59 PM, Ralph Castain wrote:
The error message seems to imply that mpirun itself didn't segfault, but that
something else did. Is that segfault pid from mpirun?
This kind of problem
On Oct 20, 2010, at 9:22 PM, Raymond Muno wrote:
> On 10/20/2010 7:59 PM, Ralph Castain wrote:
>> The error message seems to imply that mpirun itself didn't segfault, but
>> that something else did. Is that segfault pid from mpirun?
>>
>> This kind of problem usually is caused by mismatched
On 10/20/2010 7:59 PM, Ralph Castain wrote:
The error message seems to imply that mpirun itself didn't segfault, but that
something else did. Is that segfault pid from mpirun?
This kind of problem usually is caused by mismatched builds - i.e., you compile
against your new build, but you pick
The error message seems to imply that mpirun itself didn't segfault, but that
something else did. Is that segfault pid from mpirun?
This kind of problem usually is caused by mismatched builds - i.e., you compile
against your new build, but you pick up the Myrinet build when you try to run
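A quick sanity check for that kind of mismatch is to confirm which mpirun and which MPI libraries the job actually picks up at run time, for example (the application name is just an example):

  which mpirun
  mpirun --version             # should report the newly built Open MPI
  ldd ./my_app | grep -i mpi   # libraries should point at the new install, not the old one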
We are doing a test build of a new cluster. We are re-using our
Myrinet 10G gear from a previous cluster.
I have built OpenMPI 1.4.2 with PGI 10.4. We use this regularly on
our Infiniband based cluster and all the install elements were readily
available.
With a few go-arounds with the
Dear all,
I am confused by my recent C++ MPI program's behavior. I have an MPI
program in which I use clock() to measure the time spent between two
MPI_Barrier calls, like this:
MPI::COMM_WORLD.Barrier();
if (rank == master) t1 = clock();
"code A";
MPI::COMM_WORLD.Barrier();
if (rank ==
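For reference, a self-contained version of that timing pattern might look like the sketch below ("code A" stands for whatever section is being timed; MPI::Wtime() is shown as a wall-clock alternative, since clock() counts CPU time rather than elapsed time):

  #include <mpi.h>
  #include <iostream>

  int main(int argc, char* argv[])
  {
      MPI::Init(argc, argv);
      const int master = 0;
      int rank = MPI::COMM_WORLD.Get_rank();
      double t1 = 0.0, t2 = 0.0;

      MPI::COMM_WORLD.Barrier();
      if (rank == master) t1 = MPI::Wtime();   // wall-clock; clock() counts CPU time only
      /* "code A" goes here */
      MPI::COMM_WORLD.Barrier();
      if (rank == master) t2 = MPI::Wtime();

      if (rank == master)
          std::cout << "elapsed: " << (t2 - t1) << " seconds" << std::endl;

      MPI::Finalize();
      return 0;
  }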
If you want to use the STL with MPI, your best bet is the boost.mpi library.
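A minimal sketch of what that can look like (the vector contents here are made up; boost.mpi serializes STL containers such as std::vector as long as the element type is serializable):

  #include <boost/mpi.hpp>
  #include <boost/serialization/vector.hpp>
  #include <vector>

  namespace mpi = boost::mpi;

  int main(int argc, char* argv[])
  {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      if (world.rank() == 0) {
          std::vector<double> data(100, 3.14);   // dynamically sized payload
          world.send(1, 0, data);                // dest, tag, value
      } else if (world.rank() == 1) {
          std::vector<double> data;
          world.recv(0, 0, data);                // the library handles the size
      }
      return 0;
  }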
On Oct 19, 2010, at 4:40 PM, Jack Bryan wrote:
> Hi,
>
> I need to design a data structure to transfer data between nodes on Open MPI
> system.
>
> Some elements of the structure have dynamic size.
>
> For
Can you remove the -with-threads and -enable-mpi-threads options from
the configure line and see if that helps your 32 bit problem any?
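That is, re-run configure with the same options as before but with those two flags left off, e.g. (placeholders for whatever else is already on your line):

  # same configure line as before, just without --with-threads / --enable-mpi-threads
  ./configure --prefix=<prefix> <your other existing options>
  make clean all install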
--td
On 10/20/2010 09:38 AM, Siegmar Gross wrote:
Hi,
I have built Open MPI 1.5 on Linux x86_64 with the Oracle/Sun Studio C
compiler. Unfortunately
Thanks Dick, Eugene. That's what I figured. I was just hoping there might
be some more obscure MPI functions that might do what I want. I'll go ahead
and write my own yielding wrapper on irecv.
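Roughly along these lines (yielding_recv and the 1 ms sleep interval are just placeholder choices, not anything from the MPI standard or Open MPI):

  #include <mpi.h>
  #include <unistd.h>   /* usleep */

  /* Blocking receive that polls MPI_Test and yields the CPU between polls. */
  int yielding_recv(void *buf, int count, MPI_Datatype type,
                    int source, int tag, MPI_Comm comm, MPI_Status *status)
  {
      MPI_Request req;
      int done = 0;

      MPI_Irecv(buf, count, type, source, tag, comm, &req);
      while (!done) {
          MPI_Test(&req, &done, status);
          if (!done)
              usleep(1000);   /* sleep ~1 ms instead of spinning on the core */
      }
      return MPI_SUCCESS;
  }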
Thanks again,
Brian
sent from mobile phone
On Oct 20, 2010 5:24 AM, "Richard Treumann"
Just to be clear: it isn't mpiexec that is failing. It is your MPI application
processes that are failing.
On Oct 20, 2010, at 7:38 AM, Siegmar Gross wrote:
> Hi,
>
> I have built Open MPI 1.5 on Linux x86_64 with the Oracle/Sun Studio C
> compiler. Unfortunately "mpiexec" breaks when I run a
Hi,
I tried to build Open MPI 1.5 on Linux x86_64 with the Oracle/Sun Studio C
compiler in 64-bit mode. Unfortunately "make" breaks.
linpc4 openmpi-1.5-Linux.x86_64.64_cc 107 cc -V
cc: Sun C 5.10 Linux_i386 2009/06/03
usage: cc [ options] files. Use 'cc -flags' for details
linpc4
Hi,
I have built Open MPI 1.5 on Linux x86_64 with the Oracle/Sun Studio C
compiler. Unfortunately "mpiexec" breaks when I run a small program.
linpc4 small_prog 106 cc -V
cc: Sun C 5.10 Linux_i386 2009/06/03
usage: cc [ options] files. Use 'cc -flags' for details
linpc4 small_prog 107 uname
Thanks everyone for the useful information.
Ondrej
On Fri, Oct 1, 2010 at 11:02, Brice Goglin wrote:
>
> It mostly depends on the MPI implementation. Several of them are
> switching to hwloc for binding, so you will likely have a mpiexec option
> to do so.
>
> Otherwise,
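For Open MPI specifically, binding can be requested directly on the mpirun/mpiexec command line, for example (option names as in the 1.4/1.5 series; check mpirun --help for the exact spelling on your version):

  mpirun -np 8 --bind-to-core ./a.out
  mpirun -np 8 --bysocket --bind-to-socket ./a.out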
Brian
Most HPC applications are run with one processor and one working thread
per MPI process. In this case, the node is not being used for other work
so if the MPI process does release a processor, there is nothing else
important for it to do anyway.
In these applications, the blocking MPI
> Thanks for the report. Someone reported pretty much the same issue to
> me off-list a few days ago for RHEL5.
>
> It looks like RHEL5 / 6 ship with Autoconf 2.63, and have a
> /usr/lib/rpm/macros that defines %configure to include options such as
> --program-suffix. We bootstrapped Open MPI v1.5 with
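To see exactly what the distribution's %configure macro injects on a given box, the expansion can be printed with:

  rpm --eval '%configure'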