Re: [OMPI users] Strange errors when running mpirun

2016-09-30 Thread Justin Chang
Thank you, using the default $TMPDIR works now. On Fri, Sep 30, 2016 at 7:32 AM, Gilles Gouaillardet <gilles.gouaillar...@gmail.com> wrote: > Justin and all, > > the root cause is indeed a bug I fixed in > https://github.com/open-mpi/ompi/pull/2135 > I also had this patch applied to Homebrew, …
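A minimal shell sketch of the workaround described above (the program name is a placeholder; on macOS, Homebrew environments typically inherit a long per-user $TMPDIR, which is what tripped the bug fixed in the PR):

    # Fall back to the system default temporary directory before launching;
    # the long per-user $TMPDIR is what triggered the bug.
    unset TMPDIR
    mpirun -np 4 ./my_mpi_program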

[OMPI users] How to parallelize the algorithm...

2016-09-30 Thread Barış Keçeci via users
Hello everyone, I'm trying to investigate the parallelization of an algorithm with Open MPI on a distributed network of computers. The network has one master PC and 24 compute-node PCs. I'm quite a newbie in this field; however, I have managed to install Open MPI and to compile and run …
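A minimal master/worker sketch in C for such a one-master, 24-worker layout (the "work" here, squaring an integer, is a placeholder assumption; the post does not say what the algorithm actually computes):

    /* Minimal master/worker sketch: rank 0 is the master PC, ranks
     * 1..N-1 are the compute-node PCs. The task payload is illustrative. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                       /* master */
            for (int w = 1; w < size; w++)
                MPI_Send(&w, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
            for (int w = 1; w < size; w++) {
                int result;
                MPI_Recv(&result, 1, MPI_INT, w, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("worker %d returned %d\n", w, result);
            }
        } else {                               /* compute nodes */
            int task, result;
            MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            result = task * task;              /* placeholder work */
            MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

With one master and 24 nodes this would be launched with 25 ranks, e.g. mpirun -np 25 with a suitable host file.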

Re: [OMPI users] openmpi 2.1 large messages

2016-09-30 Thread Marlborough, Rick
Gilles; It works now. Thanks for pointing that out! Rick From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Gilles Gouaillardet Sent: Friday, September 30, 2016 8:55 AM To: Open MPI Users Subject: Re: [OMPI users] openmpi 2.1 large messages …

Re: [OMPI users] openmpi 2.1 large messages

2016-09-30 Thread Gilles Gouaillardet
Rick, You must use the same value for root on all the tasks of the communicator, so the 4th parameter of MPI_Bcast should be hard-coded 0 instead of rank. FWIW, with this test program, if you MPI_Bcast a "small" message, then all your tasks send a message (that is never received) in eager mode, so …
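A minimal sketch of the fix Gilles describes (the buffer contents and message size are illustrative): every rank must pass the same root value, so it is hard-coded to 0 rather than each task's own rank.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank;
        const int count = 1 << 20;      /* illustrative "large" message */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double *buf = malloc(count * sizeof(double));
        if (rank == 0)
            for (int i = 0; i < count; i++) buf[i] = (double)i;

        /* Correct: every rank names the same root (0). The buggy
         * version passed `rank` as the 4th argument, so each task
         * tried to act as its own root and the collective never
         * matched. */
        MPI_Bcast(buf, count, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        if (rank != 0)
            printf("rank %d received buf[0] = %f\n", rank, buf[0]);

        free(buf);
        MPI_Finalize();
        return 0;
    }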

Re: [OMPI users] openmpi 2.1 large messages

2016-09-30 Thread Marlborough, Rick
Gilles; Thanks for your response. The network setup I have here is 20 computers connected over a 1 Gb Ethernet LAN. The computers are Nehalems with 8 cores each. These are 64-bit machines. Not a high-performance setup, but this is simply a research bed. I am using a host file …
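A hypothetical host file matching that description (the node names and program name are assumptions; slots=8 matches the 8 cores per machine), with a corresponding launch command:

    # hosts -- one line per machine, 20 machines with 8 cores each
    node01 slots=8
    node02 slots=8
    # ... and so on through node20

    mpirun --hostfile hosts -np 160 ./bcast_test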

Re: [OMPI users] Strange errors when running mpirun

2016-09-30 Thread Gilles Gouaillardet
Justin and all, the root cause is indeed a bug I fixed in https://github.com/open-mpi/ompi/pull/2135. I also had this patch applied to Homebrew, so if you reinstall open-mpi, you should be fine. Cheers, Gilles. For those who want to know more: Open MPI uses two Unix sockets, one by oob/usock …
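For background, Unix-domain socket paths must fit in sockaddr_un.sun_path (roughly 104 bytes on macOS, 108 on Linux), which is why a long per-user $TMPDIR can break sockets created under it. An illustrative length check follows; this is not Open MPI code, and the "openmpi-sessions/usock" name is hypothetical:

    /* Illustrative only: checks whether a TMPDIR-based socket path
     * would fit in sockaddr_un.sun_path. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/un.h>

    int main(void)
    {
        struct sockaddr_un sa;
        const char *tmpdir = getenv("TMPDIR");
        if (tmpdir == NULL)
            tmpdir = "/tmp";

        char path[4096];
        snprintf(path, sizeof(path), "%s/openmpi-sessions/usock", tmpdir);

        if (strlen(path) >= sizeof(sa.sun_path))
            printf("too long for sun_path (%zu >= %zu): %s\n",
                   strlen(path), sizeof(sa.sun_path), path);
        else
            printf("fits in sun_path (%zu < %zu): %s\n",
                   strlen(path), sizeof(sa.sun_path), path);
        return 0;
    }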