Re: [OMPI users] problems with mpiJava in openmpi-1.9a1r27362

2012-09-27 Thread Siegmar Gross
Hi,

the command works without linpc4 or with -mca btl ^sctp.

  mpiexec -np 4 -host rs0,sunpc4,linpc4 environ_mpi |& more
  [sunpc4.informatik.hs-fulda.de][[6074,1],2][../

  tyr hello_1 162 mpiexec -np 4 -host rs0,sunpc4 environ_mpi |& more
  Now 3 slave tasks are sending their environment.
  tyr
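(The environ_mpi test program itself is not part of this thread. Purely as an illustration, a minimal hypothetical C sketch of a program along those lines, in which each slave task sends its environment to the master for printing, could look like the following; the structure and all names are assumptions, not Siegmar's actual code.)

  /* Hypothetical sketch of a test like the environ_mpi program mentioned
   * above (the real source is not shown in this thread): every slave task
   * sends its environment to the master, which prints what it received. */
  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  extern char **environ;

  int main(int argc, char *argv[])
  {
      int rank, size;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      if (rank != 0) {
          /* Flatten the environment into one newline-separated buffer. */
          size_t len = 0;
          for (char **e = environ; *e != NULL; ++e)
              len += strlen(*e) + 1;
          char *buf = malloc(len + 1);
          buf[0] = '\0';
          for (char **e = environ; *e != NULL; ++e) {
              strcat(buf, *e);
              strcat(buf, "\n");
          }
          MPI_Send(buf, (int)strlen(buf) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
          free(buf);
      } else {
          printf("Now %d slave tasks are sending their environment.\n", size - 1);
          for (int i = 1; i < size; ++i) {
              MPI_Status st;
              int count;
              MPI_Probe(MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &st);
              MPI_Get_count(&st, MPI_CHAR, &count);
              char *buf = malloc(count);
              MPI_Recv(buf, count, MPI_CHAR, st.MPI_SOURCE, 0,
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);
              printf("Environment of task %d:\n%s\n", st.MPI_SOURCE, buf);
              free(buf);
          }
      }

      MPI_Finalize();
      return 0;
  }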

[OMPI users] fortran bindings for MPI_Op_commutative

2012-09-27 Thread Ake Sandgren
Hi!

Building 1.6.1 and 1.6.2, I seem to be missing the actual Fortran bindings for MPI_Op_commutative and a bunch of other functions. My configure is:

  ./configure --enable-orterun-prefix-by-default --enable-cxx-exceptions

When looking in libmpi_f77.so there is no mpi_op_commutative_ defined.
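(For context: MPI_Op_commutative is the routine, added in MPI-2.2, whose Fortran wrapper mpi_op_commutative_ is reported missing here. A minimal C sketch of its intended use, illustrative only and not taken from the thread:)

  /* MPI_Op_commutative queries whether a user-defined reduction operation
   * was created as commutative with MPI_Op_create. */
  #include <mpi.h>
  #include <stdio.h>

  /* Trivial user-defined reduction: element-wise maximum of ints. */
  static void my_max(void *in, void *inout, int *len, MPI_Datatype *dtype)
  {
      int *a = (int *)in, *b = (int *)inout;
      for (int i = 0; i < *len; ++i)
          if (a[i] > b[i]) b[i] = a[i];
  }

  int main(int argc, char *argv[])
  {
      MPI_Op op;
      int commute = 0;

      MPI_Init(&argc, &argv);
      MPI_Op_create(my_max, 1 /* commutative */, &op);
      MPI_Op_commutative(op, &commute);   /* expected to report 1 */
      printf("operation is commutative: %d\n", commute);
      MPI_Op_free(&op);
      MPI_Finalize();
      return 0;
  }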

Re: [OMPI users] fortran bindings for MPI_Op_commutative

2012-09-27 Thread Ake Sandgren
On Thu, 2012-09-27 at 16:31 +0200, Ake Sandgren wrote:
> Hi!
>
> Building 1.6.1 and 1.6.2, I seem to be missing the actual Fortran
> bindings for MPI_Op_commutative and a bunch of other functions.
>
> My configure is
> ./configure --enable-orterun-prefix-by-default --enable-cxx-exceptions
>

Re: [OMPI users] problems with mpiJava in openmpi-1.9a1r27362

2012-09-27 Thread Ralph Castain
On Wed, Sep 26, 2012 at 10:58 PM, Siegmar Gross <siegmar.gr...@informatik.hs-fulda.de> wrote:

> Hi,
>
> the command works without linpc4 or with -mca btl ^sctp.

Excellent!

> mpiexec -np 4 -host rs0,sunpc4,linpc4 environ_mpi |& more
>
> [sunpc4.informatik.hs-fulda.de][[6074,1],2][../
>

Re: [OMPI users] fortran bindings for MPI_Op_commutative

2012-09-27 Thread Ralph Castain
Ouch! Thanks - I'll fix that and check for any other missing entries (Jeff is on a plane back from Europe today). Don't know when Jeff will want to roll a replacement 1.6.3 release, but he can address that when he returns to the airwaves. On Thu, Sep 27, 2012 at 7:45 AM, Ake Sandgren

[OMPI users] About MPI_TAG_UB

2012-09-27 Thread Sébastien Boisvert
Hello,

I am running Ray (distributed genomics software) with Open-MPI on 2048 processes and everything runs fine. Ray has an any-to-any communication pattern. To avoid using too much memory, I implemented a virtual message router. Without the virtual message router, I get messages like
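(For reference, MPI_TAG_UB is the predefined communicator attribute giving the largest message tag the implementation allows; the MPI standard only requires it to be at least 32767. A minimal C sketch of how it is queried, illustrative only and not taken from the thread:)

  /* Query MPI_TAG_UB, the attribute that bounds the tag values an
   * application such as Ray may put on its messages. */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[])
  {
      void *attr_val = NULL;
      int flag = 0;

      MPI_Init(&argc, &argv);
      MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &attr_val, &flag);
      if (flag)
          printf("maximum allowed tag value: %d\n", *(int *)attr_val);
      else
          printf("MPI_TAG_UB attribute not set\n");
      MPI_Finalize();
      return 0;
  }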