We have found that virtually all Rmpi jobs need to be started with
$ mpirun -np 1 R CMD BATCH
This is, as I understand it, because the first R process initializes the
MPI environment, and then when you create the cluster it needs to be
able to start the rest of the processes.
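Concretely, the launch line looks like the sketch below; `myscript.R` is a placeholder name, not a file from this thread:

```shell
# Sketch of the launch, assuming a script named myscript.R (placeholder).
# Only one R process is started; Rmpi later spawns the worker ranks
# itself over MPI, so -np stays at 1.
LAUNCH_CMD="mpirun -np 1 R CMD BATCH myscript.R myscript.Rout"
echo "$LAUNCH_CMD"
# On a machine with Open MPI and Rmpi installed you would then run:
# $LAUNCH_CMD
```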
Note this is just a workaround: it simply disables the mxm MTL (i.e.
the Mellanox-optimized InfiniBand driver).
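For reference, a sketch of how one could keep mxm from being selected, assuming the usual `OMPI_MCA_*` environment-variable naming (the `^` prefix excludes a component):

```shell
# Exclude the mxm MTL so Open MPI never selects it ('^' negates).
export OMPI_MCA_mtl=^mxm
# Equivalently, force the ob1 PML: mxm is only reachable through the
# cm PML, so selecting ob1 side-steps it entirely.
export OMPI_MCA_pml=ob1
echo "mtl=$OMPI_MCA_mtl pml=$OMPI_MCA_pml"
```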
Basically, there are two ways to run a single-task MPI program (a.out):
- mpirun -np 1 ./a.out (this is the "standard" way)
- ./a.out (aka singleton mode)
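One way to tell the two modes apart from inside a job; this is an assumption based on Open MPI exporting `OMPI_*` variables into the environment of processes it launches, which a singleton start does not get:

```shell
# Under mpirun, Open MPI exports OMPI_COMM_WORLD_SIZE (among others)
# into each rank's environment; a singleton start leaves it unset.
if [ -n "${OMPI_COMM_WORLD_SIZE:-}" ]; then
  MODE="mpirun"
else
  MODE="singleton"
fi
echo "launch mode: $MODE"
```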
The logs you posted do not
Dear Gilles,
I tried export OMPI_MCA_pml=ob1, and it worked! Thank you very much for
your brilliant suggestion.
By the way, I don't really understand what you mean by '*can you also
extract the command that launches the test?*'...
Cheers,
Pan
That could be specific to mtl/mxm
Could you
export OMPI_MCA_pml=ob1
and try again?
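Put together, the suggested workaround before relaunching the job would look like this (`myscript.R` is a placeholder, not a file from the thread):

```shell
# Force the ob1 PML for every Open MPI process started from this shell.
export OMPI_MCA_pml=ob1
echo "pml=$OMPI_MCA_pml"
# Then relaunch the Rmpi job as before:
# mpirun -np 1 R CMD BATCH myscript.R
```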
Can you also extract the command that launches the test?
I am curious whether this is via mpirun or as a singleton
Cheers,
Gilles
On Monday, July 11, 2016, pan yang wrote:
Dear OpenMPI community,
I faced this problem while installing Rmpi:
> install.packages('Rmpi', repos='http://cran.r-project.org', configure.args=c(
+ '--with-Rmpi-include=/usr/mpi/gcc/openmpi-1.8.2/include/',
+ '--with-Rmpi-libpath=/usr/mpi/gcc/openmpi-1.8.2/lib64/',
+