Re: [OMPI users] Problem when installing Rmpi package in HPC cluster

2016-07-11 Thread Bennet Fauber
We have found that virtually all Rmpi jobs need to be started with

$ mpirun -np 1 R CMD BATCH <script>.R

This is, as I understand it, because the first R process initializes the
MPI environment, and when you later create the cluster it needs to be
able to start the rest of the processes.  When you initialize the rest
of the workers, it should be with one less than the total number of
processors.  Something like

mpi.spawn.Rslaves(nslaves=mpi.universe.size()-1)

We have a very trivial example at

http://arc-ts.umich.edu/software/rmpi/
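
For reference, a minimal batch script in that spirit might look like the
following sketch (mpi.remote.exec, mpi.close.Rslaves, and mpi.quit are
standard Rmpi calls; the file name is a placeholder):

# minimal_rmpi.R -- launched as:  mpirun -np 1 R CMD BATCH minimal_rmpi.R
library(Rmpi)

# one worker per remaining slot; the launching R process holds the first
mpi.spawn.Rslaves(nslaves=mpi.universe.size()-1)

# run something on every worker and print the gathered results
print(mpi.remote.exec(paste("Worker", mpi.comm.rank(), "of", mpi.comm.size())))

# shut the workers down cleanly and exit
mpi.close.Rslaves()
mpi.quit()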

I have also found that using the profile that is included with Rmpi is
also needed.  I do this by adding two lines near the top of the R
executable script to set and export R_PROFILE, pointing it at the
Rprofile file that ships with Rmpi.  The profile should be in the Rmpi
installation directory.  In our case, that makes the first few lines of
the R startup script look like

#!/bin/sh
# Shell wrapper for R executable.
R_PROFILE=/usr/cac/rhel6/Rmpi/R-3.2.2/0.6-5/Rmpi/Rprofile
export R_PROFILE
R_HOME_DIR=/usr/cac/rhel6/R/3.2.2/lib64/R


Things get dicier if you are using doMPI on top of Rmpi rather than Rmpi itself.

Just in case that is of any help,

-- bennet




On Mon, Jul 11, 2016 at 7:52 PM, Gilles Gouaillardet  wrote:
> [quoted messages and list footers trimmed; see Gilles Gouaillardet's
> post below]


Re: [OMPI users] Problem when installing Rmpi package in HPC cluster

2016-07-11 Thread Gilles Gouaillardet
Note this is just a workaround: it simply disables the mxm MTL (i.e. the
Mellanox-optimized InfiniBand driver).



basically, there are two ways to run a single task mpi program (a.out)

- mpirun -np 1 ./a.out (this is the "standard" way)

- ./a.out (aka singleton mode)

the logs you posted do not specify how the test was launched (e.g. with or
without mpirun)



bottom line, if you hit a singleton limitation (e.g. the mxm MTL is not
working in singleton mode), then you can simply

mpirun -np <n> a.out

your Rmpi applications, and this should be just fine.


if not, you need to

export OMPI_MCA_pml=ob1

regardless of whether you are using mpirun or not.

/* and for the sake of completeness, if you are using mpirun, an
equivalent option is to


mpirun --mca pml ob1 ...

*/
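
Put together, a sketch of what that looks like for an Rmpi batch job (the
script name here is a placeholder):

# force the ob1 pml so the mxm mtl is never selected; this covers both
# launch styles
export OMPI_MCA_pml=ob1

mpirun -np 1 R CMD BATCH my_rmpi_job.R   # "standard" launch
R CMD BATCH my_rmpi_job.R                # singleton launch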


Cheers,

Gilles

On 7/12/2016 1:34 AM, pan yang wrote:

Dear Gilles,

I tried export OMPI_MCA_pml=ob1, and it worked! Thank you very much 
for your brilliant suggestion.


By the way, I don't really understand what you mean by '/can you
also extract the command that launches the test?/'...


Cheers,
Pan








Re: [OMPI users] Need libmpi_f90.a

2016-07-11 Thread Jeff Squyres (jsquyres)
On Jul 11, 2016, at 3:25 PM, Mahmood Naderan  wrote:

> # ls -l libmpi*
> -rw-r--r-- 1 root root 1029580 Jul 11 23:51 libmpi_mpifh.a
> -rw-r--r-- 1 root root   17292 Jul 11 23:51 libmpi_usempi.a

These are the two for v1.10.x.

Sorry; one thing I wasn't clear on (I had forgotten): the libraries changed 
name from v1.6.x to v1.10.x.

This is why we recommend using the wrapper compilers to compile MPI
applications, or at least using the wrapper compilers or pkg-config to
determine the correct flags to pass to the compiler/linker if you're not
using the wrappers directly.

See this FAQ item if you don't want to use the wrapper compilers:

https://www.open-mpi.org/faq/?category=mpi-apps#cant-use-wrappers

I see that we have not mentioned in there that you can use pkg-config, too; 
I'll see if we can add that shortly.  The use of pkg-config is documented here:

https://github.com/open-mpi/ompi/blob/master/README#L1744-L1771
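
For instance, instead of hard-coding library names, something along these
lines (a sketch; the install path and pkg-config module name are
assumptions that depend on how Open MPI was installed):

# ask the wrapper what flags it would use; these change between versions,
# which is exactly why hard-coding names like -lmpi_f90 breaks
mpifort --showme:compile
mpifort --showme:link

# or ask pkg-config, pointing it at the Open MPI install tree
export PKG_CONFIG_PATH=/opt/openmpi/lib/pkgconfig
pkg-config --cflags --libs ompi-fort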

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/



Re: [OMPI users] Need libmpi_f90.a

2016-07-11 Thread Mahmood Naderan
Excuse me... that command only creates libmpi_f90.a for v1.6.5.
What about v1.10.3? I don't see such a file even with --enable-static. Does
it have a different name?


# ls -l libmpi*
-rw-r--r-- 1 root root 5888466 Jul 11 23:51 libmpi.a
-rw-r--r-- 1 root root  962656 Jul 11 23:51 libmpi_cxx.a
-rwxr-xr-x 1 root root    1210 Jul 11 23:51 libmpi_cxx.la
lrwxrwxrwx 1 root root  19 Jul 11 23:51 libmpi_cxx.so ->
libmpi_cxx.so.1.1.3
lrwxrwxrwx 1 root root  19 Jul 11 23:51 libmpi_cxx.so.1 ->
libmpi_cxx.so.1.1.3
-rwxr-xr-x 1 root root  139927 Jul 11 23:51 libmpi_cxx.so.1.1.3
-rwxr-xr-x 1 root root    1139 Jul 11 23:51 libmpi.la
-rw-r--r-- 1 root root 1029580 Jul 11 23:51 libmpi_mpifh.a
-rwxr-xr-x 1 root root    1232 Jul 11 23:51 libmpi_mpifh.la
lrwxrwxrwx 1 root root  22 Jul 11 23:51 libmpi_mpifh.so ->
libmpi_mpifh.so.12.0.1
lrwxrwxrwx 1 root root  22 Jul 11 23:51 libmpi_mpifh.so.12 ->
libmpi_mpifh.so.12.0.1
-rwxr-xr-x 1 root root  584518 Jul 11 23:51 libmpi_mpifh.so.12.0.1
lrwxrwxrwx 1 root root  16 Jul 11 23:51 libmpi.so -> libmpi.so.12.0.3
lrwxrwxrwx 1 root root  16 Jul 11 23:51 libmpi.so.12 -> libmpi.so.12.0.3
-rwxr-xr-x 1 root root 2903817 Jul 11 23:51 libmpi.so.12.0.3
-rw-r--r-- 1 root root   17292 Jul 11 23:51 libmpi_usempi.a
-rwxr-xr-x 1 root root1288 Jul 11 23:51 libmpi_usempi.la
lrwxrwxrwx 1 root root  22 Jul 11 23:51 libmpi_usempi.so ->
libmpi_usempi.so.5.1.0
lrwxrwxrwx 1 root root  22 Jul 11 23:51 libmpi_usempi.so.5 ->
libmpi_usempi.so.5.1.0
-rwxr-xr-x 1 root root   11900 Jul 11 23:51 libmpi_usempi.so.5.1.0




Regards,
Mahmood



On Sun, Jul 10, 2016 at 8:39 PM, Mahmood Naderan wrote:

> >./configure --disable-shared --enable-static
>
> Thank you very much
>
> Regards,
> Mahmood
>
>


Re: [OMPI users] Problem when installing Rmpi package in HPC cluster

2016-07-11 Thread pan yang
Dear Gilles,

I tried export OMPI_MCA_pml=ob1, and it worked! Thank you very much for
your brilliant suggestion.

By the way, I don't really understand what you mean by '*can you also
extract the command that launches the test?*'...

Cheers,
Pan


Re: [OMPI users] Can OMPI 1.8.8 or later support LSF 9.1.3 or 10.1?

2016-07-11 Thread Josh Hursey
IBM will be helping to support the LSF functionality in Open MPI. We don't
have any detailed documentation just yet, other than the FAQ on the Open
MPI site. However, the LSF components in Open MPI should be functional in
the latest releases. I've tested recently with LSF 9.1.3 and 10.1.

I pushed some changes to the Open MPI 1.10.3 release (and 2.0.0
pre-release) for affinity support in MPMD configurations. That was tested
on a machine with LSF 9.1.3, using the "-R" affinity options to
bsub, i.e. the affinity specification mechanism built into LSF. It worked as
I expected it to for the few configurations I tried.

I have not tested the v1.8 series since it's older. I would
suggest trying the 1.10.3 release (and the soon-to-be-released 2.0.0) on
your system.
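
As an illustration, a submission exercising that path might look like the
following (a sketch; the executable name is a placeholder, and the -R
string uses LSF's affinity[] syntax):

# request 8 slots, binding each task to one core via LSF's affinity option
bsub -n 8 -R "affinity[core(1)]" mpirun ./a.out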


On Fri, Jul 8, 2016 at 1:20 PM, Gang Chen  wrote:

> Hi,
>
> I am wondering if there is integration testing conducted with v1.8.8 and IBM
> LSF 9.1.3 or 10.1, especially the CPU affinity parts. Is there somewhere I
> can find detailed info?
>
> Thanks,
> Gordon
>
>


Re: [OMPI users] openmpi 1.10.2 and PGI 15.9

2016-07-11 Thread Åke Sandgren
Looks like you are compiling with slurm support.

If so, you need to remove the "-pthread" from libslurm.la and libpmi.la
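
Something along these lines should do it (a sketch; the paths to the
libtool archives are assumptions, so adjust them to your slurm install):

# strip the -pthread flag, which pgcc does not understand, from the
# libtool archives that Open MPI pulls in for slurm/pmi support
sed -i 's/-pthread//g' /usr/lib64/libslurm.la /usr/lib64/libpmi.la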

On 07/11/2016 02:54 PM, Michael Di Domenico wrote:
> I'm trying to get openmpi compiled using the PGI compiler.
> 
> the configure goes through and the code starts to compile, but then
> gets hung up with
> 
> entering: openmpi-1.10.2/opal/mca/common/pmi
> CC common_pmi.lo
> CCLD libmca_common_pmi.la
> pgcc-Error-Unknown switch: - pthread
> 

-- 
Ake Sandgren, HPC2N, Umea University, S-90187 Umea, Sweden
Internet: a...@hpc2n.umu.se   Phone: +46 90 7866134 Fax: +46 90-580 14
Mobile: +46 70 7716134 WWW: http://www.hpc2n.umu.se


Re: [OMPI users] openmpi 1.10.2 and PGI 15.9

2016-07-11 Thread Michael Di Domenico
On Mon, Jul 11, 2016 at 9:11 AM, Gilles Gouaillardet wrote:
> Can you try the latest 1.10.3 instead ?

i can but it'll take a few days to pull the software inside.

> btw, do you have a license for the pgCC C++ compiler ?
> fwiw, FreePGI on OSX has no C++ license, and PGI C and GNU g++ do not work
> together out of the box; hopefully I will have a fix ready sometime this
> week

we should, but i'm not positive.  we're running PGI on linux x64, we
typically buy the full suite, but i'll double check.


Re: [OMPI users] openmpi 1.10.2 and PGI 15.9

2016-07-11 Thread Gilles Gouaillardet
Can you try the latest 1.10.3 instead ?

btw, do you have a license for the pgCC C++ compiler ?
fwiw, FreePGI on OSX has no C++ license, and PGI C and GNU g++ do not work
together out of the box; hopefully I will have a fix ready sometime this
week

Cheers,

Gilles

On Monday, July 11, 2016, Michael Di Domenico wrote:

> I'm trying to get openmpi compiled using the PGI compiler.
>
> the configure goes through and the code starts to compile, but then
> gets hung up with
>
> entering: openmpi-1.10.2/opal/mca/common/pmi
> CC common_pmi.lo
> CCLD libmca_common_pmi.la
> pgcc-Error-Unknown switch: - pthread
>


[OMPI users] openmpi 1.10.2 and PGI 15.9

2016-07-11 Thread Michael Di Domenico
I'm trying to get openmpi compiled using the PGI compiler.

the configure goes through and the code starts to compile, but then
gets hung up with

entering: openmpi-1.10.2/opal/mca/common/pmi
CC common_pmi.lo
CCLD libmca_common_pmi.la
pgcc-Error-Unknown switch: - pthread


Re: [OMPI users] Problem when installing Rmpi package in HPC cluster

2016-07-11 Thread Gilles Gouaillardet
That could be specific to mtl/mxm

could you
export OMPI_MCA_pml=ob1
and try again ?

can you also extract the command that launches the test?
I am curious whether this is via mpirun or as a singleton
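
If it helps, a quick way to check which pml and mtl components your build
ships is (a sketch using standard ompi_info output):

# list the available pml components and any mxm-related entries
ompi_info | grep -i -e pml -e mxm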

Cheers,

Gilles

On Monday, July 11, 2016, pan yang  wrote:

> [quoted install log trimmed; see the original post below]

[OMPI users] Problem when installing Rmpi package in HPC cluster

2016-07-11 Thread pan yang
Dear OpenMPI community,

I faced this problem when I am installing the Rmpi:

> install.packages('Rmpi',repos='http://cran.r-project.org',configure.args=c(
+ '--with-Rmpi-include=/usr/mpi/gcc/openmpi-1.8.2/include/',
+ '--with-Rmpi-libpath=/usr/mpi/gcc/openmpi-1.8.2/lib64/',
+ '--with-Rmpi-type=OPENMPI'))
Installing package into '/d1/pyangac/R_lilbs'
(as 'lib' is unspecified)
trying URL 'http://cran.r-project.org/src/contrib/Rmpi_0.6-6.tar.gz'
Content type 'application/x-gzip' length 105181 bytes (102 Kb)
opened URL
==
downloaded 102 Kb

* installing *source* package 'Rmpi' ...
** package 'Rmpi' successfully unpacked and MD5 sums checked
checking for openpty in -lutil... no
checking for main in -lpthread... no
configure: creating ./config.status
config.status: creating src/Makevars
** libs
gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -DPACKAGE_NAME=\"\"
-DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\"
-DPACKAGE_BUGREPORT=\"\" -DPACKAGE_URL=\"\"
-I/usr/mpi/gcc/openmpi-1.8.2/include/  -DMPI2 -DOPENMPI
-I/usr/local/include -fpic  -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2
-fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64
-mtune=generic  -c Rmpi.c -o Rmpi.o
gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -DPACKAGE_NAME=\"\"
-DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\"
-DPACKAGE_BUGREPORT=\"\" -DPACKAGE_URL=\"\"
-I/usr/mpi/gcc/openmpi-1.8.2/include/  -DMPI2 -DOPENMPI
-I/usr/local/include -fpic  -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2
-fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64
-mtune=generic  -c conversion.c -o conversion.o
gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -DPACKAGE_NAME=\"\"
-DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\"
-DPACKAGE_BUGREPORT=\"\" -DPACKAGE_URL=\"\"
-I/usr/mpi/gcc/openmpi-1.8.2/include/  -DMPI2 -DOPENMPI
-I/usr/local/include -fpic  -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2
-fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64
-mtune=generic  -c internal.c -o internal.o
gcc -m64 -std=gnu99 -shared -L/usr/local/lib64 -o Rmpi.so Rmpi.o
conversion.o internal.o -L/usr/mpi/gcc/openmpi-1.8.2/lib64/ -lmpi
-L/usr/lib64/R/lib -lR
installing to /d1/pyangac/R_lilbs/Rmpi/libs
** R
** demo
** inst
** preparing package for lazy loading
** help
*** installing help indices
  converting help for package 'Rmpi'
finding HTML links ... done
hosts                               html
internal                            html
mpi.abort                           html
mpi.apply                           html
mpi.barrier                         html
mpi.bcast                           html
mpi.bcast.Robj                      html
mpi.bcast.cmd                       html
mpi.cart.coords                     html
mpi.cart.create                     html
mpi.cart.get                        html
mpi.cart.rank                       html
mpi.cart.shift                      html
mpi.cartdim.get                     html
mpi.comm                            html
mpi.comm.disconnect                 html
mpi.comm.free                       html
mpi.comm.inter                      html
mpi.comm.set.errhandler             html
mpi.comm.spawn                      html
mpi.const                           html
mpi.dims.create                     html
mpi.exit                            html
mpi.finalize                        html
mpi.gather                          html
mpi.gather.Robj                     html
mpi.get.count                       html
mpi.get.processor.name              html
mpi.get.sourcetag                   html
mpi.iapply                          html
mpi.info                            html
mpi.intercomm.merge                 html
mpi.parSim                          html
mpi.parapply                        html
mpi.probe                           html
mpi.realloc                         html
mpi.reduce                          html
mpi.remote.exec                     html
mpi.scatter                         html
mpi.scatter.Robj                    html
mpi.send                            html
mpi.send.Robj                       html
mpi.sendrecv                        html
mpi.setup.rng                       html
mpi.spawn.Rslaves                   html
mpi.universe.size                   html
mpi.wait                            html
** building package indices
** testing if installed package can be loaded
--
Error obtaining unique transport key from ORTE
(orte_precondition_transports not present in
the environme