[OMPI users] segmentation fault to use openMPI

2017-10-11 Thread RUI ZHANG
Hello everyone, I am trying to debug the MPI functionality on our local clusters. I use Open MPI 3.0, and the executable was compiled with PGI 10.9. The executable is a regional air quality model called "CAMx", which is widely used in our community. In our local cluster setup, I have a
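A common first step for a segfault like this is to capture a core file and inspect the faulting stack. The sketch below is a hedged illustration only; the process count and paths are placeholders, not details from the original post.

```shell
# Allow core dumps, reproduce the crash, then inspect it with gdb.
ulimit -c unlimited
mpirun -np 16 ./CAMx        # placeholder rank count; reproduces the segfault
gdb ./CAMx core             # then 'bt' prints the faulting stack frame
```

With a PGI-built binary, compiling with -g first makes the backtrace far more readable.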

Re: [OMPI users] alltoallv

2017-10-11 Thread Peter Kjellström
On Tue, 10 Oct 2017 11:57:51 -0400 Michael Di Domenico wrote: > i'm getting stuck trying to run some fairly large IMB-MPI alltoall > tests under openmpi 2.0.2 on rhel 7.4 Which IB stack is in use — just the RHEL inbox one? Do you run Open MPI on the psm MTL for QLogic, and openib
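One way to answer the transport question is to force each path explicitly and see which one misbehaves. This is a hedged sketch: the rank count and benchmark path are placeholders, but the MCA parameters (pml cm/ob1, mtl psm, btl openib) are standard Open MPI 2.x selectors.

```shell
# Force the PSM MTL (QLogic HCAs) via the cm PML:
mpirun --mca pml cm  --mca mtl psm                -np 128 ./IMB-MPI1 alltoall
# Or force the openib BTL via the ob1 PML:
mpirun --mca pml ob1 --mca btl openib,self,vader -np 128 ./IMB-MPI1 alltoall
```

If one invocation hangs and the other completes, that narrows the problem to a specific transport layer rather than the alltoall algorithm itself.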

Re: [OMPI users] openmpi mgmt traffic

2017-10-11 Thread Gilles Gouaillardet
mpirun --mca oob_tcp_if_include ib0 --mca oob tcp ...
Note that if your cluster is large and the ARP tables are not fully populated, IPoIB might not be the best idea. Cheers, Gilles
Michael Di Domenico wrote: > my cluster nodes are connected on 1g ethernet eth0/eth1 and

[OMPI users] openmpi mgmt traffic

2017-10-11 Thread Michael Di Domenico
My cluster nodes are connected via 1G Ethernet (eth0/eth1) and via InfiniBand (RDMA, plus ib0). My understanding is that Open MPI will detect all of these interfaces, using eth0/eth1 for connection setup and RDMA for message passing. What would be the appropriate command-line parameters to tell Open MPI to
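One possible combination for the setup described above, as a hedged sketch only (interface names follow the post; the rank count and binary are placeholders): pin out-of-band management traffic to Ethernet while MPI messages go over the InfiniBand BTL.

```shell
# OOB (wire-up/management) traffic restricted to eth0; MPI payload on openib.
mpirun --mca oob_tcp_if_include eth0 \
       --mca btl openib,self,vader \
       -np 16 ./a.out
```

The self and vader BTLs are included so that loopback and on-node shared-memory traffic still have a transport.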

[OMPI users] CentOS-7/openmpi-3.0.0/cuda-9.0.176_384.81: libopen-pal.so: undefined reference to `nvmlDeviceGetPciInfo_v3

2017-10-11 Thread Tru Huynh
Hi, I have successfully built openmpi-3.0.0 from source with CUDA 7.5.18 on CentOS-7 x86_64 (default system GNU compilers). I am trying to build openmpi-3.0.0 with CUDA 9 on CentOS-7, and it fails with this error: make[2]: Leaving directory
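For reference, a CUDA-aware Open MPI build is typically configured as below. This is a hedged sketch: the CUDA install path and prefix are assumptions based on the versions mentioned in the post, not taken from it.

```shell
# Point configure at the CUDA 9 toolkit root (path is an assumption).
./configure --with-cuda=/usr/local/cuda-9.0 --prefix=$HOME/openmpi-3.0.0
make -j 8 && make install
```

An undefined NVML symbol at link time often indicates a mismatch between the CUDA toolkit headers and the installed driver libraries, so checking which libnvidia-ml.so the linker resolves is a reasonable next step.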