Re: [OMPI users] Memory usage for MPI program

2015-06-17 Thread Ralph Castain
How is the file read? From stdin? Or do they open it directly? If the latter, then it sure sounds like a CGNS issue to me: it looks like they are slurping in the entire file and then forgetting to free the memory when they close it. I can’t think of any solution short of getting them to look at the

Re: [OMPI users] CUDA-aware MPI_Reduce problem in Openmpi 1.8.5

2015-06-17 Thread Fei Mao
Thanks! On Jun 17, 2015, at 3:08 PM, Rolf vandeVaart wrote: > There is no short-term plan but we are always looking at ways to improve > things so this could be looked at some time in the future. > > Rolf > > From: users [mailto:users-boun...@open-mpi.org] On Behalf

Re: [OMPI users] CUDA-aware MPI_Reduce problem in Openmpi 1.8.5

2015-06-17 Thread Rolf vandeVaart
There is no short-term plan but we are always looking at ways to improve things so this could be looked at some time in the future. Rolf From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Fei Mao Sent: Wednesday, June 17, 2015 1:48 PM To: Open MPI Users Subject: Re: [OMPI users]

Re: [OMPI users] No suitable active ports warning and -mca btl_openib_if_include option

2015-06-17 Thread Mike Dubman
Hi, the message in question belongs to MXM and it is a warning (silenced in later releases of MXM). To select a specific device in MXM, please pass: mpirun -x MXM_IB_PORTS=mlx4_0:2 ... M On Wed, Jun 17, 2015 at 9:38 PM, Na Zhang wrote: > Hi all, > > I am trying to
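A full command line using this variable might look like the following; the rank count and application name are placeholders rather than values from the thread (Open MPI's -x flag exports the environment variable to all launched ranks):

  mpirun -x MXM_IB_PORTS=mlx4_0:2 -np 16 ./your_app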

[OMPI users] No suitable active ports warning and -mca btl_openib_if_include option

2015-06-17 Thread Na Zhang
Hi all, I am trying to launch MPI jobs (with version openmpi-1.6.5) on a node with multiple InfiniBand HCA cards (please see ibstat info below). I just want to use the only active port: mlx4_0 port 2. Thus I issued mpirun -mca btl_openib_if_include "mlx4_0:2" -np... I thought this command would

Re: [OMPI users] CUDA-aware MPI_Reduce problem in Openmpi 1.8.5

2015-06-17 Thread Fei Mao
Hi Rolf, Thank you very much for clarifying the problem. Is there any plan to support GPU RDMA for reduction in the future? On Jun 17, 2015, at 1:38 PM, Rolf vandeVaart wrote: > Hi Fei: > > The reduction support for CUDA-aware in Open MPI is rather simple. The GPU
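Pending such support, one baseline an application can benchmark against is a manually host-staged reduction: copy the device buffer to the host, reduce there, and copy the result back on the root. This is only an illustrative sketch for comparison, not a description of Open MPI's internal reduction path (Rolf's explanation is truncated above); the buffer size and reduction operation are arbitrary.

  #include <mpi.h>
  #include <cuda_runtime.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      const int n = 1 << 20;
      const size_t bytes = n * sizeof(double);

      double *d_buf;
      cudaMalloc((void **)&d_buf, bytes);
      cudaMemset(d_buf, 0, bytes);

      /* Stage device data through host memory, reduce on the host,
         then copy the result back to the device on the root rank. */
      double *h_send = malloc(bytes);
      double *h_recv = malloc(bytes);
      cudaMemcpy(h_send, d_buf, bytes, cudaMemcpyDeviceToHost);
      MPI_Reduce(h_send, h_recv, n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
      if (rank == 0)
          cudaMemcpy(d_buf, h_recv, bytes, cudaMemcpyHostToDevice);

      free(h_send);
      free(h_recv);
      cudaFree(d_buf);
      MPI_Finalize();
      return 0;
  }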

[OMPI users] CUDA-aware MPI_Reduce problem in Openmpi 1.8.5

2015-06-17 Thread Fei Mao
Hi there, I am doing benchmarks on a GPU cluster with two CPU sockets and 4 K80 GPUs per node. Two K80s are connected to CPU socket 0 and the other two to socket 1. An IB ConnectX-3 (FDR) card is also under socket 1. We are using Linux’s OFED, so I know there is no way to do GPU RDMA inter-node
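For readers reproducing this, the call pattern under test presumably looks like the minimal sketch below: with a CUDA-aware Open MPI build, cudaMalloc'd device pointers are passed directly to MPI_Reduce. The element count and reduction operation are illustrative and not taken from the actual benchmark.

  #include <mpi.h>
  #include <cuda_runtime.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      const int n = 1 << 20;   /* illustrative element count */
      double *d_send, *d_recv;
      cudaMalloc((void **)&d_send, n * sizeof(double));
      cudaMalloc((void **)&d_recv, n * sizeof(double));
      cudaMemset(d_send, 0, n * sizeof(double));

      /* With a CUDA-aware build, device pointers go straight into MPI_Reduce. */
      MPI_Reduce(d_send, d_recv, n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

      cudaFree(d_send);
      cudaFree(d_recv);
      MPI_Finalize();
      return 0;
  }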

Re: [OMPI users] Memory usage for MPI program

2015-06-17 Thread Manoj Vaghela
Hi, While checking for memory issues related to CGNS 2.5.5 on a test machine, the following output is displayed when just opening and closing a CGNS file. Can anybody please help me with this? This machine runs CentOS 7 (minimal installation) with GCC 4.8.3, which is the compiler used. The
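The open/close test being described is presumably along the lines of the sketch below. This is a guess at the test, not the poster's code: the file name is a placeholder, and the mode constant names can vary slightly between CGNS 2.5 and later releases.

  #include <stdio.h>
  #include <cgnslib.h>

  int main(void)
  {
      int fn;  /* CGNS file index */

      /* Open an existing CGNS file read-only, then close it; resident
         memory can be compared before cg_open() and after cg_close(). */
      if (cg_open("grid.cgns", CG_MODE_READ, &fn))
          cg_error_exit();

      if (cg_close(fn))
          cg_error_exit();

      printf("open/close completed\n");
      return 0;
  }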