Re: [OMPI users] CUDA-aware MPI_Reduce problem in Openmpi 1.8.5

2015-06-17 Thread Fei Mao
Hi Rolf, Thank you very much for clarifying the problem. Is there any plan to support GP

Re: [OMPI users] CUDA-aware MPI_Reduce problem in Openmpi 1.8.5

2015-06-17 Thread Rolf vandeVaart
…CUDA IPC or GPU Direct RDMA in the reduction. Rolf
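
Rolf's point here is that Open MPI's CUDA-aware path does not use CUDA IPC or GPU Direct RDMA for reduction operations: device buffers handed to MPI_Reduce are staged through host memory, since the reduction arithmetic itself runs on the CPU. The sketch below (buffer names, datatype, and count are illustrative, not taken from the thread) shows what that staging amounts to if done by hand; it is a rough picture of the cost, not the library's actual implementation.

/* Minimal sketch (hypothetical buffer names and sizes): reducing device
 * buffers by staging through the host, which is roughly what a CUDA-aware
 * library has to do when the reduction arithmetic is performed on the CPU. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

void reduce_device_via_host(const double *d_send, double *d_recv,
                            int count, MPI_Comm comm)
{
    double *h_send = malloc(count * sizeof(double));
    double *h_recv = malloc(count * sizeof(double));
    int rank;
    MPI_Comm_rank(comm, &rank);

    /* Stage the GPU data into host memory. */
    cudaMemcpy(h_send, d_send, count * sizeof(double), cudaMemcpyDeviceToHost);

    /* CPU-side reduction on the host copies. */
    MPI_Reduce(h_send, h_recv, count, MPI_DOUBLE, MPI_SUM, 0, comm);

    /* Only the root holds the result; copy it back to its GPU. */
    if (rank == 0)
        cudaMemcpy(d_recv, h_recv, count * sizeof(double), cudaMemcpyHostToDevice);

    free(h_send);
    free(h_recv);
}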

Re: [OMPI users] CUDA-aware MPI_Reduce problem in Openmpi 1.8.5

2015-06-17 Thread Fei Mao
> Rolf > > Hi there, > > I am doing benchmarks

[OMPI users] CUDA-aware MPI_Reduce problem in Openmpi 1.8.5

2015-06-17 Thread Fei Mao
Hi there, I am doing benchmarks on a GPU cluster with two CPU sockets and four K80 GPUs per node. Two K80s are connected to CPU socket 0, the other two to socket 1. An IB ConnectX-3 (FDR) HCA is also under socket 1. We are using Linux's OFED, so I know there is no way to do GPU RDMA inter-node
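
For reference, the kind of benchmark described above boils down to passing cudaMalloc'd pointers straight to MPI_Reduce, which a CUDA-aware Open MPI build accepts. This is only a sketch: the message size, GPU-to-rank mapping, and timing granularity are placeholders, not the poster's actual benchmark code.

/* CUDA-aware MPI_Reduce benchmark sketch (hypothetical size and mapping):
 * device pointers are passed directly to MPI_Reduce. Assumes at least one
 * visible GPU per node. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Naive GPU selection by global rank; a real run would map by
     * node-local rank and socket affinity. */
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    cudaSetDevice(rank % ndev);

    const int count = 1 << 20;                /* 1M doubles, 8 MB */
    double *d_send = NULL, *d_recv = NULL;
    cudaMalloc((void **)&d_send, count * sizeof(double));
    cudaMalloc((void **)&d_recv, count * sizeof(double));
    cudaMemset(d_send, 0, count * sizeof(double));

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    MPI_Reduce(d_send, d_recv, count, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("MPI_Reduce on device buffers: %f s\n", t1 - t0);

    cudaFree(d_send);
    cudaFree(d_recv);
    MPI_Finalize();
    return 0;
}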