Re: [OMPI users] Can displs in Scatterv/Gatherv/etc be a GPU array for CUDA-aware MPI?

2019-06-11 Thread Fang, Leo via users
Dear George,

Thank you very much for your quick and clear explanation. I will take your words as performance guidance :)

Sincerely,
Leo

---
Yao-Lung Leo Fang
Assistant Computational Scientist
Computational Science Initiative
Brookhaven National Laboratory
Bldg. 725, Room 2-169
P.O. Box 5000, …

Re: [OMPI users] Can displs in Scatterv/Gatherv/etc be a GPU array for CUDA-aware MPI?

2019-06-11 Thread George Bosilca via users
Leo,

In a UMA system, having the displacement and/or recvcounts arrays in managed GPU memory should work, but it will incur overhead for at least two reasons:

1. the MPI API arguments are checked for correctness (here recvcounts), and
2. the collective algorithm part that executes on the CPU uses the …
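For concreteness, below is a minimal sketch (not from the thread) of the pattern this advice points to: the data buffers live in device memory, while recvcounts and displs stay in plain host memory so the CPU-side argument checks and algorithm code never touch GPU pages. The use of MPI_Gatherv, the buffer sizes, and the variable names are all illustrative assumptions.

#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 4;  /* elements contributed per rank (illustrative) */

    /* Data buffers may live on the GPU with a CUDA-aware build. */
    double *d_send, *d_recv = NULL;
    cudaMalloc((void **)&d_send, n * sizeof(double));
    cudaMemset(d_send, 0, n * sizeof(double));

    /* counts/displs stay on the host, where the CPU parts of the
     * collective read them without pulling pages off the device. */
    int *recvcounts = NULL, *displs = NULL;
    if (rank == 0) {
        cudaMalloc((void **)&d_recv, (size_t)size * n * sizeof(double));
        recvcounts = malloc(size * sizeof(int));
        displs     = malloc(size * sizeof(int));
        for (int i = 0; i < size; ++i) {
            recvcounts[i] = n;      /* every rank sends n elements   */
            displs[i]     = i * n;  /* contiguous placement at root  */
        }
    }

    MPI_Gatherv(d_send, n, MPI_DOUBLE,
                d_recv, recvcounts, displs, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    cudaFree(d_send);
    if (rank == 0) {
        cudaFree(d_recv);
        free(recvcounts);
        free(displs);
    }
    MPI_Finalize();
    return 0;
}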

[OMPI users] Can displs in Scatterv/Gatherv/etc be a GPU array for CUDA-aware MPI?

2019-06-11 Thread Fang, Leo via users
Hello,

I understand that once Open MPI is built against CUDA, sendbuf/recvbuf can be pointers to GPU memory. I wonder whether the "displs" argument of the collective calls on variable data (Scatterv/Gatherv/etc.) can also live on GPU memory. CUDA awareness isn't part of the MPI standard (yet), …
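For readers skimming the archive, here is a hypothetical sketch of the usage being asked about: recvcounts and displs placed in CUDA managed (unified) memory rather than ordinary host memory. Per George's reply above, this should work on a UMA system with a CUDA-aware build, but the CPU-side checks and algorithm code will fault those pages back to the host, so plain host arrays are the faster choice. All sizes and names here are illustrative.

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 4;  /* elements contributed per rank (illustrative) */

    double *d_send, *d_recv = NULL;
    cudaMalloc((void **)&d_send, n * sizeof(double));
    cudaMemset(d_send, 0, n * sizeof(double));

    int *recvcounts = NULL, *displs = NULL;
    if (rank == 0) {
        cudaMalloc((void **)&d_recv, (size_t)size * n * sizeof(double));
        /* Managed memory: accessible from both host and device, which
         * is what makes passing it as displs/recvcounts possible. */
        cudaMallocManaged((void **)&recvcounts, size * sizeof(int),
                          cudaMemAttachGlobal);
        cudaMallocManaged((void **)&displs, size * sizeof(int),
                          cudaMemAttachGlobal);
        for (int i = 0; i < size; ++i) {
            recvcounts[i] = n;
            displs[i]     = i * n;
        }
    }

    MPI_Gatherv(d_send, n, MPI_DOUBLE,
                d_recv, recvcounts, displs, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    cudaFree(d_send);
    if (rank == 0) {
        cudaFree(d_recv);
        cudaFree(recvcounts);  /* cudaFree also releases managed memory */
        cudaFree(displs);
    }
    MPI_Finalize();
    return 0;
}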