Re: [OMPI users] Can displs in Scatterv/Gatherv/etc be a GPU array for CUDA-aware MPI?

2019-06-11 Thread Fang, Leo via users
Dear George,


Thank you very much for your quick and clear explanation. I will take your 
words as performance guidance :)


Sincerely,
Leo

---
Yao-Lung Leo Fang
Assistant Computational Scientist
Computational Science Initiative
Brookhaven National Laboratory
Bldg. 725, Room 2-169
P.O. Box 5000, Upton, NY 11973-5000
Office: (631) 344-3265
Email: leof...@bnl.gov
Website: https://leofang.github.io/

George Bosilca <bosi...@icl.utk.edu> wrote on 2019-06-11 at 11:49 AM:

Leo,

In a UMA system, having the displacement and/or recvcounts arrays in managed GPU 
memory should work, but it will incur overhead for at least two reasons:
1. the MPI API arguments are checked for correctness (here, recvcounts);
2. the part of the collective algorithm that executes on the CPU uses the 
displacements and recvcounts to issue and manage communications, and it 
therefore needs access to both.

Moreover, as you mention, your code will no longer be portable.

  George.


On Tue, Jun 11, 2019 at 11:27 AM Fang, Leo via users 
<users@lists.open-mpi.org> wrote:
Hello,


I understand that once Open MPI is built against CUDA, sendbuf/recvbuf can be 
pointers to GPU memory. I wonder whether or not the "displs" argument of the 
collective calls on variable data (Scatterv/Gatherv/etc) can also live on the GPU. 
CUDA awareness isn't part of the MPI standard (yet), so I suppose it's worth 
asking or even documenting.

Thank you.


Sincerely,
Leo

---
Yao-Lung Leo Fang
Assistant Computational Scientist
Computational Science Initiative
Brookhaven National Laboratory
Bldg. 725, Room 2-169
P.O. Box 5000, Upton, NY 11973-5000
Office: (631) 344-3265
Email: leof...@bnl.gov
Website: https://leofang.github.io/

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users


Re: [OMPI users] Can displs in Scatterv/Gatherv/etc be a GPU array for CUDA-aware MPI?

2019-06-11 Thread George Bosilca via users
Leo,

In a UMA system, having the displacement and/or recvcounts arrays in managed
GPU memory should work, but it will incur overhead for at least two reasons:
1. the MPI API arguments are checked for correctness (here, recvcounts);
2. the part of the collective algorithm that executes on the CPU uses the
displacements and recvcounts to issue and manage communications, and it
therefore needs access to both.

Moreover, as you mention, your code will no longer be portable.

  George.


On Tue, Jun 11, 2019 at 11:27 AM Fang, Leo via users <
users@lists.open-mpi.org> wrote:

> Hello,
>
>
> I understand that once Open MPI is built against CUDA, sendbuf/recvbuf can
> be pointers to GPU memory. I wonder whether or not the "displs" argument of
> the collective calls on variable data (Scatterv/Gatherv/etc) can also live
> on the GPU. CUDA awareness isn't part of the MPI standard (yet), so I suppose
> it's worth asking or even documenting.
>
> Thank you.
>
>
> Sincerely,
> Leo
>
> ---
> Yao-Lung Leo Fang
> Assistant Computational Scientist
> Computational Science Initiative
> Brookhaven National Laboratory
> Bldg. 725, Room 2-169
> P.O. Box 5000, Upton, NY 11973-5000
> Office: (631) 344-3265
> Email: leof...@bnl.gov
> Website: https://leofang.github.io/
>