MPI-3 shared memory gives you direct load/store access, meaning potentially
zero copies if, e.g., you just read shared state in place.

Optimizing intranode MPI communication just reduces the number of copies.
Since MPI communication semantics require at least one copy, one copy is
the best you can do in RMA. In Send-Recv, I'd guess you can get down to one
copy with a CMA (cross-memory attach) implementation; otherwise it's
probably two copies (into and back out of a shared-memory buffer).

So there is definitely an advantage to MPI shared memory.

Jeff

On Monday, April 11, 2016, Tom Rosmond <rosm...@reachone.com> wrote:

> Hello,
>
> I have been looking into the MPI-3 extensions that added ways to do direct
> memory copying on multi-core 'nodes' that share memory. Architectures
> constructed from these nodes are universal now, so improved ways to exploit
> them are certainly needed.  However, it is my understanding that Open-MPI
> and other widely used MPI implementations, e.g. Intel, MPICH, use hardware
> locality to identify shared memory regions and do direct memory copies
> between processes in these cases, rather than network communication.  If
> this is the case, are there any additional advantages from explicit
> programming of memory copying using MPI-3 shared memory features?
>
> T. Rosmond
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/04/28915.php
>


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
