> On Feb 9, 2016, at 3:54 PM, Hefty, Sean <sean.he...@intel.com> wrote:
> 
> I want to provide an intra-node communication (i.e. loopback) utility to 
> libfabric.  The loopback utility could be part of a stand-alone provider, or 
> incorporated into other providers.  For this, I'm looking at selecting a 
> single, easily maintained implementation.  These are my choices so far:
> 
> 1. Control messages transfer over shared memory
>   Data transfers use shared memory bounce buffers

You may want to use mmapped (file-backed) bounce buffers instead of shared
memory objects. The latter persist until reboot if the app does not clean up
properly. Also, the space set aside for shared memory objects may be
artificially small (~10 MB/node on Titan, for example).
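
For concreteness, a minimal sketch of the file-backed variant (the path
handling, buffer size, and helper name are illustrative, not a proposed
provider layout):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define BOUNCE_SIZE (1 << 20)   /* illustrative size */

/* Hypothetical helper: the creator passes create=1; peers open the
 * same (illustrative) path with create=0. */
static void *map_bounce_buffer(const char *path, int create)
{
    int fd = open(path, O_RDWR | (create ? O_CREAT : 0), 0600);
    if (fd < 0)
        return NULL;

    if (create && ftruncate(fd, BOUNCE_SIZE) < 0) {
        close(fd);
        return NULL;
    }

    void *buf = mmap(NULL, BOUNCE_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    close(fd);                  /* the mapping survives the close */
    return buf == MAP_FAILED ? NULL : buf;
}

Once every peer has mapped the file, any of them can unlink(path); from that
point even an abnormal exit leaves nothing behind, since the kernel frees the
file when the last mapping goes away.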

> 2. Control messages transfer over shared memory
>   Data transfers occur over CMA
>   (small transfers go with control messages)
> 3. Use XPMEM in some TBD way

XPMEM requires bounce buffers as a fallback if the source or sink buffer is
not well aligned, since attachments are made at page granularity (see the
sketch below).
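
Roughly, the consumer-side attach path might look like the following, assuming
the usual xpmem.h interface (xpmem_get/xpmem_attach/xpmem_detach) and a segment
id the peer already published with xpmem_make() over the control channel. The
rounding shows the common way to attach a source address that is not
page-aligned; when the attach itself fails, the bounce-buffer fallback applies:

#include <string.h>
#include <sys/types.h>
#include <unistd.h>
#include <xpmem.h>

/* Hypothetical helper: pull len bytes from a peer's segment into dst.
 * src_offset is the source virtual address within that segment. */
static int xpmem_copy_in(xpmem_segid_t segid, void *dst,
                         off_t src_offset, size_t len)
{
    long page = sysconf(_SC_PAGESIZE);
    xpmem_apid_t apid = xpmem_get(segid, XPMEM_RDWR,
                                  XPMEM_PERMIT_MODE, (void *)0666);
    if (apid < 0)
        return -1;

    /* Attach at page granularity: round the offset down and keep the
     * intra-page skew for the copy. */
    struct xpmem_addr addr = {
        .apid   = apid,
        .offset = src_offset & ~((off_t)page - 1),
    };
    size_t skew = src_offset & (page - 1);

    void *att = xpmem_attach(addr, len + skew, NULL);
    if (att == (void *)-1) {
        xpmem_release(apid);
        return -1;              /* bounce-buffer fallback goes here */
    }

    memcpy(dst, (char *)att + skew, len);

    xpmem_detach(att);
    xpmem_release(apid);
    return 0;
}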

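The CMA data path of option 2, by contrast, is essentially one call; the peer
pid and remote address are assumed to arrive over the shared-memory control
channel:

#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/uio.h>

/* Hypothetical helper: the receiver pulls len bytes straight out of
 * the sender's address space. */
static ssize_t cma_read(pid_t peer, void *local, void *remote, size_t len)
{
    struct iovec liov = { .iov_base = local,  .iov_len = len };
    struct iovec riov = { .iov_base = remote, .iov_len = len };

    /* Returns bytes copied, or -1 with errno set (EPERM when the
     * caller lacks ptrace-level access to the peer). */
    return process_vm_readv(peer, &liov, 1, &riov, 1, 0);
}

No attach/detach bookkeeping, but every transfer pays a syscall, which is why
small transfers ride the control channel in option 2.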

> Some of these options are only available on Linux.  Does the portability of 
> this solution matter?  FreeBSD and Solaris would fall back to using the 
> network loopback device.
> 
> How much concern needs to be given to security?  Should the loopback utility 
> enforce RMA registrations?  Do we require processes to share a certain level 
> of access, such as ptrace capability?
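
One data point on the ptrace question: CMA is already gated by the kernel's
PTRACE_MODE_ATTACH check, and Yama (where enabled) restricts that further. A
provider could probe the policy at startup; the sysctl file below exists only
on Yama-enabled kernels:

#include <stdio.h>

/* Returns the Yama ptrace_scope (0 = classic unix permissions,
 * 1 = parent/descendant only, 2 = CAP_SYS_PTRACE only,
 * 3 = no attach at all), or -1 if Yama is not present. */
static int yama_ptrace_scope(void)
{
    FILE *f = fopen("/proc/sys/kernel/yama/ptrace_scope", "r");
    int scope = -1;

    if (f) {
        if (fscanf(f, "%d", &scope) != 1)
            scope = -1;
        fclose(f);
    }
    return scope;
}
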
> 
> I think we need input on this not just from the MPI community, but other 
> users as well.
> 
> - Sean
