Re: [OMPI users] libfabric verb provider for iWARP RNIC

2016-04-11 Thread Jeff Squyres (jsquyres)
On Apr 11, 2016, at 2:38 PM, dpchoudh . wrote:
> If the vendor of a new type of fabric wants to include support for Open MPI,
> then, as long as they can implement a libfabric provider, they can use the
> OFI MTL without adding any code to the Open MPI source tree itself.
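For context, a minimal sketch of the provider discovery this relies on: the OFI MTL finds fabrics through libfabric's fi_getinfo(3), so a new vendor's provider is visible as soon as the library can load it. The FI_EP_RDM/FI_TAGGED hints below are an assumption about what a tagged-messaging MPI consumer would request, not Open MPI's exact code; compile with -lfabric.

    #include <rdma/fabric.h>
    #include <rdma/fi_errno.h>
    #include <stdio.h>

    int main(void) {
        struct fi_info *hints = fi_allocinfo();
        struct fi_info *info, *cur;
        hints->ep_attr->type = FI_EP_RDM;  /* reliable datagram endpoints */
        hints->caps = FI_TAGGED;           /* tagged messaging for MPI matching */
        int ret = fi_getinfo(FI_VERSION(1, 1), NULL, NULL, 0, hints, &info);
        if (ret) {
            fprintf(stderr, "fi_getinfo: %s\n", fi_strerror(-ret));
            return 1;
        }
        /* Each entry is a provider/fabric a consumer like the OFI MTL could use. */
        for (cur = info; cur; cur = cur->next)
            printf("provider: %s, fabric: %s\n",
                   cur->fabric_attr->prov_name, cur->fabric_attr->name);
        fi_freeinfo(info);
        fi_freeinfo(hints);
        return 0;
    }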

Re: [OMPI users] libfabric verb provider for iWARP RNIC

2016-04-11 Thread dpchoudh .
Hi Howard and all, Thank you very much for the information. I have a follow-up question: If the vendor of a new type of fabric wants to include support for Open MPI, then, as long as they can implement a libfabric provider, they can use the OFI MTL without adding any code to the Open MPI source …

Re: [OMPI users] resolution of MPI_Wtime

2016-04-11 Thread Dave Love
George Bosilca writes:
> MPI_Wtick is not about the precision but about the resolution of the
> underlying timer (aka. the best you can hope to get).
What's the distinction here? (clock_getres(2) says "resolution (precision)".) My point (like JH's?) is that it doesn't …
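One way to put the two numbers side by side is the sketch below; it assumes nothing beyond clock_getres(2) and MPI_Wtick, and is only meant to show how the "resolution (precision)" the man page reports compares with what MPI advertises.

    #include <mpi.h>
    #include <stdio.h>
    #include <time.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        struct timespec res;
        /* clock_getres(2) documents this as the clock's "resolution (precision)" */
        clock_getres(CLOCK_MONOTONIC, &res);
        printf("clock_getres(CLOCK_MONOTONIC): %.3e s, MPI_Wtick: %.3e s\n",
               res.tv_sec + res.tv_nsec / 1e9, MPI_Wtick());
        MPI_Finalize();
        return 0;
    }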

Re: [OMPI users] resolution of MPI_Wtime

2016-04-11 Thread Dave Love
Jeff Hammond writes:
> George:
>
> Indeed, MPI_Wtick is not always a good measure of the precision of
> MPI_Wtime. The way I would measure resolution is to call MPI_Wtime a few
> million times.
Is there typically a problem with just looping until the result changes a …
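Looping until the result changes is easy to express; a minimal sketch, assuming only MPI_Wtime and MPI_Wtick themselves:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        double t0 = MPI_Wtime(), t1;
        /* spin until the clock visibly ticks; the gap is the observed resolution */
        do { t1 = MPI_Wtime(); } while (t1 == t0);
        printf("MPI_Wtick reports %.3e s; observed tick ~%.3e s\n",
               MPI_Wtick(), t1 - t0);
        MPI_Finalize();
        return 0;
    }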

Re: [OMPI users] What about MPI-3 shared memory features?

2016-04-11 Thread Nathan Hjelm
For two-sided, Open MPI uses CMA, XPMEM, or KNEM for single-copy shared memory if available. Otherwise it does two copies. -Nathan
On Mon, Apr 11, 2016 at 09:02:38AM -0700, Jeff Hammond wrote:
> MPI-3 shared memory gives you direct access, meaning potentially zero
> copies if you e.g. just …
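For reference, a hypothetical stand-alone sketch of the mechanism CMA names: process_vm_readv(2) lets one process copy a buffer straight out of another's address space, which is where the single copy comes from. The fork/pipe scaffolding here is ours, not Open MPI's, and running it may require relaxed ptrace permissions (e.g. kernel.yama.ptrace_scope=0).

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/uio.h>
    #include <sys/wait.h>

    static char msg[64] = "parent's original text";

    int main(void) {
        int fd[2];
        char ready;
        pipe(fd);
        pid_t pid = fork();
        if (pid == 0) {                       /* child: overwrite its copy of msg */
            strcpy(msg, "written in the child's address space");
            write(fd[1], "!", 1);             /* tell the parent it's ready */
            sleep(2);                         /* stay alive while the parent reads */
            return 0;
        }
        read(fd[0], &ready, 1);               /* wait until the child has written */
        char buf[64] = {0};
        struct iovec local  = { buf, sizeof buf };
        struct iovec remote = { msg, sizeof msg };  /* same address in the fork copy */
        /* one copy, child's memory -> parent's buffer, no shared staging buffer */
        ssize_t n = process_vm_readv(pid, &local, 1, &remote, 1, 0);
        printf("read %zd bytes: \"%s\"\n", n, buf);
        waitpid(pid, NULL, 0);
        return 0;
    }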

Re: [OMPI users] What about MPI-3 shared memory features?

2016-04-11 Thread Jeff Hammond
MPI-3 shared memory gives you direct access, meaning potentially zero copies if you e.g. just read shared state. Optimizing intranode MPI communication just reduces copies. Since MPI communication semantics require one copy, you can't do better in RMA. In Send-Recv, I guess you can do only one copy with a CMA …
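A minimal sketch of that zero-copy direct access, using the standard MPI-3 calls (MPI_Win_allocate_shared plus MPI_Win_shared_query); run with at least two ranks on one node:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        MPI_Comm node;
        /* group the ranks that can actually share memory */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node);
        int rank, size;
        MPI_Comm_rank(node, &rank);
        MPI_Comm_size(node, &size);

        int *mine;
        MPI_Win win;
        MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                                node, &mine, &win);
        *mine = 100 * rank;                   /* publish some local state */
        MPI_Win_fence(0, win);                /* make the stores visible */

        MPI_Aint sz;
        int disp, *theirs;
        /* get a direct pointer into the neighbor's segment ... */
        MPI_Win_shared_query(win, (rank + 1) % size, &sz, &disp, &theirs);
        /* ... and read it with a plain load: zero MPI copies */
        printf("rank %d reads neighbor's value %d directly\n", rank, *theirs);

        MPI_Win_fence(0, win);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }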

[OMPI users] What about MPI-3 shared memory features?

2016-04-11 Thread Tom Rosmond
Hello, I have been looking into the MPI-3 extensions that added ways to do direct memory copying on multi-core 'nodes' that share memory. Architectures constructed from these nodes are universal now, so improved ways to exploit them are certainly needed. However, it is my understanding that …