Thanks for the help, guys.

In my case, the memory will be allocated and pinned by my other device
driver.  Is it safe to simply use that memory, and can I count on my
pages not being unpinned as a result?

As far as registration goes, I'm sure Open MPI will do a better job of
that than I could, so I won't even attempt to futz with it.

Thanks,
 Brian

On 11/2/06, Brian W Barrett <bbarr...@lanl.gov> wrote:

Locking a page with mlock() is not all that is required for RDMA
using InfiniBand (or Myrinet, for that matter).  You have to call
that device's registration function first.  In Open MPI, that can be
done implicitly with the mpi_leave_pinned option, which will pin
memory as needed and then leave it pinned for the life of the
buffer.  Or it can be done ahead of time by calling MPI_ALLOC_MEM.
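
For example, pre-allocating a communication buffer with MPI_ALLOC_MEM
from C looks roughly like this (a minimal sketch; the 1 MB size and how
you use the buffer afterwards are just placeholders):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        void *buf;

        MPI_Init(&argc, &argv);

        /* Let the MPI library allocate the buffer so it has the chance
           to register (pin) it with the interconnect up front. */
        MPI_Alloc_mem(1 << 20, MPI_INFO_NULL, &buf);

        /* ... use buf as a send/receive buffer ... */

        MPI_Free_mem(buf);
        MPI_Finalize();
        return 0;
    }

The leave-pinned behavior is an MCA parameter, so you would turn it on
with something like "mpirun --mca mpi_leave_pinned 1 ..." on the
command line.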

Because the amount of memory a NIC can have pinned at any time may
not directly match the total amount of memory that can be mlock()ed
at any given time, it's also not a safe assumption that a buffer
allocated with MPI_ALLOC_MEM or used with an RDMA transfer from MPI
is going to be mlock()ed as a side effect of NIC registration.  Open
MPI internally might unregister that memory with the NIC in order to
register a different memory segment for another memory transfer.
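
On Linux, the per-process cap on mlock()ed memory is RLIMIT_MEMLOCK; if
you want to see where that limit sits, a quick check looks something
like this (sketch only; RLIM_INFINITY just prints as a huge number):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* RLIMIT_MEMLOCK caps how much this process may mlock(); it
           says nothing about how much the NIC can have registered. */
        if (getrlimit(RLIMIT_MEMLOCK, &rl) == 0) {
            printf("mlock limit: soft %llu, hard %llu (bytes)\n",
                   (unsigned long long)rl.rlim_cur,
                   (unsigned long long)rl.rlim_max);
        }
        return 0;
    }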

Brian


On Nov 2, 2006, at 12:22 PM, Brian Budge wrote:

> Thanks for the pointer, it was a very interesting read.
>
> It seems that by default Open MPI uses the nifty pipelining trick
> of pinning pages while the transfer is happening.  Also, the pinning
> can be (somewhat) permanent, and the state is cached so that the
> next usage requires no registration.  I guess it is possible to use
> pre-pinned memory, but do I need to do anything special to do so?  I
> will already have some buffers pinned to allow DMAs to devices
> across PCI-Express, so it makes sense to use one pinned buffer so
> that I can avoid memcpys.
>
> Are there any HOWTO tutorials or anything?  I've searched around,
> but it's possible I just used the wrong search terms.
>
> Thanks,
>   Brian
>
>
>
> On 11/2/06, Jeff Squyres <jsquy...@cisco.com> wrote:
> This paper explains it pretty well:
>
>      http://www.open-mpi.org/papers/euro-pvmmpi-2006-hpc-protocols/
>
>
>
> On Nov 2, 2006, at 1:37 PM, Brian Budge wrote:
>
> > Hi all -
> >
> > I'm wondering how DMA is handled in Open MPI when using the
> > InfiniBand protocol.  In particular, will I get a speed gain if my
> > read/write buffers are already pinned via mlock?
> >
> > Thanks,
> >   Brian
>
>
> --
> Jeff Squyres
> Server Virtualization Business Unit
> Cisco Systems
>
