On Aug 29, 2013, at 7:33 PM, Christopher Samuel <sam...@unimelb.edu.au> wrote:

> OK, so I'll try testing again with a larger limit to see if that will
> ameliorate this issue.  I'm also wondering where this is happening in
> OMPI, I've a sneaking suspicion this is at MPI_INIT().


FWIW, the stack traces you sent are not during MPI_INIT.

What happens with OMPI's memory manager is that it inserts itself to be *the* 
memory allocator for the entire process before main() even starts.  We have to 
do this because of the horribleness that is OpenFabrics/verbs and how it just 
doesn't match the MPI programming model at all.  :-(  (I think I wrote some 
blog entries about this a while ago...  Ah, here are a few:

http://blogs.cisco.com/performance/rdma-what-does-it-mean-to-mpi-applications/
http://blogs.cisco.com/performance/registered-memory-rma-rdma-and-mpi-implementations/

Or, more generally: http://blogs.cisco.com/tag/rdma/)
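
To illustrate the "before main()" part, here's a minimal sketch of how library 
code can run before main() even starts -- it uses a GCC/Clang constructor 
attribute purely for illustration; this is *not* OMPI's actual implementation, 
just the general idea of a pre-main hook:

    /* Minimal sketch: code that runs before main(), the same stage at
     * which OMPI's memory manager sets itself up.  The constructor
     * attribute here is one common mechanism, shown only as an example. */
    #include <stdio.h>

    __attribute__((constructor))
    static void premain_hook(void)
    {
        fprintf(stderr, "running before main()\n");
    }

    int main(void)
    {
        fprintf(stderr, "now in main()\n");
        return 0;
    }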

Therefore, (in C) if you call malloc() before MPI_Init(), it'll be calling 
OMPI's ptmalloc.  The stack traces you sent imply that the failure happens when 
your app is calling a Fortran allocate -- which is after MPI_Init().
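
In other words, in a hypothetical C program like the one below, *both* 
allocations go through OMPI's interposed allocator when the memory manager is 
built in -- MPI_Init() isn't what installs it:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        /* With the memory manager built in, this malloc() already goes
         * through OMPI's ptmalloc, even though MPI_Init() hasn't run yet. */
        void *before = malloc(1024);

        MPI_Init(&argc, &argv);

        /* ...and so does this one; MPI_Init() didn't change the allocator. */
        void *after = malloc(1024);

        free(after);
        free(before);
        MPI_Finalize();
        return 0;
    }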

FWIW, you can build OMPI with --without-memory-manager, or you can setenv 
OMPI_MCA_memory_linux_disable to 1 (note: this is NOT a regular MCA parameter 
-- it *must* be set in the environment before the MPI app starts).  If this env 
variable is set, OMPI will *not* interpose its own memory manager in the 
pre-main hook.  That should be a quick/easy way to try with and without the 
memory manager and see what happens.
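
For example (bash syntax shown; the launcher arguments and application name are 
just placeholders -- adjust for your shell and job setup):

    export OMPI_MCA_memory_linux_disable=1
    mpirun -np 4 ./your_mpi_app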

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/
