On Wed, Jul 16, 2014 at 07:59:14AM -0700, Ralph Castain wrote:
> I discussed this over IM with Nathan to try and get a better understanding of 
> the options. Basically, we have two approaches available to us:
> 
> 1. my solution resolves the segv problem and eliminates the leaks as long 
> as the user calls MPI_Init/Finalize after calling the MPI_T init/finalize 
> functions. This method will still leak memory if the user doesn't use MPI 
> after calling the MPI_T functions, but it does mean that all memory used by 
> MPI will be released upon MPI_Finalize. So if the user program continues 
> beyond MPI, it won't be carrying the MPI memory footprint with it. This 
> continues our current behavior.
> 
> 2. the destructor method, which releases the MPI memory footprint upon 
> final program termination instead of at MPI_Finalize. This also solves the 
> segv and leak problems, and it ensures that someone calling only the MPI_T 
> init/finalize functions will be valgrind-clean, but it means that a user 
> program that runs beyond MPI will carry the MPI memory footprint with it. 
> This is a change in our current behavior.

Correct, though the only thing we will carry around until termination is
the memory associated with opal/mca/if, opal/mca/event, opal_net,
opal_malloc, opal_show_help, opal_output, opal_dss, opal_datatype, and
opal_class. I'm not sure how much memory that is.
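
For concreteness, the case where option 1 still leaks is a tool that touches
only the MPI_T interface and never initializes MPI proper. A minimal sketch,
using only standard MPI-3 tool-interface calls (error handling elided):

    #include <stdio.h>
    #include <mpi.h>

    int main(void)
    {
        int provided, num_pvar;

        /* Only the MPI tool information interface is used; MPI_Init is
         * never called, so under option 1 the memory these calls cause
         * the library to allocate is never released. */
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

        MPI_T_pvar_get_num(&num_pvar);
        printf("performance variables exposed: %d\n", num_pvar);

        MPI_T_finalize();

        /* ... program continues (or exits) without ever calling
         * MPI_Init/MPI_Finalize ... */
        return 0;
    }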
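And a minimal sketch of the mechanism behind option 2, assuming a GCC/Clang
style destructor attribute; release_base_frameworks() is a hypothetical
stand-in for the actual OPAL teardown, not the real symbol name:

    /* Hypothetical stand-in for the cleanup that option 2 defers to
     * program termination. */
    static void release_base_frameworks(void)
    {
        /* free opal/mca/if, opal/mca/event, opal_output, etc. */
    }

    /* GCC and Clang run destructor-attributed functions at normal
     * program termination (after main returns or exit() is called), so
     * the footprint is held until then instead of being freed inside
     * MPI_Finalize. */
    __attribute__((destructor))
    static void mpi_memory_cleanup(void)
    {
        release_base_frameworks();
    }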

-Nathan
