On 13 Mar 2014, at 13:43, Jed Brown <[email protected]> wrote:

> "Garth N. Wells" <[email protected]> writes:
> 
>> On 13 Mar 2014, at 13:03, Benjamin Kehlet <[email protected]> wrote:
>> 
>>> Other possible solutions include
>>> * Requiring the user to call an init_dolfin() function before doing
>>> anything (from Python this could be done implicitly when doing "import
>>> dolfin")
>> 
>> I think this is the ‘best’ approach, but others probably won’t like
>> users having to initialise MPI manually in C++.
> 
> Agree.
> 
>>> * Adding calls to init_mpi() in the functions in MPI.h where this is 
>>> missing.
>>> 
>> 
>> This is too low-level. init_mpi() should be called from the highest possible 
>> level.
> 
> Initialization is more-or-less okay so long as the first object is made
> collectively (even if it is not a collective object; it just can't
> depend on a non-deterministic condition).  Finalization is tricky if
> tied to an object, because other objects might be created later, while
> you perhaps plan to call MPI_Finalize when the reference count drops to
> zero.  Note that MPI can only be initialized once and that the user
> might create non-dolfin objects that use MPI and outlive dolfin in the
> application.  That might also include profiling, such as -log_summary in
> PETSc.
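
For reference, a minimal sketch of the guarded initialisation being
discussed (hypothetical code, not DOLFIN's actual implementation; it
assumes the MPI-2 MPI_Initialized query):

  #include <mpi.h>

  // Initialise MPI at most once: MPI_Init may only ever be called once
  // per process, so query MPI_Initialized first.
  void init_mpi_once(int& argc, char**& argv)
  {
    int initialized = 0;
    MPI_Initialized(&initialized);
    if (!initialized)
      MPI_Init(&argc, &argv);
  }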

At the moment, DOLFIN finalises MPI when a singleton object is destroyed at
the end of the program, *if* DOLFIN initialised MPI. There was a bug in PyTrilinos
(which we reported and which has since been fixed) whereby PyTrilinos didn’t 
check if it was responsible for initialising and finalising MPI.
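
For illustration, a rough sketch of that pattern (hypothetical names, not
DOLFIN's actual class):

  #include <mpi.h>

  // Singleton guard: finalises MPI at program exit, but only if this
  // object was the one that initialised it and nothing else has
  // finalised MPI already.
  class MPIGuard
  {
  public:
    static MPIGuard& instance()
    {
      static MPIGuard guard;  // destroyed at program exit
      return guard;
    }

  private:
    MPIGuard() : _owns_mpi(false)
    {
      int initialized = 0;
      MPI_Initialized(&initialized);
      if (!initialized)
      {
        MPI_Init(nullptr, nullptr);
        _owns_mpi = true;  // we initialised, so we are responsible for finalising
      }
    }

    ~MPIGuard()
    {
      int finalized = 0;
      MPI_Finalized(&finalized);
      if (_owns_mpi && !finalized)
        MPI_Finalize();
    }

    bool _owns_mpi;
  };

Even so, as Jed points out, finalising in a destructor is fragile when
non-DOLFIN objects that use MPI outlive DOLFIN in the application.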

Garth

