Sounds good. I think it's also a good idea to have a close() function in
all file classes, even those that don't have a collective destructor, for
a consistent API.

Based on this, we can implement __exit__ to call close() in the file
classes and get with-statement support.
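A minimal sketch of what that could look like (the class name, the
filename argument, and the internal flag are illustrative assumptions
here, not existing dolfin API):

```python
class File:
    """Sketch of a file class with an explicit close() and
    with-statement support. Names are illustrative only."""

    def __init__(self, filename):
        self.filename = filename
        self._closed = False

    def close(self):
        # The collective cleanup would happen here. Calling close()
        # twice is deliberately a no-op, so __exit__ and a manual
        # close() call can safely coexist.
        if not self._closed:
            self._closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Deterministic cleanup, run even if the body raised.
        self.close()
        return False  # do not suppress exceptions
```

Users would then write `with File("u.pvd") as f: f << u` instead of
relying on when garbage collection happens to run.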

Martin

On 7 October 2014 17:00, Garth N. Wells <[email protected]> wrote:

> How about this as a compromise solution: we add a close(), clear() or
> destroy() function to each class that has a collective destructor. If a
> user runs in parallel and has not manually cleaned up by calling
> close/clear/destroy, we print a warning message from the destructor.
>
> We can add options to turn the message off in parallel, or to turn it on
> in serial (for users who want to check that their serial code is ready to
> run in parallel), or to turn it on only from Python.
>
> Garth
>
>
>
> On Tue, 7 Oct, 2014 at 3:20 PM, Martin Sandve Alnæs <[email protected]>
> wrote:
>
>> On 7 October 2014 15:58, Jed Brown <[email protected]> wrote:
>> >
>> > "Garth N. Wells" <[email protected]> writes:
>> > > I thought the issue we're discussing is that the above Python pattern,
>> > > which is the natural way to do things, can break in parallel because of
>> > > non-deterministic garbage collection and that we need an alternative?
>> >
>> > The with statement (PEP 343) provides __exit__, which is deterministic.
>> > That pattern is idiomatic and correct.  Relying on garbage collection
>> > when a variable falls out of scope is the problem.
>>
>> Exactly. This includes the three patterns that appear everywhere in the
>> dolfin tests and demos:
>>
>> File(...) << u
>>
>> def f():
>>     file = File(...)
>>     file << u
>>
>> def g():
>>     file = File(...)
>>     file << u
>>     del file
>>
>> (And as Jed said, I was of course wrong about RAII providing exception
>> safety in the MPI context.)
>>
>> Martin
>>
>
>
_______________________________________________
fenics mailing list
[email protected]
http://fenicsproject.org/mailman/listinfo/fenics
