On 13 May 2011 16:40, Jed Brown <jed at 59a2.org> wrote:

> On Fri, May 13, 2011 at 15:25, Lisandro Dalcin <dalcinl at gmail.com> wrote:
>>
>> Python could be even harder than
>> C++, because of the GC that can start running at any time.
>
> Is it _ever_ acceptable to let the garbage collector manage destruction of
> PETSc objects?
>
I think you are actually talking about "normal" collection, that is,
when the Python refcount drops to zero and the object is deallocated.
And no, even that is not acceptable in parallel; things can fail. This
issue is going to happen with ANY dynamic language that has automatic
memory management. What should I do? Force users to call obj.destroy()
for every object? Or perhaps issue a warning if a PETSc object's
refcount drops to zero in Python land with no explicit call to
destroy()? (There is a rough sketch of that idea at the end of this
message.)

For reference, mpi4py does the opposite of petsc4py. Every MPI instance
has to be explicitly obj.Free()'d, or you leak the C handle. I did this
because MPI handles are not refcounted, or at least you cannot
implement refcounting for all of them. Still, I'm not happy with this:
I'm mostly confident that objects like Group, Datatype, Errorhandler,
Op, etc. that are "local" could be automatically MPI_Xxx_free()'d,
while Comm, Win, and File do require the user to explicitly Free()
them. Does this make sense?

In petsc4py, as PETSc objects are refcounted, I chose the pythonic way,
though I know it breaks. I'm really not sure what to do. Any comments
about this would be really welcome.

> It seems to me that you can't ever take this chance with
> objects that may be parallel. This includes objects that may be destroyed
> explicitly when a 'with' block exits or a 'finally' block is reached due to
> an exception that is not collective. Under what circumstances does the GC
> running affect PETSc objects in any way?

-- 
Lisandro Dalcin
---------------
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora RN 168 Km 472, Paraje El Pozo
3000 Santa Fe, Argentina
Tel: +54-342-4511594 (ext 1011)
Tel/Fax: +54-342-4511169
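
P.S. To make the "warn on implicit destruction" idea concrete, here is a
rough sketch in plain Python (not the actual petsc4py Cython code; the
class and attribute names are made up) of what the deallocation hook
could look like:

    import warnings

    class Object(object):
        """Hypothetical wrapper around a PETSc object handle."""

        def __init__(self):
            self._handle = None   # underlying C handle, if any

        def destroy(self):
            # collective; meant to be called explicitly by the user
            if self._handle is not None:
                # ... call the C-level XxxDestroy() here ...
                self._handle = None

        def __del__(self):
            # reached when the Python refcount drops to zero
            if self._handle is not None:
                warnings.warn("PETSc object deallocated without an "
                              "explicit call to destroy()",
                              RuntimeWarning)
                # auto-destroying here is the "pythonic" behavior,
                # but __del__ is not collective, so it is unsafe in
                # parallel

The open question is whether such a warning should be the default, or
whether destroy() should simply become mandatory, as Free() is in
mpi4py.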
