> By "revive cycles", I mean make sure that they are referenced by an > independent referrer (one that won't go away as part of the __del__ > calling process).
I think this is a) unfortunate terminology (as the cycle is not dead, so
there is no need to revive it), and b) unnecessary, as calling __del__
will add a reference anyway (in the implicit self parameter). So the
object won't go away as long as __del__ runs. It might go away
immediately after __del__ returns, which may or may not be a problem.

> This is similar to how the tp_dealloc code increases the refcount
> (actually sets it to 1, because it was certainly 0 when entering the
> destructor) before calling the __del__ slot. Without reviving the
> object before calling its __del__ in the destructor, and without
> reviving the objects of a cycle before calling their __del__'s, the
> __del__ Python code may be exposed to "dead objects" (refcount == 0).

No, that can't happen, and, AFAICT, it is *not* the reason why
tp_dealloc resurrects the object. Instead, if it didn't resurrect the
object, tp_dealloc might become recursive, deallocating the object
twice.

> Consider the cycle:
> a.x = b
> b.x = a
>
> Let's suppose the a object has a __del__. Let's assume each object in
> the cycle has a refcount of 1 (and the cycle should die). Now let's
> say this is a's __del__ code:
>
> def __del__(self):
>     self.x = None
>
> Running it will set 'b's refcount to 0 and call its destructor, which
> will set 'a's refcount to 0 and also call its destructor. But its
> __del__ is currently running - so "self" must not have a refcount of
> 0.

And it won't, because (say) PyObject_CallMethod (to call __del__) calls
PyObject_GetAttrString, which returns a bound method that refers to
im_self for the entire life of the bound method.

> If you only incref on 'a' before calling __del__, then you are
> probably alright, as long as there is only one __del__.

Why would you think so? We explicitly call one __del__. Assume that
breaks the cycle, causing another object with __del__ to go to refcount
zero. Now, tp_dealloc is called, raises the refcount, calls __del__ of
the other object, and releases its storage.

>> Can you please elaborate? What would such __del__ ordering issues be?
>
> If you call b's __del__ first then a's __del__ will fail. If you call
> a's __del__ first, then all is well. Of course you can create true
> cyclic dependencies for which no order will work, and it's pretty
> clear there is no way to deduce the right order anyway. This is what
> I mean by "ordering issues".

I see. As we are in interpreter shutdown, any such exceptions should be
ignored (as exceptions in __del__ are, anyway). Programs involving such
cycles should be considered broken, and be rewritten to avoid them
(which I claim is always possible, and straightforward).

> Note that the __del__'s themselves may be breaking cycles and
> refcounts will go to 0 - unless you temporarily revive (incref) the
> entire cycle first.

See above - you shouldn't need to.

>> I still don't understand what "revive the cycle" means. You will
>> need to incref the object for which you call __del__, that's all.
>
> Unless there are multiple __del__'s in the cycle.

Not even then.

Regards,
Martin
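
For illustration, a minimal Python 3 sketch of the bound-method point
above: a bound method keeps a reference to its instance (im_self in
Python 2, __self__ in Python 3), so the object stays alive for at least
as long as the bound method does. The class A here is made up for the
example.

    import sys

    class A:
        def __del__(self):
            print("A.__del__ called")

    a = A()
    m = a.__del__            # bound method; it holds a reference to the instance
    print(m.__self__ is a)   # True (this attribute is im_self in Python 2)
    print(sys.getrefcount(a) >= 3)  # True: 'a', m.__self__, and the getrefcount argument
    del a                    # the instance survives: the bound method still refers to it
    m()                      # explicit call; prints "A.__del__ called"
    del m                    # last reference gone; tp_dealloc calls __del__ once more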
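
And a sketch of the a/b cycle discussed above, assuming a CPython
recent enough (3.4 or later) that the cycle collector finalizes objects
that are part of a cycle; the Node class is made up for the example.
Breaking the cycle inside __del__ cannot free 'self' mid-call, because
'self' is referenced for the duration of the call.

    import gc
    import sys

    class Node:
        def __init__(self, name):
            self.name = name
            self.x = None
        def __del__(self):
            # 'self' is referenced for the duration of this call, so dropping
            # the link to the peer cannot free 'self' while it is still running.
            print("__del__ of", self.name, "refcount >=", sys.getrefcount(self))
            self.x = None

    a = Node("a")
    b = Node("b")
    a.x = b
    b.x = a          # a <-> b reference cycle
    del a, b         # drop the external names; the cycle keeps both objects alive
    gc.collect()     # the collector finalizes both objects, then frees the cycle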