On Wed, Apr 8, 2015 at 1:57 PM Armin Rigo <ar...@tunes.org> wrote:

> On 8 April 2015 at 11:43, Yuriy Taraday <yorik....@gmail.com> wrote:
> > It's already broken if it's used in multithreaded app. For
> > single-threaded apps we can make an exception and keep things running
> > as they are now, i.e. keep it single-threaded. This will also prevent
> > unnecessary multithreading initialization.
>
> You'd end up with cases where you can have a deadlock in
> single-threaded programs that magically goes away if you just add
> anywhere the line "thread.start_new_thread(lambda:None, ())"...  But
> maybe creating lock objects should be enough to change where
> destructors run?  You can't easily have deadlocks without user lock
> objects.
>

Having locks in a single-threaded app is rather strange, but they can come
from library code, so the programmer may not be aware of them and still
rely on that "serialized __del__" behavior.

> Or, if you care about deadlocks, maybe your Python program should
> explicitly start its own finalizer thread.  Your
> potentially-deadlocking __del__ methods should use a decorator that,
> when called, simply puts the actual method into a Queue.Queue which is
> consumed by this finalizer thread.  It would be the same, but not
> transparent.
>

That would mean creating new references to self in __del__, which is bad (at
least the docs say so). And once those references are dropped again, __del__
will be called once more.
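
If I understand the decorator idea right, it would look roughly like this
(only a sketch; the queue, the thread and all the names here are mine):

import Queue      # "queue" on Python 3
import functools
import threading

# Hypothetical finalizer machinery; all names below are made up.
_pending = Queue.Queue()

def _finalizer_loop():
    while True:
        callback = _pending.get()
        try:
            callback()
        except Exception:
            pass  # a real version would at least log this

_finalizer_thread = threading.Thread(target=_finalizer_loop)
_finalizer_thread.daemon = True
_finalizer_thread.start()

def deferred_del(method):
    """Run the real __del__ body on the finalizer thread."""
    @functools.wraps(method)
    def wrapper(self):
        # This bound callable is a brand-new reference to self: the
        # object is resurrected until the finalizer thread processes it.
        _pending.put(functools.partial(method, self))
    return wrapper

class Connection(object):
    def close(self):
        pass  # stand-in for cleanup that may need some lock

    @deferred_del
    def __del__(self):
        self.close()

The functools.partial sitting in the queue is exactly that new reference to
self: the object stays alive until the finalizer thread gets to it, and once
that reference is dropped the object dies a second time, which is where the
repeated __del__ worry comes from.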

> I think that the idea of doing it transparently may be interesting,
> but it needs some more careful design before we can think about
> changing PyPy like that.  (Not that there is *any* idea about the GC
> that doesn't require careful design :-)
>

Oh, sure. I just think that we should at least consider this approach.