Martin v. Löwis <mar...@v.loewis.de> added the comment:

> But what counts is where tuples can be created in massive numbers or
> sizes: the eval loop, the tuple type's tp_new, and a couple of other
> places. We don't need to optimize every single tuple created by the
> interpreter or extension modules (and even then, one can simply call
> _PyTuple_Optimize()).

Still, I think this patch introduces too much code duplication. There
should be only a single function that does the optional untracking; it
can then be called from multiple places.

> Also, this approach is more expensive

I'm skeptical. It could well be *less* expensive, namely if many tuples
get deleted before gc even happens. In that case you currently check
whether you can untrack them, which is pointless work if the tuple gets
deallocated quickly anyway.
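For context, the untracking behavior under discussion is observable from
pure Python via the gc module (a minimal sketch; it assumes an
interpreter that includes this optimization, where a collection untracks
tuples whose contents cannot participate in reference cycles):

```python
import gc

# A tuple of plain ints: nothing in it can be part of a reference cycle.
atomic = tuple(range(3))
gc.collect()                      # a collection pass can untrack it
print(gc.is_tracked(atomic))      # False once untracked

# A tuple holding a mutable container must stay tracked, since the
# list could later be made part of a cycle.
mutable = ([],)
gc.collect()
print(gc.is_tracked(mutable))     # True
```

The trade-off debated above is *when* this check runs: at collection
time (as sketched here) versus eagerly at each tuple's creation site.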

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue4688>
_______________________________________