On Tue, Dec 27, 2005, Andrea Arcangeli wrote:
>
> My suggestion to fix this problem in autopilot mode (without requiring
> explicit gc.collect()) is to invoke a gc.collect() (or anyway to go
> deep down freeing everything possible) at least once every time the
> amount of anonymous memory allocated by the interpreter doubles. The
> tunable should be a float >= 1. When the tunable is 1 the feature
> is disabled (so it works like current Python today). Default should
> be 2 (which means to invoke gc.collect() after a 100% increase of
> the anonymous memory allocated by the interpreter). We could also
> have yet another threshold that sets a minimum of RAM after which
> this size-based heuristic kicks in, but it's not very important and
> it may not be worth it (when little memory is involved gc.collect()
> should be fast anyway).
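For readers following along, the quoted proposal can be sketched in pure Python. This is only illustrative: the class name and the `measure` callable are hypothetical stand-ins for whatever anonymous-memory accounting the real patch would expose, and the actual change would live in the interpreter's C allocator, not at this level.

```python
import gc

class MemoryDoublingGC:
    """Sketch of the proposed heuristic: run a full gc.collect() whenever
    the interpreter's memory grows by `factor` since the last collection.
    `measure` is a hypothetical callable returning current heap bytes."""

    def __init__(self, factor=2.0, measure=None):
        assert factor >= 1.0, "tunable must be a float >= 1"
        self.factor = factor            # 1.0 disables the feature entirely
        self.measure = measure
        self.baseline = self.measure()  # memory at the last collection

    def maybe_collect(self):
        """Collect iff memory has grown by `factor` since the baseline."""
        if self.factor == 1.0:          # disabled: behave like stock Python
            return False
        if self.measure() >= self.baseline * self.factor:
            gc.collect()                # deepest-generation collection
            self.baseline = self.measure()
            return True
        return False
```

With the default `factor=2.0`, a collection fires only after a 100% increase over the baseline, which is then reset to the post-collection size, matching the doubling behavior described above.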
If you feel comfortable with C code, the best way to get this to happen
would be to make the change yourself, then test to find out what effects
this has on Python (in terms of speed and memory usage, and whether it
breaks any of the regression tests). Once you've satisfied yourself that
it works, submit a patch, and post here again with the SF number.

Note that since your tunable parameter is presumably accessible from
Python code, you'll also need to submit doc patches and tests to verify
that it's working correctly.
--
Aahz ([EMAIL PROTECTED])           <*>         http://www.pythoncraft.com/

"Don't listen to schmucks on USENET when making legal decisions.  Hire
yourself a competent schmuck."  --USENET schmuck (aka Robert Kern)
_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com