> Idea 1: Allow GC to run automatically no more often than n CPU seconds,
> n being perhaps 5 or 10.

I think it's very easy to exhaust memory with such a policy, even
though much of that memory would still be reclaimable: cyclic garbage
keeps accumulating between the timed collections. Worse, in a program
producing a lot of garbage, performance will degrade significantly
once Python starts thrashing the swap space.
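To make that concrete, here is a minimal sketch of such a timed policy
(the function names, the 5-second value, and the use of
time.process_time() are mine, for illustration only). A tight
allocation loop can pile up a huge number of unreachable cycles before
the timer permits a single collection:

    import gc
    import time

    MIN_INTERVAL = 5.0            # the "n CPU seconds" from the proposal
    _last_collect = time.process_time()

    def maybe_collect():
        """Run a full collection only if enough CPU time has passed."""
        global _last_collect
        now = time.process_time()
        if now - _last_collect >= MIN_INTERVAL:
            gc.collect()
            _last_collect = now

    gc.disable()                  # rely solely on the timed policy

    def churn():
        # Each call creates a reference cycle that only the GC can reclaim.
        a, b = [], []
        a.append(b)
        b.append(a)

    for _ in range(1000000):      # accumulates ~1M unreachable cycles,
        churn()                   # typically long before 5 CPU seconds elapse
        maybe_collect()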

> Idea 2: Allow GC to run no more often than f(n) CPU seconds, where n is
> the time taken by the last GC round.

How would that take the incremental nature of the GC into account?
(i.e. what is "the time taken by the last GC round" when different
runs collect different generations?)

Furthermore, the run time of a single GC pass may well be below the
resolution of the CPU-time clock.
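The resolution concern is easy to demonstrate (a sketch, assuming a
Python with time.get_clock_info() and time.process_time() available):
a single young-generation pass is usually reported as 0.0 CPU seconds,
so an f(n) based on the measured time would mostly be computed from
n == 0:

    import gc
    import time

    info = time.get_clock_info("process_time")
    print("CPU clock resolution:", info.resolution)

    gc.collect()                          # start from a clean state
    start = time.process_time()
    gc.collect(0)                         # collect the youngest generation only
    elapsed = time.process_time() - start
    print("gen-0 pass took:", elapsed)    # frequently prints 0.0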

> These limits could be reset or scaled by the GC collecting more than n%
> of the generation 0 objects or maybe the number of PyMalloc arenas
> increasing by a certain amount?

I don't think any strategies based on timing will be successful.
Instead, one should count and analyze objects (although I'm unsure
exactly how that would work).
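For what it's worth, the collector already triggers on counts rather
than time: gc.get_count() exposes the per-generation counters, and
gc.set_threshold() tunes when they cause a collection. A rough
illustration (the threshold value below is arbitrary):

    import gc

    # count0 is allocations minus deallocations since the last collection;
    # count1 and count2 count collections of the younger generations.
    print(gc.get_count())         # e.g. (354, 4, 2)
    print(gc.get_threshold())     # default: (700, 10, 10)

    # Raising threshold0 trades memory for fewer gen-0 collections;
    # the value here is only meant to show the knob.
    gc.set_threshold(10000, 10, 10)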

Regards,
Martin