On 12/3/2010 5:55 PM, Dima Tisnek wrote:
How hard or reasonable would it be to free memory pages at the OS level?

[pcmiiw] Garbage collection within a generation involves moving live
objects to compact the generation's storage. This separates the memory
region into two parts, "live" and "cleared"; the pointer to the
beginning of the "cleared" part is where the next allocation will
happen.
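
(To make that concrete, here is a minimal toy sketch of sliding
compaction; the list-as-heap and None-as-garbage conventions are
invented purely for illustration and have nothing to do with any real
allocator. Note that it preserves the order of live objects:)

    # Toy sliding compaction over a simulated heap; None marks garbage.
    heap = ['A', None, 'B', None, None, 'C']

    write = 0
    for read in range(len(heap)):
        if heap[read] is not None:        # live object: slide it down
            heap[write] = heap[read]
            write += 1
    for i in range(write, len(heap)):     # top of region is now "cleared"
        heap[i] = None

    alloc_ptr = write                     # next allocation happens here
    print(heap, alloc_ptr)                # ['A', 'B', 'C', None, None, None] 3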

When this is done, does Python's gc move the objects preserving order,
or does it try to fill a garbage slot with some live object,
disregarding order? The former case is more applicable, but the latter
is a target for the idea below too.

If we were to look at memory regions from the OS point of view, they
are allocated as pages (or possibly as hugetlb pages). So if we are to
compact something like this [LL__][_L__][____][L_LL][LFFF], where []
is a page, L is a live object, _ is garbage and F is free memory,
would it not make more sense to tell the OS that [____] is not needed
anymore, and not move the consecutive [L_LL][LFFF] at all, or at least
not move those objects as far down the memory region?
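
(For concreteness: on Linux the mechanism for "telling the OS" would
be madvise(2) with MADV_DONTNEED, which today's Python exposes via the
mmap module (3.8+). A minimal sketch, assuming Linux; the five-page
layout above is simulated with an anonymous mapping and the page
index is of course invented:)

    import mmap

    PAGE = mmap.PAGESIZE
    region = mmap.mmap(-1, 5 * PAGE)    # five anonymous pages

    # Page 2 is the all-garbage [____] page. Instead of compacting
    # live objects down over it, tell the kernel its physical page can
    # be reclaimed; the virtual range stays mapped and reads back as
    # zeros if it is ever touched again.
    region.madvise(mmap.MADV_DONTNEED, 2 * PAGE, PAGE)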

This would of course have a certain overhead of tracking which pages
are given back to the OS and mapping them back when needed, but at the
same time it would be beneficial because fewer objects are moved, and
it could also improve CPU cache performance because objects won't be
moved as far.

p.s. If someone has an authoritative link to the modern Python gc
design, please let me know.

gc is implementation specific. CPython uses ref counting + a cycle gc.
A constraint on all implementations is that objects have a fixed,
unique id during their lifetime. CPython uses the address as the id,
so it cannot move objects. Other implementations do this differently;
a compacting gc requires an id-to-current-address table or something
similar.
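
(A quick way to see the address-as-id detail; this relies on a CPython
implementation detail and is not portable to other implementations:)

    import ctypes

    x = ["some object"]
    # In CPython (only), id() is the object's memory address, so the
    # address can be dereferenced straight back to the object. This is
    # exactly why CPython can never move a live object.
    assert ctypes.cast(id(x), ctypes.py_object).value is x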

--
Terry Jan Reedy
