On 15. 12. 21 23:57, Guido van Rossum wrote:
On Wed, Dec 15, 2021 at 6:04 AM Antoine Pitrou <anto...@python.org> wrote:

    On Wed, 15 Dec 2021 14:13:03 +0100
    Antoine Pitrou <anto...@python.org> wrote:

     > Did you try to take into account the envisioned project for adding a
     > "complete" GC and removing the GIL?

    Sorry, I was misremembering the details.  Sam Gross' proposal
    (posted here on 07/10/2021) doesn't switch to a "complete GC", but it
    changes reference counting to a more sophisticated scheme (which
    includes immortalization of objects):

    
https://docs.google.com/document/d/18CXhDb1ygxg-YXNBJNzfzZsDFosB5e6BfnXLlejd9l0/edit


A note about this: Sam's immortalization covers exactly the objects that Eric is planning to move into the interpreter state struct: "such as interned strings, small integers, statically allocated PyTypeObjects, and the True, False, and None objects". (Well, he says "such as", but I think Eric does too. :-)

Sam's approach is to use the lower bit of the ob_refcnt field to indicate immortal objects. This would not work given the stable ABI (which has macros that directly increment and decrement the ob_refcnt field). In fact, I think that Sam's work doesn't preserve the stable ABI at all.

However, setting a very high bit (the bit just below the sign bit) would probably work. Say we're using 32 bits. We use the value 0x_6000_0000 as the initial refcount for immortal objects. The stable ABI will sometimes increment this, sometimes decrement it. But as long as the imbalance is less than 0x_2000_0000, the refcount will remain in the inclusive range [0x_4000_0000, 0x_7FFF_FFFF] and we can test for immortality by testing a single bit:

if (o->ob_refcnt & 0x40000000)
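
To make the idea concrete, here is a rough sketch of what immortal-aware incref and decref could look like. This is purely illustrative: the names (my_incref, MY_IMMORTAL_BIT, etc.) and the exact bit layout are assumptions of mine, not Sam's design and not anything that exists in CPython.

    /* Illustrative sketch only; assumes a 32-bit ob_refcnt. */
    #include <Python.h>

    #define MY_IMMORTAL_BIT     ((Py_ssize_t)0x40000000)
    #define MY_IMMORTAL_REFCNT  ((Py_ssize_t)0x60000000)

    #define MY_IS_IMMORTAL(op)  (((PyObject *)(op))->ob_refcnt & MY_IMMORTAL_BIT)

    /* Interpreter-internal incref/decref would skip immortal objects
       entirely; only old stable-ABI extension code keeps adjusting
       ob_refcnt directly, and that is tolerated as long as the
       imbalance stays below 0x20000000. */
    static inline void my_incref(PyObject *op)
    {
        if (!MY_IS_IMMORTAL(op)) {
            op->ob_refcnt++;
        }
    }

    static inline void my_decref(PyObject *op)
    {
        if (!MY_IS_IMMORTAL(op)) {
            if (--op->ob_refcnt == 0) {
                _Py_Dealloc(op);
            }
        }
    }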

I don't know how long it would take to exceed that imbalance, but I suspect that a program doing nothing but incrementing one object's refcount would have to run for hours before driving it into that range. On a 64-bit machine the same approach would require years of running before a refcount could exceed the maximum allowable imbalance. (These estimates are from Mark Shannon.)
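
For a rough sense of scale, the 32-bit imbalance budget of 0x_2000_0000 is about 5.4e8 unbalanced increfs, and the analogous budget with a 64-bit ob_refcnt is astronomically larger. The little program below just prints the numbers; the one-billion-increfs-per-second rate in it is an assumption of mine, not a measurement.

    /* Back-of-the-envelope check of the imbalance budgets.  The 1e9
       increfs-per-second rate is an assumed (optimistic) upper bound. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t budget32 = UINT64_C(0x20000000);          /* 32-bit ob_refcnt */
        uint64_t budget64 = UINT64_C(0x2000000000000000);  /* 64-bit ob_refcnt */
        const double rate = 1e9, year = 3600.0 * 24 * 365;
        printf("32-bit budget: %llu unbalanced increfs\n",
               (unsigned long long)budget32);
        printf("64-bit budget: %llu unbalanced increfs (~%.0f years at 1e9/s)\n",
               (unsigned long long)budget64, budget64 / rate / year);
        return 0;
    }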

But does the sign bit need to stay intact, and do we actually need to rely on the immortal bit to always be set for immortal objects? If the refcount rolls over to zero, an immortal object's dealloc could bump it back and give itself another few minutes. Allowing such rollover would mean having to deal with negative refcounts, but that might be acceptable.
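
One way that could look (hypothetical; the function name and the reset value are mine, and it reuses the illustrative constants from the sketch above):

    /* Hypothetical tp_dealloc for an immortal object: when stable-ABI
       decrefs have driven ob_refcnt all the way down to zero, the
       "destructor" simply restores a large refcount instead of freeing
       anything, resurrecting the object. */
    static void immortal_dealloc(PyObject *op)
    {
        op->ob_refcnt = MY_IMMORTAL_REFCNT;   /* e.g. 0x60000000 again */
    }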

Another potential issue is that there may be some applications that take refcounts at face value (perhaps obtained using sys.getrefcount()). These would find that immortal objects have a very large refcount, which might surprise them. But technically a very large refcount is totally valid, and the kinds of objects that we plan to immortalize are all widely shared -- who cares if the refcount for None is 5000 or 1610612736? As long as the refcount of *mortal* objects is the same as it was before, this shouldn't be a problem.

A very small refcount would be even more surprising, but the same logic applies: who cares if the refcount for None is 5000 or -5000?

