On Sat, Aug 12, 2017 at 6:27 PM, Yury Selivanov <yselivanov...@gmail.com> wrote:
> Yes, I considered this idea myself, but ultimately rejected it because:
> 1. Current solution makes it easy to introspect things. Get the
> current EC and print it out. Although the context item idea could be
> extended to `sys.create_context_item('description')` to allow that.
My first draft actually had the description argument :-). But then I
deleted it on the grounds that there's also no way to introspect a
list of all threading.local objects, and no-one seems to be bothered
by that, so why should we bother here. Obviously it'd be trivial to
add though, yeah; I don't really care either way.
> 2. What if we want to pickle the EC? If all items in it are
> pickleable, it's possible to dump the EC, send it over the network,
> and re-use in some other process. It's not something I want to
> consider in the PEP right now, but it's something that the current
> design theoretically allows. AFAIU, `ci = sys.create_context_item()`
> context item wouldn't be possible to pickle/unpickle correctly, no?
That's true. In this API, supporting pickling would require some kind
of opt-in on the part of EC users.
But... pickling would actually need to be opt-in anyway. Remember, the
set of all EC items is a piece of global shared state; we expect new
entries to appear when random 3rd party libraries are imported. So we
have no idea what is in there or what it's being used for. Blindly
pickling the whole context will lead to bugs (when code unexpectedly
ends up with context that wasn't designed to go across processes) and
crashes (there's no guarantee that all the objects are even
pickleable).
If we do decide we want to support this in the future then we could
add a generic opt-in mechanism something like:
MY_CI = sys.create_context_item(__name__, "MY_CI", pickleable=True)
But I'm not sure that it even makes sense to have a global flag
enabling pickle. Probably it's better to have separate flags to opt-in
to different libraries that might want to pickle in different
situations for different reasons: pickleable-by-dask,
pickleable-by-curio.run_in_process, ... And that's doable without any
special interpreter support. E.g. you could have
curio.Local(pickle=True) coordinate with curio.run_in_process.
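To make the library-level opt-in idea concrete, here's a toy sketch of
how something like that coordination could work without any interpreter
support. All the names here (Local, snapshot_pickleable) are
illustrative, not from the PEP or from curio:

```python
import pickle

# Registry of locals whose owners explicitly opted in to pickling.
_pickleable_locals = {}

class Local:
    def __init__(self, name, pickle=False):
        self.name = name
        self.value = None
        if pickle:
            # Only opted-in locals are visible to snapshot_pickleable().
            _pickleable_locals[name] = self

    def set(self, value):
        self.value = value

def snapshot_pickleable():
    """Dump only the values whose owners opted in to pickling."""
    return pickle.dumps(
        {name: loc.name and loc.value for name, loc in _pickleable_locals.items()}
    )

a = Local("a", pickle=True)
b = Local("b")   # not opted in; never crosses the process boundary
a.set(1)
b.set(2)
restored = pickle.loads(snapshot_pickleable())
assert restored == {"a": 1}
```

The point being that the set of things safe to ship to another process
is decided by the code that created each local, not by a blanket policy
on the whole context.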
> Some more comments:
> On Sat, Aug 12, 2017 at 7:35 PM, Nathaniel Smith <n...@pobox.com> wrote:
>> The advantages are:
>> - Eliminates the current PEP's issues with namespace collision; every
>> context item is automatically distinct from all others.
> TBH I think that the collision issue is slightly exaggerated.
>> - Eliminates the need for the None-means-del hack.
> I consider Execution Context to be an API, not a collection. It's an
> important distinction. If you view it that way, deletion on None
> doesn't look that esoteric.
Deletion on None is still a special case that API users need to
remember, and it's a small footgun that you can't just take an
arbitrary Python object and round-trip it through the context.
Obviously these are both APIs and they can do anything that makes
sense, but all else being equal I prefer APIs that have fewer special
cases.
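Here's a toy sketch of the footgun, using illustrative names (this is
not the PEP's actual implementation, just the None-means-delete
semantics in miniature):

```python
class ToyEC:
    """Minimal context with the None-means-delete special case."""

    def __init__(self):
        self._items = {}

    def set(self, key, value):
        if value is None:
            # The special case: storing None actually deletes the key.
            self._items.pop(key, None)
        else:
            self._items[key] = value

    def get(self, key):
        return self._items.get(key)

ec = ToyEC()
ec.set("x", None)
# "x" was never stored, so None can't round-trip through the context,
# and get() can't distinguish "set to None" from "never set".
assert ec.get("x") is None
assert "x" not in ec._items
```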
>> - Lets the interpreter hide the details of garbage collecting context values.
> I'm not sure I understand how the current PEP design is bad from the
> GC standpoint. Or how this proposal can be different, FWIW.
When the ContextItem object becomes unreachable and is collected, then
the interpreter knows that all of the values associated with it in
different contexts are also unreachable and can be collected.
I mentioned this in my email yesterday -- look at the hoops
threading.local jumps through to avoid breaking garbage collection.
This is closely related to the previous point, actually -- AFAICT the
only reason why it *really* matters that None deletes the item is that
you need to be able to delete to free the item from the dictionary,
which only matters if you want to dynamically allocate keys and then
throw them away again. In the ContextItem approach, there's no need to
manually delete the entry, you can just drop your reference to the
ContextItem and let the garbage collector take care of it.
>> - Allows for more implementation flexibility. This could be
>> implemented directly on top of Yury's current prototype. But it could
>> also, for example, be implemented by storing the context values in a
>> flat array, where each context item is assigned an index when it's
>> created.
> You still want to have this optimization only for *some* keys. So I
> think a separate API is still needed.
Wait, why is it a requirement that some keys be slow? That seems like
a weird requirement :-).
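For illustration, the flat-array idea could be sketched like this, with
every item getting a fixed slot at creation time (names and details are
mine, not the PEP's):

```python
import itertools

_next_index = itertools.count()

class ContextItem:
    """Each item grabs a fixed array slot when it's created."""

    def __init__(self):
        self.index = next(_next_index)

class Context:
    def __init__(self):
        self._values = []  # flat array, indexed by ContextItem.index

    def set(self, item, value):
        # Grow the array lazily so unused slots cost nothing up front.
        while len(self._values) <= item.index:
            self._values.append(None)
        self._values[item.index] = value

    def get(self, item):
        if item.index < len(self._values):
            return self._values[item.index]
        return None

a = ContextItem()
b = ContextItem()
ctx = Context()
ctx.set(b, "fast lookup")
assert ctx.get(b) == "fast lookup"
assert ctx.get(a) is None
```

Every lookup is then a plain array index, uniformly fast for all keys,
which is why a separate "fast keys" API seems unnecessary.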
Nathaniel J. Smith -- https://vorpus.org
Python-ideas mailing list
Code of Conduct: http://python.org/psf/codeofconduct/