[Python-Dev] Re: PEP 554 comments

2020-04-28 Thread Jim J. Jewett
Even then, disconnect seems like the primary use case, with a 
channel.kill_for_all being a specialized subclass.  One argument for leaving it 
to a subclass is that it isn't clear what other interpreters should do when 
that happens.  Shut down?  Start getting exceptions if they happen to use it 
again, with no information until then?


[Python-Dev] Re: killing static types (for sub-interpreters?)

2020-04-28 Thread Jim J. Jewett
Ronald Oussoren wrote:
> > On 28 Apr 2020, at 20:38, Jim J. Jewett jimjjew...@gmail.com wrote:
> > Why do sub-interpreters require (separate and) heap-allocated types?
> > It seems types that are statically allocated are a pretty good use for 
> > immortal
> > objects, where you never change the refcount ... and then I don't see why 
> > you need more
> > than one copy.

>  …  One reason is type.__subclasses__(), that returns a list of
> all subclasses and when a type is shared between sub-interpreters the return 
> value might
> refer to objects in another interpreter. That could be fixed by another level 
> of
> indirection I guess.  But extension types could contain other references to 
> Python
> objects, and it is a lot easier to keep track of which subinterpreter those 
> belong to when
> every subinterpreter has its own copy of the type.

So the problem is that even static types often have mutable (containers of) 
references back 
into the heap, and with multiple interpreters, these references would have to 
be made 
per-interpreter?

If I'm not still missing something, then that could get ugly, but doesn't seem 
any worse
than other things sub-interpreters have to multiply.

>  ... “Never changing the refcount” could be expensive

Absolutely!  That has always been the problem in the past.

> in its own right, that adds a branch to every invocation of Py_INCREF and 
> Py_DECREF.  See
> also the benchmark data in https://bugs.python.org/issue40255
> (which contains a patch that disables refcount updates for arbitrary objects).

The updated patch shows that not having to write to the memory (and invalidate
caches, etc.) is enough to make up for the extra branch test.
https://bugs.python.org/msg366605  Obviously, that might not hold up on other
machines, etc., but it is already good enough to be interesting, and there is
room for additional experimentation.


[Python-Dev] Re: Comments on PEP 554 (Multiple Interpreters in the Stdlib)

2020-04-28 Thread Eric Snow
On Wed, Apr 22, 2020 at 7:40 PM Kyle Stanley  wrote:
> If there's not an implementation detail that makes this impractical,
> I'd like to give my +1 on the `Interpreter.run()` method returning
> values. From a usability perspective, it seems incredibly convenient
> to have the ability to call a function in a subinterpreter, and then
> directly get the return value instead of having to send the result
> through a channel (for more simple use cases).

The PEP only proposes the ability to run code (a string treated as a
script to run in the __main__ module) in an interpreter.  See
PyRun_StringFlags() in the C-API.  Passing a function to
Interpreter.run() is out of scope.  So returning anything doesn't make
much sense.
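
For illustration, here is roughly what that looks like with the
experimental low-level module that backs the PEP in CPython (names are
provisional and may change); results travel over a channel rather than
through a return value:

    import _xxsubinterpreters as interpreters

    interp = interpreters.create()
    cid = interpreters.channel_create()

    # run_string() executes the script in the subinterpreter's __main__
    # and returns None.
    script = (
        "import _xxsubinterpreters as interpreters\n"
        f"interpreters.channel_send({int(cid)}, 'hi from the subinterpreter')\n"
    )
    interpreters.run_string(interp, script)

    print(interpreters.channel_recv(cid))
    interpreters.destroy(interp)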

> Also, not that the API for subinterpreters needs to be at all similar
> to asyncio, but it would be consistent with `asyncio.run()` with
> regards to being able to return values. Although one could certainly
> argue that `asyncio.run()` and `Interpreter.run()` will have
> significantly different use cases; with `asyncio.run()` being intended
> as a primary entry point for a program, and `Interpreter.run()` being
> used to execute arbitrary code in a single interpreter.

While somewhat different, this is something we should keep in mind.

-eric


[Python-Dev] Re: Comments on PEP 554 (Multiple Interpreters in the Stdlib)

2020-04-28 Thread Eric Snow
On Tue, Apr 21, 2020 at 10:42 AM Mark Shannon  wrote:
> I'm generally in favour of PEP 554, but I don't think it is ready to be
> accepted in its current form.

Yay(ish)! :)

> My main objection is that without per-subinterpeter GILs (SILs?) PEP 554
> provides no value over threading or multi-processing.
> Multi-processing provides true parallelism and threads provide shared
> memory concurrency.

I disagree. :)  I believe there are merits to the kind of programming
one can do via subinterpreter + channels (i.e. threads with opt-in
sharing).  I would also like to get broader community exposure to the
subinterpreter functionality sooner rather than later.  Getting the
Python API out there now will help folks get ready sooner for the
(later?) switch to per-interpreter GIL.  As Antoine put it, it allows
folks to start experimenting.  I think there is enough value in all
that to warrant landing PEP 554 in 3.9 even if per-interpreter GIL
only happens in 3.10.

> If per-subinterpeter GILs are possible then, and only then,
> sub-interpreters will provide true parallelism and (limited) shared
> memory concurrency.
>
> The problem is that we don't know whether we can implement
> per-subinterpeter GILs without too large a negative performance impact.
> I think we can, but we can't say so for certain.

I think we can as well, but I'd like to hear more about what obstacles
you think we might run into.

> So, IMO, we should not accept PEP 554 until we know for sure that
> per-subinterpeter GILs can be implemented efficiently.
>
>
>
> Detailed critique
> -
>
> I don't see how `list_all()` can be both safe and accurate. The Java
> equivalent makes no guarantees of accuracy.
> Attempting to lock the list is likely to lead to deadlock and not
> locking it will lead to races; potentially dangerous ones.
> I think it would be best to drop this.
>
> `list_all_channels()`. See `list_all()` above.
>
> [out of order] `Channel.interpreters` see `list_all()` and 
> `list_all_channels()` above.

I'm not sure I understand your objection.  If a user calls the
function then they get a list.  If that list becomes outdated in the
next minute or the next millisecond, it does not impact the utility of
having that list.  For example, without that list how would one make
sure all other interpreters have been destroyed?
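
For illustration, with the experimental low-level module (provisional
names), that cleanup check might look like:

    import _xxsubinterpreters as interpreters

    main = interpreters.get_main()
    for interp in interpreters.list_all():
        if interp != main:
            interpreters.destroy(interp)

    # The snapshot is only advisory, but here it confirms the cleanup.
    assert interpreters.list_all() == [main]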

> `.destroy()` is either misleading or unsafe.
> What does this do?
>
>  >>> is.destroy()
>  >>> is.run()
>
> If `run()` raises an exception then the interpreter must exist. Rename
> to `close()` perhaps?

I see what you mean.  "Interpreter" objects are wrappers rather than
the actual interpreters, but that might not stop folks from thinking
otherwise.  I agree that "close" may communicate that nature better.
I guess so would "finalize", which is what the C-API calls it.  Then
again, you can't tell an object to "destroy" itself, can you?  It just
isn't clear what you are destroying (nor why we're so destructive).

So "close" aligns with other similarly purposed methods out there,
while "finalize" aligns with the existing C-API and also elevates the
complex nature of what happens.  If we change the name from "destroy"
then I'd lean toward "finalize".

FWIW, in your example above, the is.run() call would raise a
RuntimeError saying that it couldn't find an interpreter with "that"
ID.

> How does `is_shareable()` work? Are you proposing some mechanism to
> transfer an object from one sub-interpreter to another? How would that
> work?

The PEP purposefully does not prescribe how "is_shareable()" works.
That depends on the implementation for channels, for which there could
be several, and which will likely differ based on the Python
implementation.  Likewise the PEP does not dictate how channels work
(i.e. how objects are "shared").  That is an implementation detail.
We could talk about how we've implemented PEP 554, but that's not
highly relevant to the merits of this proposal (its API in
particular).
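
To make that concrete, the experimental implementation exposes the
check directly (provisional names; the set of shareable types is
deliberately small for now):

    import _xxsubinterpreters as interpreters

    interpreters.is_shareable(b'spam')   # True: bytes are shareable
    interpreters.is_shareable('spam')    # True: so are str, int, and None
    interpreters.is_shareable([1, 2])    # False: lists are not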

> If objects are not shared but serialized, why not use marshal or
> pickle instead of inventing a third serialization protocol?

Again, that's an implementation detail.  The PEP does not specify that
objects are actually shared or not.  In fact, I was careful to say:

This does not necessarily mean that the actual objects will be
shared.  Instead, it means that the objects' underlying data will
be shared in a cross-interpreter way, whether via a proxy, a
copy, or some other means.

> It would be clearer if channels only dealt with simple, contiguous
> binary data. As it stands the PEP doesn't state what form the received
> object will take.

You're right.  The PEP is not clear enough about what object an
interpreter will receive for a given sent object.  The intent is that
it will be the same type with the same data.  This might not always be
possible, so there may be cases where we allow for a compatible proxy.
Either way, I'll clarify this point in the PEP.

> Once channels supporting the 

[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-28 Thread Nathaniel Smith
On Mon, Apr 20, 2020 at 6:21 PM Eric Snow  wrote:
>
> Nathaniel,
>
> Your tone and approach to this conversation concern me.  I appreciate
> that you have strong feelings here and readily recognize I have my own
> biases, but it's becoming increasingly hard to draw any constructive
insight from what tend to be very long posts from you.  It ends up
> being a large commitment of time for small gains.  And honestly, it's
> also becoming hard to not counter some of your more elaborate
> statements with my own unhelpful prose.  In the interest of making
> things better, please take it all down a notch or two.

I'm sorry it's landing that way on you. I am frustrated, and I think
that's a reasonable reaction. But I know we're all here because we
want to make Python better. So let me try again to explain my
position, to maybe reboot the conversation in a more productive way.

All engineering decisions come down to costs vs. benefits. My
frustration is about how you're approaching the costs, and how you're
approaching the benefits.

**Costs**

I think you've been downplaying the impact of subinterpreter support
on the existing extension ecosystem. All features have a cost, which
is why PEPs always require substantial rationales and undergo intense
scrutiny. But subinterpreters are especially expensive. Most features
only affect a small group of modules (e.g. async/await affected
twisted and tornado, but 99% of existing libraries didn't care); OTOH
subinterpreters require updates to every C extension module. And if we
start telling users that subinterpreters are a supported way to run
arbitrary Python code, then we've effectively limited extension
authors' options to "update to support subinterpreters" or "explain to
users why they aren't writing a proper Python module", which is an
intense amount of pressure; for most features maintainers have the
option of saying "well, that isn't relevant to me", but with
subinterpreter support that option's been removed. (You object to my
calling this an API break, but you're literally saying that old code
that worked fine is being redefined to be incorrect, and that all
maintainers need to learn new techniques. That's the definition of an
API break!) And until everything is updated, you're creating a schism
in the ecosystem, between modules that support subinterpreters and
those that don't.

I did just read your reply to Sebastian, and it sounds like you're
starting to appreciate this impact more, which I'm glad to see.

None of this means that subinterpreters are necessarily a bad idea.
For example, the Python 2 -> Python 3 transition was very similar, in
terms of maintainers being forced to go along and creating a temporary
schism in the ecosystem, and that was justified by the deep, unfixable
problems with Python 2. But it does mean that subinterpreters need an
even stronger rationale than most features.

And IMO, the point where PEP 554 is accepted and we start adding new
public APIs for subinterpreters is the point where most of these costs
kick in, because that's when we start sending the message that this is
a real thing and start forcing third-party maintainers to update their
code. So that's when we need the rationale.

**Benefits**

In talks and informal conversations, you paint a beautiful picture of
all the wonderful things subinterpreters will do. Lots of people are
excited by these wonderful things. I tried really hard to be excited
too. (In fact I spent a few weeks trying to work out a
subinterpreter-style proposal myself way back before you started
working on this!) But the problem is, whenever I look more closely at
the exciting benefits, I end up convincing myself that they're a
mirage, and either they don't work at all (e.g. quickly sharing
arbitrary objects between interpreters), or else end up being
effectively a more complex, fragile version of things that already
exist.

I've been in lots of groups before where everyone (including me!) got
excited about a cool plan, focused exclusively on the positives, and
ignored critical flaws until it was too late. See also: "groupthink",
"confirmation bias", etc. The whole subinterpreter discussion feels
very familiar that way. I'm worried that that's what's happening.

Now, I might be right, or I might be wrong, I dunno; subinterpreters
are a complex topic. Generally the way we sort these things out is to
write down the arguments for and against and figure out the technical
merits. That's one of the purposes of writing a PEP. But: you've been
*systematically refusing to do this.* Every time I've raised a concern
about one rationale, then instead of discussing the technical
substance of my concern, you switch to a different rationale, or say
"oh well, that rationale isn't the important one right now". And the
actual text in PEP 554 is *super* vague, like it's so vague it's kind
of an insult to the PEP process.

From your responses in this thread, I think your core position now is
that the rationale is irrelevant, 

[Python-Dev] Re: PEP 554 comments

2020-04-28 Thread Eric Snow
On Tue, Apr 21, 2020 at 11:21 PM Greg Ewing  wrote:
> What I'm suggesting is that close() should do what the
> PEP defines release() as doing, and release() shouldn't
> exist.
>
> I don't see why an interpreter needs the ability to close
> a channel for any *other* interpreter. There is no such
> ability for files and pipes.

Ah, thanks for clarifying.  One of the main inspirations for the
proposed channels is CSP (and somewhat relatedly, my in-depth
experience with Go).  Channels are more than just a thread-safe data
transport between interpreters.  They also provide relatively
straightforward mechanisms for managing cooperation in a group of
interpreters.  Having a distinct "close()" vs. "release()" is part of
that.  Furthermore, IMHO "release" is better at communicating the
per-interpreter nature than "close".  "release()" doesn't close the
channel.  It communicates that that particular interpreter is done
using that end of the channel.
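
As a sketch of the proposed semantics (draft API, names provisional):

    import _xxsubinterpreters as interpreters

    cid = interpreters.channel_create()

    # release(): only the calling interpreter is done with its end(s);
    # other interpreters can keep using the channel.
    interpreters.channel_release(cid)

    # close(): would end the channel for *every* interpreter.
    # interpreters.channel_close(cid)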

I appreciate that you brought up comparisons with other objects and
data types.  I'm a fan of adapting existing APIs and patterns,
especially from proven sources.  That said, the comparison with files
would be more complete if channels were persistent.  With pipes the
main difference is how many actors are involved.  Pipes involve one
sender and one receiver, right?  FWIW, I also looked at other data
types.  Queues are the closest thing to the proposed channels, and I
almost called them that, but there are a few subtle differences from
queue.Queue and I didn't want folks inadvertently confusing the two.

-eric


[Python-Dev] Re: PEP 554 comments

2020-04-28 Thread Eric Snow
On Wed, Apr 22, 2020 at 2:13 AM Kyle Stanley  wrote:
> If you'd like an example format for marking a section of the docs as
> provisional w/ reST, something like this at the top should suffice
> (with perhaps something more specific to the subinterpreters module):
>
>
> .. note::
> This section of the documentation and all of its members have been
> added *provisionally*. For more details, see :term:`provisional api`.
>
>
> :term:`provisional api` generates a link to
> https://docs.python.org/3/glossary.html#term-provisional-api.

Thanks!

-eric


[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-28 Thread Eric Snow
On Sun, Apr 26, 2020 at 2:21 PM Carl Shapiro  wrote:
> While this PEP may not create a maintenance burden for CPython, it
> does have the effect of raising the complexity bar for an alternative
> Python implementation.

FWIW, I did reach out to the various major Python implementations about
this and got a favorable response.  See:
https://www.python.org/dev/peps/pep-0554/#alternate-python-implementations

> A thought that may have already been mentioned elsewhere: perhaps the
> PEP could be made more acceptable by de-scoping it to expose a
> minimal set of C-API hooks to enable third-party libraries for the
> sub-interpreter feature rather than providing that feature in the
> standard library?

Note that the low-level implementation of PEP 554 is an extension
module that only uses the public C-API (no "internal" C-API).  So all
the hooks are already there.  Or did I misunderstand?

-eric


[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-28 Thread Eric Snow
On Wed, Apr 22, 2020 at 2:43 AM Ronald Oussoren  wrote:
> My mail left out some important information, sorry about that.

No worries. :)

> PyObjC is a two-way bridge between Python and Objective-C. One half of this 
> is that is bridging Objective-C classes (and instances) to Python. This is 
> fairly straightforward, although the proxy objects are not static and can 
> have methods defined in Python (helper methods that make the Objective-C 
> classes nicer to use from Python, for example to define methods that make it 
> possible to use an NSDictionary as if it were a regular Python dict).

Cool.  (also fairly straightforward!)

> The other half is that it is possible to implement Objective-C classes in 
> Python:
>
>class MyClass (Cocoa.NSObject):
>def anAction_(self, sender): …
>
> This defines a Python classes named “MyClass”, but also an Objective-C class 
> of the same name that forwards Objective-C calls to Python.

Even cooler! :)

>  The implementation for this uses PyGILState_Ensure, which AFAIK is not yet 
> useable with sub-interpreters.

That is correct.  It is one of the few major subinterpreter
bugs/"bugs" remaining to be addressed in the CPython code.  IIRC,
there were several proposed solutions (between 2 BPO issues) that
would fix it but we got distracted before the matter was settled.

> PyObjC also has Objective-C proxy classes for generic Python objects, making 
> it possible to pass a normal Python dictionary to an Objective-C API that 
> expects an NSDictionary instance.

Also super cool.  How similar is this to Jython and IronPython?

> Things get interesting when combining the two with sub-interpreters: With the 
> current implementation the Objective-C world would be a channel for passing 
> “live” Python objects between sub-interpreters.

+1

> The translation tables for looking up existing proxies (mapping from Python 
> to Objective-C and vice versa) are currently singletons.
>
> This is probably fixable with another level of administration, by keeping 
> track of the sub-interpreter that owns a Python object I could ensure that 
> Python objects owned by a different sub-interpreter are proxied like any 
> other Objective-C object which would close this loophole.  That would require 
> significant changes to a code base that’s already fairly complex, but should 
> be fairly straightforward.

Do you think there are any additions we could make to the C-API (more
than have been done recently, e.g. PEP 573) that would make this
easier?  From what I understand, this pattern of a cache/table of
global Python objects is a relatively common one.  So anything we can
do to help transition these to per-interpreter would be broadly
beneficial.  Ideally it would be done in the least intrusive way
possible, reducing churn and touch points.  (e.g. a macro to convert
existing tables, etc. + an init func to call during module init.)

Also, FWIW, I've been thinking about possible approaches where the
first/main interpreter uses the existing static types, etc. and
further subinterpreters use a heap type (etc.) derived mostly
automatically from the static one.  It's been on my mind because this
is one of the last major hurdles to clear in the CPython code before
we can make the GIL per-interpreter.

> > What additional API would be needed?
>
> See above, the main problem is PyGILState_Ensure.  I haven’t spent a lot of 
> time thinking about this though, I might find other issues when I try to 
> support sub-interpreters.

Any feedback on this would be useful.

> >> As far as I understand proper support for subinterpreters also requires 
> >> moving
> >> away from static type definitions to avoid sharing objects between 
> >> interpreters
> >> (that is, use the PyType_FromSpec to build types).
> >
> > Correct, though that is not technically a problem until we stop sharing the 
> > GIL.
>
> Right. But a major selling point of sub-interpreters is that this provide a 
> way forward towards having multiple Python threads that don’t share a GIL.
>
> IMHO it would be better to first work out what’s needed to get there, and in 
> particular what changes are needed in extensions. Otherwise extensions may 
> have to be changed multiple times.

Yeah, I see what you're saying.  It's been a hard balance to strike.
There are really 2 things that have to be done: move all global state
to per-interpreter and deal with static types (etc.).  Do you think
both will require significant work in the community?  My instinct,
partly informed by my work in CPython along these lines, is that the
former is more work and sometimes trickier.  The latter is fairly
straightforward and much more of an opportunity for automatic
approaches.

> >> At first glance this API does not support everything I do in PyObjC (fun 
> >> with metaclasses, in C code).
> >
> > What specific additions/changes would you need?
>
> At least:
>
> - A variant of PyGILState_Ensure that supports sub-interpreters
> - Defining subclasses of 

[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-28 Thread Eric Snow
On Tue, Apr 21, 2020 at 11:17 AM Sebastian Berg wrote:
> Maybe one of the frustrating points about this criticism is that it
> does not belong in this PEP. And that is actually true! I
> wholeheartedly agree that it doesn't really belong in this PEP itself.
>
> *But* the existence of a document detailing the "state and vision for
> subinterpreters" that includes these points is probably a prerequisite
> for this PEP. And this document must be linked prominently from the
> PEP.
>
> So the suggestion should maybe not be to discuss it in the PEP, but to
> to write it either in the documentation on subinterpreters or as an
> informational PEP. Maybe such document already exists, but then it is
> not linked prominently enough probably.

That is an excellent point.  It would definitely help to have more
clarity about the feature (subinterpreters).  I'll look into what
makes the most sense.  I'm sure Victor has already effectively
written something like this. :)

-eric


[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-28 Thread Eric Snow
On Tue, Apr 21, 2020 at 10:24 AM Sebastian Berg wrote:
> On Tue, 2020-04-21 at 16:21 +0200, Victor Stinner wrote:
> > I fail to follow your logic. When the asyncio PEP was approved, I
> > don't recall that suddenly the whole Python community started to
> > rewrite all projects to use coroutines everywhere. I tried hard to
> > replace eventlet with asyncio in OpenStack and I failed because such
> > migration was a very large project with dubious benefits (people
> > impacted by eventlet issues were the minority).
>
> Sure, but this is very different. You can still use NumPy in a project
> using asyncio. You are _not_ able to use NumPy in a project using
> subinterpreters.

True.  Is that a short-term problem?  I don't know.  A long-term
problem?  Definitely.  So it will have to be addressed at some point.

The biggest concern here is what is the resulting burden on extension
authors and what can we do to help mitigate that.  The first step is
to understand what that burden might entail.

> Right now, I have to say as soon as the first bug report asking for
> this is opened and tells me: But see PEP 554 you should support it! I
> would be tempted to put on the NumPy Roadmap/Vision that no current
> core dev will put serious efforts into subinterpreters. Someone is
> bound to be mad.

Yeah.  And I don't want to put folks in the position that they get
fussed at for something like this.  This isn't a ploy to force
projects like numpy to fix their subinterpreter support.

My (honest) question is, how many folks using subinterpreters are
going to want to use numpy (or module X) enough to get mad about it
before the extension supports subinterpreters?  What will user
expectations be when it comes to subinterpreters?

We will make the docs as clear as we can, but there are plenty of
users out there that will not pay enough attention to know that most
extension modules will not support subinterpreters at first.  Is there
anything we can do to mitigate this impact?  How much would it help if
the ImportError for incompatible modules give a clear (though
lengthier) explanation of the situation?

> Basically, if someone wants it in NumPy, I personally may expect them
> to be prepared to invest a year worth of good dev time [1]. Maybe that
> is pessimistic, but your guess is as good as mine. At normal dev-pace
> it will be at least a few years of incremental changes before NumPy
> might be ready (how long did it take Python?)?
>
> The PEP links to NumPy bugs, I am not sure that we ever fixed a single
> one. Even if, the remaining ones are much larger and deeper. As of now,
> the NumPy public API has to be changed to even start supporting
> subinterpreters as far as I aware [2]. This is because right now we
> sometimes need to grab the GIL (raise errors) in functions that are not
> passed GIL state.

What do you expect to have to change?  It might not be as bad as you
think...or I suppose it could be. :)

Keep in mind that subinterpreter support means making sure all of the
module's global state is per-interpreter.  I'm hearing about things
like passing around GIL state and using the limited C-API.  None of
that should be a factor.

> This all is not to say that this PEP itself doesn't seem harmless. But
> the _expectation_ that subinterpreters should be first class citizens
> will be a real and severe transition burden. And if it does not, the
> current text of the PEP gives me, as someone naive about
> subinterpreters, very few reasons why I should put in that effort or
> reasons to make me believe that it actually is not as bad a transition
> as it seems.

Yeah, the PEP is very light on useful information for extension module
maintainers.  What information do you think would be most helpful?

> Right now, I would simply refuse to spend time on it. But as Nathaniel
> said, it may be worse if I did not refuse and in the end only a handful
> of users get anything out of my work: The time is much better spend
> elsewhere. And you, i.e. CPython will spend your "please fix your C-
> extension" chips on subinterpreters. Maybe that is the only thing on
> the agenda, but if it is not, it could push other things away.

Good point.

> Reading the PEP, it is fuzzy on the promises (the most concrete I
> remember is that it may be good for security relevant reasons), which
> is fine, because the goal is "experimentation" more than use?

The PEP is definitely lacking clear details on how folks might use
subinterpreters (via the proposed module).  There are a variety of
reasons.

I originally wrote the PEP mostly as "let's expose existing
functionality more broadly", with the goal of getting it into folks'
hands sooner rather than later.  My focus was mostly on the API.  I
didn't see a strong need to convince anyone that the feature itself
was worth it (since it already existed).  In many ways the PEP is a
side effect of my efforts to achieve a good multi-core Python story
(via a per-interpreter GIL).  All the relevant parties in that 

[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-28 Thread Eric Snow
On Tue, Apr 21, 2020 at 11:31 PM Greg Ewing  wrote:
> To put it another way, the moment you start using subinterpreters,
> the set of extension modules you are able to use will shrink
> *enormously*.

Very true but we have to start somewhere.

> And if I understand correctly, you won't get any nice "This
> module does not support subinterpreters" exception if you
> import an incompatible module -- just an obscure crash,
> probably of the core-dumping variety.

As Petr noted, we can use PEP 489 (Multi-phase Extension Module
Initialization) support as an indicator and raise ImportError for any
other extension modules (when in a subinterpreter).  That seems like a
reasonable way to avoid the hard-to-debug failures that would result
otherwise.

The only question I have is if it makes sense to offer a way to
disable such a check (e.g. a flag when creating a subinterpreter).  I
think so, since then extension authors could more easily test their
extension under subinterpreters without having to release a separate
build that has PEP 489 support.
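
A sketch of the proposed behavior (the module name here is hypothetical
and the API names are provisional):

    import _xxsubinterpreters as interpreters

    interp = interpreters.create()
    try:
        # A single-phase-init extension would fail to import in the
        # subinterpreter with an ImportError instead of crashing later.
        interpreters.run_string(interp, "import some_legacy_extension")
    except interpreters.RunFailedError as exc:
        print("import refused:", exc)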

-eric


[Python-Dev] Re: Adding a "call_once" decorator to functools

2020-04-28 Thread Victor Stinner
Oh, I didn't know this Python 3.8 new feature
(@functools.cached_property). It does exactly what I needed, cool!
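
For reference, the dummy example below rewritten with it; the cached
value lands in the instance __dict__, so later lookups bypass the
descriptor entirely, which also covers the "replace the property with
an attribute" wish:

    import functools

    class X:
        def __init__(self, name):
            self.name = name

        @functools.cached_property
        def upper(self):
            print("compute once")
            return self.name.upper()

    obj = X("victor")
    print(obj.upper)  # computes and stores the value
    print(obj.upper)  # plain attribute lookup now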

Victor

On Tue, Apr 28, 2020 at 21:18, Brett Cannon wrote:
>
> Victor Stinner wrote:
> > Hi,
> > A pattern that I used multiple times is to compute an object attribute
> > only once and cache the result into the object. Dummy example:
>
> How is that different from 
> https://docs.python.org/3/library/functools.html?highlight=cached_property#functools.cached_property?
>
> -Brett
>
> > class X:
> >     def __init__(self, name):
> >         self.name = name
> >         self._cached_upper = None
> >     def _get(self):
> >         if self._cached_upper is None:
> >             print("compute once")
> >             self._cached_upper = self.name.upper()
> >         return self._cached_upper
> >     upper = property(_get)
> >
> > obj = X("victor")
> > print(obj.upper)
> > print(obj.upper)   # use cached value
> > It would be interesting to be able to replace obj.upper property with
> > an attribute (to reduce the performance overhead of calling _get()
> > method), but "obj.upper = value" raises an error since the property
> > prevents to set the attribute.
> > I understood that the proposed @called_once would store the cached
> > value into the function namespace.
> > Victor
> > On Mon, Apr 27, 2020 at 23:44, t...@tomforb.es wrote:
> > >
> > > Hello,
> > > After a great discussion in python-ideas[1][2] it was suggested that I 
> > > cross-post this
> > > proposal to python-dev to gather more comments from those who don't follow
> > > python-ideas.
> > > The proposal is to add a "call_once" decorator to the functools module 
> > > that, as the
> > > name suggests, calls a wrapped function once, caching the result and 
> > > returning it with
> > > subsequent invocations. The rationale behind this proposal is that:
> > >
> > > Developers are using "lru_cache" to achieve this right now, which is less 
> > > efficient
> > > than it could be
> > > Special casing "lru_cache" to account for zero arity methods isn't 
> > > trivial and we
> > > shouldn't endorse lru_cache as a way of achieving "call_once" semantics
> > > Implementing a thread-safe (or even non-thread safe) "call_once" method is
> > > non-trivial
> > > It complements the lru_cache and cached_property methods currently 
> > > present in
> > > functools.
> > >
> > > The specifics of the method would be:
> > >
> > > The wrapped method is guaranteed to only be called once when called for 
> > > the first time
> > > by concurrent threads
> > > Only functions with no arguments can be wrapped, otherwise an exception is
> > > thrown
> > > There is a C implementation to keep speed parity with lru_cache
> > >
> > > I've included a naive implementation below (that doesn't meet any of the 
> > > specifics
> > > listed above) to illustrate the general idea of the proposal:
> > > def call_once(func):
> > >     sentinel = object()  # in case the wrapped method returns None
> > >     obj = sentinel
> > >     @functools.wraps(func)
> > >     def inner():
> > >         nonlocal obj, sentinel
> > >         if obj is sentinel:
> > >             obj = func()
> > >         return obj
> > >     return inner
> > >
> > > I'd welcome any feedback on this proposal, and if the response is 
> > > favourable I'd love
> > > to attempt to implement it.
> > >
> > > https://mail.python.org/archives/list/python-id...@python.org/thread/5OR3LJO...
> > > https://discuss.python.org/t/reduce-the-overhead-of-functools-lru-cache-for-...
> > >
> > > --
> > Night gathers, and now my watch begins. It shall not end until my death.



-- 
Night gathers, and now my watch begins. It shall not end until my death.


[Python-Dev] Re: Adding a "call_once" decorator to functools

2020-04-28 Thread Raymond Hettinger

>  t...@tomforb.es wrote:
> 
> I would like to suggest adding a simple “once” method to functools. As the 
> name suggests, this would be a decorator that would call the decorated 
> function, cache the result and return it with subsequent calls.

It seems like you would get just about everything you want with one line:

call_once = lru_cache(maxsize=None)

which would be used like this:

    @call_once
    def welcome():
        len('hello')

> Using lru_cache like this works but it’s not as efficient as it could be - in 
> every case you’re adding lru_cache overhead despite not requiring it.


You're likely imagining more overhead than there actually is.  Used as shown 
above, the lru_cache() is astonishingly small and efficient.  Access time is 
slightly cheaper than writing d[()]  where d={(): some_constant}. The 
infinite_lru_cache_wrapper() just makes a single dict lookup and returns the 
value.¹ The lru_cache_make_key() function just increments the refcount on the
empty args tuple and returns it.²  And because it is a C object, calling it
will be faster than a Python function that just returns a constant,
"lambda: some_constant()".  This is very, very fast.
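
A quick demonstration of the one-liner in practice:

    from functools import lru_cache

    call_once = lru_cache(maxsize=None)

    @call_once
    def expensive_setup():
        print("running once")
        return object()

    a = expensive_setup()   # prints "running once"
    b = expensive_setup()   # served from the cache, no print
    assert a is b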


Raymond


¹ https://github.com/python/cpython/blob/master/Modules/_functoolsmodule.c#L870
² https://github.com/python/cpython/blob/master/Modules/_functoolsmodule.c#L809






> 
> Hello,
> After a great discussion in python-ideas[1][2] it was suggested that I 
> cross-post this proposal to python-dev to gather more comments from those who 
> don't follow python-ideas.
> 
> The proposal is to add a "call_once" decorator to the functools module that, 
> as the name suggests, calls a wrapped function once, caching the result and 
> returning it with subsequent invocations. The rationale behind this proposal 
> is that:
> 1. Developers are using "lru_cache" to achieve this right now, which is less 
> efficient than it could be
> 2. Special casing "lru_cache" to account for zero arity methods isn't trivial 
> and we shouldn't endorse lru_cache as a way of achieving "call_once" 
> semantics 
> 3. Implementing a thread-safe (or even non-thread safe) "call_once" method is 
> non-trivial
> 4. It complements the lru_cache and cached_property methods currently present 
> in functools.
> 
> The specifics of the method would be:
> 1. The wrapped method is guaranteed to only be called once when called for 
> the first time by concurrent threads
> 2. Only functions with no arguments can be wrapped, otherwise an exception is 
> thrown
> 3. There is a C implementation to keep speed parity with lru_cache
> 
> I've included a naive implementation below (that doesn't meet any of the 
> specifics listed above) to illustrate the general idea of the proposal:
> 
> ```
> def call_once(func):
>     sentinel = object()  # in case the wrapped method returns None
>     obj = sentinel
>     @functools.wraps(func)
>     def inner():
>         nonlocal obj, sentinel
>         if obj is sentinel:
>             obj = func()
>         return obj
>     return inner
> ```
> 
> I'd welcome any feedback on this proposal, and if the response is favourable 
> I'd love to attempt to implement it.
> 
> 1. 
> https://mail.python.org/archives/list/python-id...@python.org/thread/5OR3LJO7LOL6SC4OOGKFIVNNH4KADBPG/#5OR3LJO7LOL6SC4OOGKFIVNNH4KADBPG
> 2. 
> https://discuss.python.org/t/reduce-the-overhead-of-functools-lru-cache-for-functions-with-no-parameters/3956
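
On point 3 of the quoted proposal (thread-safety): a lock-based sketch
of one way to get there, relying on CPython's GIL for the unlocked
fast-path read.  Illustrative only, not a proposed implementation:

    import functools
    import threading

    def call_once(func):
        sentinel = object()  # distinguishes "not computed yet" from None
        result = sentinel
        lock = threading.Lock()

        @functools.wraps(func)
        def inner():
            nonlocal result
            if result is sentinel:            # fast path once computed
                with lock:
                    if result is sentinel:    # re-check under the lock
                        result = func()
            return result

        return inner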


[Python-Dev] Re: killing static types (for sub-interpreters?)

2020-04-28 Thread Steve Dower

On 28Apr2020 2006, Steve Dower wrote:
> (For those who aren't following it, there's a discussion with a patch
> and benchmarks going on at https://bugs.python.org/issue40255 about
> making objects individually immortal. It's more focused around
> copy-on-write, rather than subinterpreters, but the benefits apply to
> both.)


More precisely, the benefits are different, but the implementation 
provides each to each scenario.


I also want to draw attention to one specific post 
https://bugs.python.org/issue40255#msg366577 where some additional 
changes (making more objects immortal) brought the benchmarks well back 
within error margins, after initial checks found more than 10% 
regression on about 1/4 of the performance suite.


Cheers,
Steve


[Python-Dev] Re: Adding a "call_once" decorator to functools

2020-04-28 Thread Brett Cannon
Victor Stinner wrote:
> Hi,
> A pattern that I used multiple times is to compute an object attribute
> only once and cache the result into the object. Dummy example:

How is that different from 
https://docs.python.org/3/library/functools.html?highlight=cached_property#functools.cached_property?

-Brett

> class X:
>     def __init__(self, name):
>         self.name = name
>         self._cached_upper = None
>     def _get(self):
>         if self._cached_upper is None:
>             print("compute once")
>             self._cached_upper = self.name.upper()
>         return self._cached_upper
>     upper = property(_get)
> 
> obj = X("victor")
> print(obj.upper)
> print(obj.upper)   # use cached value
> It would be interesting to be able to replace obj.upper property with
> an attribute (to reduce the performance overhead of calling _get()
> method), but "obj.upper = value" raises an error since the property
> prevents to set the attribute.
> I understood that the proposed @called_once would store the cached
> value into the function namespace.
> Victor
> On Mon, Apr 27, 2020 at 23:44, t...@tomforb.es wrote:
> >
> > Hello,
> > After a great discussion in python-ideas[1][2] it was suggested that I 
> > cross-post this
> > proposal to python-dev to gather more comments from those who don't follow
> > python-ideas.
> > The proposal is to add a "call_once" decorator to the functools module 
> > that, as the
> > name suggests, calls a wrapped function once, caching the result and 
> > returning it with
> > subsequent invocations. The rationale behind this proposal is that:
> > 
> > Developers are using "lru_cache" to achieve this right now, which is less 
> > efficient
> > than it could be
> > Special casing "lru_cache" to account for zero arity methods isn't trivial 
> > and we
> > shouldn't endorse lru_cache as a way of achieving "call_once" semantics
> > Implementing a thread-safe (or even non-thread safe) "call_once" method is
> > non-trivial
> > It complements the lru_cache and cached_property methods currently present 
> > in
> > functools.
> > 
> > The specifics of the method would be:
> > 
> > The wrapped method is guaranteed to only be called once when called for the 
> > first time
> > by concurrent threads
> > Only functions with no arguments can be wrapped, otherwise an exception is
> > thrown
> > There is a C implementation to keep speed parity with lru_cache
> > 
> > I've included a naive implementation below (that doesn't meet any of the 
> > specifics
> > listed above) to illustrate the general idea of the proposal:
> > def call_once(func):
> >     sentinel = object()  # in case the wrapped method returns None
> >     obj = sentinel
> >     @functools.wraps(func)
> >     def inner():
> >         nonlocal obj, sentinel
> >         if obj is sentinel:
> >             obj = func()
> >         return obj
> >     return inner
> > 
> > I'd welcome any feedback on this proposal, and if the response is 
> > favourable I'd love
> > to attempt to implement it.
> > 
> > https://mail.python.org/archives/list/python-id...@python.org/thread/5OR3LJO...
> > https://discuss.python.org/t/reduce-the-overhead-of-functools-lru-cache-for-...
> > 
> > --
> Night gathers, and now my watch begins. It shall not end until my death.


[Python-Dev] Re: killing static types (for sub-interpreters?)

2020-04-28 Thread Steve Dower
If the object is going to live until the "end of time" 
(process/runtime/whatever) then there'll never be a need to deallocate 
it, and so there's no point counting how many references exist (and 
ditto for anything that it references).


Currently, statically allocated types include references to 
heap-allocated objects, and since different interpreters may use 
different heaps (via different allocators), this means they can't share 
the static types either. These references are for freelists, weak 
references, and some others that I forget but apparently make it 
unfixable. Those with a __dict__ object also need to be per-interpreter.


If statically allocated types were truly constant, that would be great! 
Then they could be freely reused. The same applies for many of our 
built-in non-container types too, in my opinion (and my goal would be to 
make code objects fully shareable, so you don't have to recompile/reload 
them for each new interpreter).


(For those who aren't following it, there's a discussion with a patch 
and benchmarks going on at https://bugs.python.org/issue40255 about 
making objects individually immortal. It's more focused around 
copy-on-write, rather than subinterpreters, but the benefits apply to both.)


Cheers,
Steve

On 28Apr2020 1949, Paul Ganssle wrote:

> I don't know the answer to this, but what are some examples of objects
> where you never change the refcount? Are these Python objects? If so,
> wouldn't doing something like adding the object to a list necessarily
> change its refcount, since the list implementation only knows, "I have a
> reference to this object, I must increase the reference count", and it
> doesn't know that the object doesn't need its reference count changed?
>
> Best,
> Paul
>
> On 4/28/20 2:38 PM, Jim J. Jewett wrote:
>
>> Why do sub-interpreters require (separate and) heap-allocated types?
>>
>> It seems types that are statically allocated are a pretty good use for
>> immortal objects, where you never change the refcount ... and then I
>> don't see why you need more than one copy.



[Python-Dev] Re: killing static types (for sub-interpreters?)

2020-04-28 Thread Ronald Oussoren via Python-Dev

> On 28 Apr 2020, at 20:38, Jim J. Jewett  wrote:
> 
> Why do sub-interpreters require (separate and) heap-allocated types?  
> 
> It seems types that are statically allocated are a pretty good use for 
> immortal objects, where you never change the refcount ... and then I don't 
> see why you need more than one copy.

I guess it depends…  One reason is type.__subclasses__(), that returns a list 
of all subclasses and when a type is shared between sub-interpreters the return 
value might refer to objects in another interpreter. That could be fixed by 
another level of indirection I guess.  But extension types could contain other 
references to Python objects, and it is a lot easier to keep track of which 
subinterpreter those belong to when every subinterpreter has its own copy of 
the type.  
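
For reference, the API in question is built from references stored on
the type object itself, so a type shared across interpreters would hand
out objects owned by other interpreters:

    class Base:
        pass

    class Sub(Base):
        pass

    print(Base.__subclasses__())  # [<class 'Sub'>]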

If subinterpreters get their own GIL maintaining the refcount is another reason 
for not sharing types between subinterpreters.  “Never changing the refcount” 
could be expensive in its own right, that adds a branch to every invocation of 
Py_INCREF and Py_DECREF.  See also the benchmark data in
https://bugs.python.org/issue40255 (which contains a patch that disables
refcount updates for arbitrary objects).

Ronald
—

Twitter / micro.blog: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/



[Python-Dev] Re: killing static types (for sub-interpreters?)

2020-04-28 Thread Paul Ganssle
I don't know the answer to this, but what are some examples of objects
where you never change the refcount? Are these Python objects? If so,
wouldn't doing something like adding the object to a list necessarily
change its refcount, since the list implementation only knows, "I have a
reference to this object, I must increase the reference count", and it
doesn't know that the object doesn't need its reference count changed?

Best,
Paul

On 4/28/20 2:38 PM, Jim J. Jewett wrote:
> Why do sub-interpreters require (separate and) heap-allocated types?  
>
> It seems types that are statically allocated are a pretty good use for 
> immortal objects, where you never change the refcount ... and then I don't 
> see why you need more than one copy.





[Python-Dev] Re: Announcement: pip 20.1b1 beta release

2020-04-28 Thread Sumana Harihareswara
Thanks for the testing, all. Pip 20.1 is now out and 
https://pip.pypa.io/en/latest/news/ has the changes since the beta.

--
Sumana Harihareswara
Changeset Consulting
https://changeset.nyc


[Python-Dev] killing static types (for sub-interpreters?)

2020-04-28 Thread Jim J. Jewett
Why do sub-interpreters require (separate and) heap-allocated types?  

It seems types that are statically allocated are a pretty good use for immortal 
objects, where you never change the refcount ... and then I don't see why you 
need more than one copy.


[Python-Dev] Re: Improvement to SimpleNamespace

2020-04-28 Thread Mike Miller


On 2020-04-16 04:33, Rob Cliffe via Python-Dev wrote:
> Here's another revolutionary thought:  add a new operator and associated
> dunder method (to object?) whose meaning is *undefined*.  Its default
> implementation would do *nothing* useful (raise an error? return None?).
>
> E.g. suppose the operator were `..`
> Then in a specific class you could implement x..y to mean x['y']
> and then you could write
>     obj..abc..def..ghi
> Still fairly concise, but warns that what is happening is not normal
> attribute lookup.


Interesting, I've thought the same thing.  Double dot might be a good option.

In practice however I've not encountered key names in JSON that conflict with 
the dictionary methods.  A missing protocol could handle clashes when they 
happen, as applied to keys.  Keys that conflict are simply shadowed by the 
method names unless you use [''] notation.
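
A sketch of that shadowing with a hypothetical helper (not a proposal):
attribute access falls back to item lookup, so real dict method names
always win:

    class AttrDict(dict):
        def __getattr__(self, name):
            try:
                return self[name]
            except KeyError:
                raise AttributeError(name) from None

    d = AttrDict({"abc": 1, "keys": 2})
    print(d.abc)      # 1 -- attribute-style item access
    print(d.keys)     # the bound dict method; the "keys" key is shadowed
    print(d["keys"])  # 2 -- [''] notation still reaches the value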


I know, that answer is not satisfying to the purist.  Double dot is better in
that regard.  Yet I haven't found it to be a concrete problem.


Perhaps linters could find code using uncalled dict method names as a
mitigation.  I suppose it boils down to a judgement call in the end.


-Mike


[Python-Dev] [RELEASE] Python 3.9.0a6 is now available for testing

2020-04-28 Thread Łukasz Langa
On behalf of the entire Python development community, and the currently serving 
Python release team in particular, I’m pleased to announce the release of 
Python 3.9.0a6. Get it here:

https://www.python.org/downloads/release/python-390a6/ 

This is an early developer preview of Python 3.9.

Python 3.9 is still in development. This release, 3.9.0a6, is the last out of 
six planned alpha releases. Alpha releases are intended to make it easier to 
test the current state of new features and bug fixes and to test the release 
process. During the alpha phase, features may be added up until the start of 
the beta phase (2020-05-18) and, if necessary, may be modified or deleted up 
until the release candidate phase (2020-08-10). Please keep in mind that this 
is a preview release and its use is not recommended for production environments.

Major new features of the 3.9 series, compared to 3.8

Many new features for Python 3.9 are still being planned and written. Among the
major new features and changes so far:

PEP 584, Union Operators in dict (illustrated below)
PEP 585, Type Hinting Generics In Standard Collections
PEP 593, Flexible function and variable annotations
PEP 602, Python adopts a stable annual release cadence
PEP 616, String methods to remove prefixes and suffixes (illustrated below)
PEP 617, New PEG parser for CPython
BPO 38379, garbage collection does not block on resurrected objects
BPO 38692, os.pidfd_open added that allows process management without races 
and signals
BPO 39926, Unicode support updated to version 13.0.0
BPO 1635741, when Python is initialized multiple times in the same process, it 
does not leak memory anymore
A number of Python builtins (range, tuple, set, frozenset, list) are now sped 
up using PEP 590 vectorcall
A number of standard library modules (audioop, ast, grp, _hashlib, pwd, 
_posixsubprocess, random, select, struct, termios, zlib) are now using the 
stable ABI defined by PEP 384.
(Hey, fellow core developer, if a feature you find important is missing from 
this list, let Łukasz know.)
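
For a quick taste of two of those features, here is a short sketch of PEP 584 
and PEP 616 in action (an illustration, not part of the official notes):

```
defaults = {"colour": "blue", "size": 10}
overrides = {"size": 20}
print(defaults | overrides)   # PEP 584 union: {'colour': 'blue', 'size': 20}
defaults |= overrides         # in-place union

print("python-3.9.0a6".removeprefix("python-"))  # PEP 616: '3.9.0a6'
print("release.tar.gz".removesuffix(".gz"))      # 'release.tar'
```
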
The next pre-release, the first beta release of Python 3.9, will be 3.9.0b1. It 
is currently scheduled for 2020-05-18.

Your friendly release team,
Ned Deily @nad
Steve Dower @steve.dower
Łukasz Langa @ambv
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/JJWIXYICQHCEFCJCCXVSWTP5O67UVCQC/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Request For Review: Add support for CAN_J1939 sockets (bpo-40291)

2020-04-28 Thread Guido van Rossum
This seems to be a reasonable feature.

On Tue, Apr 28, 2020 at 07:29 Karl Ding  wrote:

> Hi all,
>
> Could someone take a look at the following PR to add support for CAN_J1939
> to the socket module? I'd like to try landing this for 3.9. This
> enhancement would be useful for anyone working in automotive and/or dealing
> with the SAE J1939 CAN protocol.
>
> This feature is available on Linux 5.4+ (Ubuntu 20.04 LTS ships with a
> compatible kernel).
>
> PR Link: https://github.com/python/cpython/pull/19538
> BPO Link: https://bugs.python.org/issue40291
>
> You may find the following links useful if you want to find out more about
> how the kernel J1939 implementation is used.
>
> J1939 Kernel Docs:
> https://www.kernel.org/doc/html/latest/networking/j1939.html
> J1939 header:
> https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/can/j1939.h
>
> Thanks!
>
>
> --
> Karl Ding
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/ESCVNT24QHL26XV6TUF3JEH7OONIQV4W/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
-- 
--Guido (mobile)
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/VSJMVOGX7H3QZPQVDTLXKVKYHNK2J3RV/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Request For Review: Add support for CAN_J1939 sockets (bpo-40291)

2020-04-28 Thread Karl Ding
Hi all,

Could someone take a look at the following PR to add support for CAN_J1939
to the socket module? I'd like to try landing this for 3.9. This
enhancement would be useful for anyone working in automotive and/or dealing
with the SAE J1939 CAN protocol.

This feature is available on Linux 5.4+ (Ubuntu 20.04 LTS ships with a
compatible kernel).

PR Link: https://github.com/python/cpython/pull/19538
BPO Link: https://bugs.python.org/issue40291

You may find the following links useful if you want to find out more about
how the kernel J1939 implementation is used.

J1939 Kernel Docs:
https://www.kernel.org/doc/html/latest/networking/j1939.html
J1939 header:
https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/can/j1939.h
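
For context, a minimal sketch of how such a socket would be used, assuming the 
constants and the (interface, name, pgn, addr) address format proposed in the 
PR (none of this is final until the PR lands):

```
import socket

with socket.socket(socket.AF_CAN, socket.SOCK_DGRAM, socket.CAN_J1939) as s:
    # Bind to a (virtual) CAN interface with no NAME/PGN and source addr 0x20.
    s.bind(("vcan0", socket.J1939_NO_NAME, socket.J1939_NO_PGN, 0x20))
    data, addr = s.recvfrom(128)
```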

Thanks!

-- 
Karl Ding
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/ESCVNT24QHL26XV6TUF3JEH7OONIQV4W/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Adding a "call_once" decorator to functools

2020-04-28 Thread Victor Stinner
Hi,

A pattern that I used multiple times is to compute an object attribute
only once and cache the result into the object. Dummy example:
---
class X:
    def __init__(self, name):
        self.name = name
        self._cached_upper = None

    def _get(self):
        if self._cached_upper is None:
            print("compute once")
            self._cached_upper = self.name.upper()
        return self._cached_upper
    upper = property(_get)

obj = X("victor")
print(obj.upper)
print(obj.upper)   # use cached value
---

It would be interesting to be able to replace the obj.upper property with a 
plain attribute (to avoid the overhead of calling the _get() method on every 
access), but "obj.upper = value" raises an error since the property prevents 
setting the attribute.

I understood that the proposed @call_once would store the cached
value in the function's namespace.
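
Note that functools.cached_property (added in Python 3.8) already performs 
exactly this replace-the-property-with-an-attribute trick; a minimal sketch of 
the same example:

---
from functools import cached_property

class X:
    def __init__(self, name):
        self.name = name

    @cached_property
    def upper(self):
        print("compute once")
        return self.name.upper()

obj = X("victor")
print(obj.upper)   # computes once and stores the value in obj.__dict__
print(obj.upper)   # plain instance-attribute lookup from now on
---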

Victor


On Mon, Apr 27, 2020 at 23:44,  wrote:
>
> Hello,
> After a great discussion in python-ideas[1][2] it was suggested that I 
> cross-post this proposal to python-dev to gather more comments from those who 
> don't follow python-ideas.
>
> The proposal is to add a "call_once" decorator to the functools module that, 
> as the name suggests, calls a wrapped function once, caching the result and 
> returning it with subsequent invocations. The rationale behind this proposal 
> is that:
> 1. Developers are using "lru_cache" to achieve this right now, which is less 
> efficient than it could be
> 2. Special casing "lru_cache" to account for zero arity methods isn't trivial 
> and we shouldn't endorse lru_cache as a way of achieving "call_once" semantics
> 3. Implementing a thread-safe (or even non-thread safe) "call_once" method is 
> non-trivial
> 4. It complements the lru_cache and cached_property methods currently present 
> in functools.
>
> The specifics of the method would be:
> 1. The wrapped function is guaranteed to be called only once, even when it 
> is first invoked concurrently from multiple threads
> 2. Only functions with no arguments can be wrapped, otherwise an exception is 
> thrown
> 3. There is a C implementation to keep speed parity with lru_cache
>
> I've included a naive implementation below (that doesn't meet any of the 
> specifics listed above) to illustrate the general idea of the proposal:
>
> ```
> def call_once(func):
> sentinel = object()  # in case the wrapped method returns None
> obj = sentinel
> @functools.wraps(func)
> def inner():
> nonlocal obj, sentinel
> if obj is sentinel:
> obj = func()
> return obj
> return inner
> ```
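>
> A thread-safe variant would need something along the lines of double-checked 
> locking, roughly (a sketch only, and still not what a C implementation would 
> look like):
>
> ```
> import functools
> import threading
>
> def call_once(func):
>     sentinel = object()
>     obj = sentinel
>     lock = threading.Lock()
>     @functools.wraps(func)
>     def inner():
>         nonlocal obj
>         if obj is sentinel:          # fast path once the value exists
>             with lock:
>                 if obj is sentinel:  # re-check under the lock
>                     obj = func()
>         return obj
>     return inner
> ```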
>
> I'd welcome any feedback on this proposal, and if the response is favourable 
> I'd love to attempt to implement it.
>
> 1. 
> https://mail.python.org/archives/list/python-id...@python.org/thread/5OR3LJO7LOL6SC4OOGKFIVNNH4KADBPG/#5OR3LJO7LOL6SC4OOGKFIVNNH4KADBPG
> 2. 
> https://discuss.python.org/t/reduce-the-overhead-of-functools-lru-cache-for-functions-with-no-parameters/3956
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at 
> https://mail.python.org/archives/list/python-dev@python.org/message/5CFUCM4W3Z36U3GZ6Q3XBLDEVZLNFS63/
> Code of Conduct: http://python.org/psf/codeofconduct/



--
Night gathers, and now my watch begins. It shall not end until my death.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/2I6YNJIRSQD4VCQHPVX5WDHTBQJPTCPH/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Adding a "call_once" decorator to functools

2020-04-28 Thread Steve Dower

On 28Apr2020 1243, Petr Viktorin wrote:

On 2020-04-28 00:26, Steve Dower wrote:

On 27Apr2020 2311, Tom Forbes wrote:
Why not? It's a decorator, isn't it? Just make it check for number 
of arguments at decoration time and return a different object.


It’s not that it’s impossible, but I don’t think the current 
implementation makes it easy 


This is the line I'd change: 
https://github.com/python/cpython/blob/cecf049673da6a24435acd1a6a3b34472b323c97/Lib/functools.py#L763 



At this point, you could inspect the user_function object and choose a 
different wrapper than _lru_cache_wrapper if it takes zero arguments. 
Though you'd likely still end up with a lot of the code being replicated.


Making a stdlib function completely change behavior based on a function 
signature feels a bit too magic to me.
I know lots of libraries do this, but I always thought of it as a cool 
little hack, good for debugging and APIs that lean toward being simple 
to use rather than robust. The explicit `call_once` feels more like an API 
that needs to be supported for decades.


I've been trying to clarify whether call_once is intended to be the 
functional equivalent of lru_cache (without the stats-only mode). If 
that's not the behaviour, then I agree, magically switching to it is no 
good.


But if it's meant to be the same but just more efficient, then we 
already do that kind of thing all over the place (free lists, strings, 
empty tuple singleton, etc.). And I'd argue that it's our responsibility 
to select the best implementation automatically, as it saves libraries 
from having to pull the same tricks.
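
Roughly the kind of automatic selection I mean, as a sketch (the names here 
are illustrative, not the actual functools change):

```
import functools
import inspect

def smart_cache(user_function):
    # Hypothetical decorator: use a cheap closure when the function takes
    # no arguments, otherwise fall back to the normal lru_cache machinery.
    if not inspect.signature(user_function).parameters:
        sentinel = object()
        cached = sentinel
        @functools.wraps(user_function)
        def zero_arg_wrapper():
            nonlocal cached
            if cached is sentinel:
                cached = user_function()
            return cached
        return zero_arg_wrapper
    return functools.lru_cache(maxsize=None)(user_function)
```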


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/J6G33EDWEH6ZAFW4BRH2EBYG77DNX6OI/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Adding a "call_once" decorator to functools

2020-04-28 Thread Petr Viktorin

On 2020-04-28 00:26, Steve Dower wrote:

On 27Apr2020 2311, Tom Forbes wrote:
Why not? It's a decorator, isn't it? Just make it check for number of 
arguments at decoration time and return a different object.


It’s not that it’s impossible, but I don’t think the current 
implementation makes it easy 


This is the line I'd change: 
https://github.com/python/cpython/blob/cecf049673da6a24435acd1a6a3b34472b323c97/Lib/functools.py#L763 



At this point, you could inspect the user_function object and choose a 
different wrapper than _lru_cache_wrapper if it takes zero arguments. 
Though you'd likely still end up with a lot of the code being replicated.


Making a stdlib function completely change behavior based on a function 
signature feels a bit too magic to me.
I know lots of libraries do this, but I always thought of it as a cool 
little hack, good for debugging and APIs that lean toward being simple 
to use rather than robust. The explicit `call_once` feels more like an API 
that needs to be supported for decades.



You're probably right to go for the C implementation. If the Python 
implementation is correct, then best to leave the inefficiencies there 
and improve the already-fast version.


Looking at 
https://github.com/python/cpython/blob/master/Modules/_functoolsmodule.c 
it seems the fast path for no arguments could be slightly improved, but 
it doesn't look like it'd be much. (I'm deliberately not saying how I'd 
improve it in case you want to do it anyway as a learning exercise, and 
because I could be wrong :) )


Equally hard to say how much more efficient a new API would be, so 
unless it's written already and you have benchmarks, that's probably not 
the line of reasoning to use. An argument that people regularly get this 
wrong and can't easily get it right with what's already there is most 
compelling - see the recent removeprefix/removesuffix discussions if you 
haven't.


Cheers,
Steve

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/V3R7DZDPCO4WZPRMZXZAGNA5VXU7OKF5/
Code of Conduct: http://python.org/psf/codeofconduct/