[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-29 Thread Paul Moore
On Wed, 29 Apr 2020 at 02:26, Eric Snow  wrote:
> Subinterpreters run all Python code right now.  I'm guessing by
> "general python code" you are talking about the code folks are writing
> plus their dependencies.  In that case, it's only with extension
> modules that we run into a problem, and we still don't know how many
> of those have problems that will take a lot of work to fix.
> However, I *am* convinced that there is a non-trivial amount of work
> there and that it impacts large extension modules more than others.
> The question is, what can we do to mitigate the amount of work there?

One thing that isn't at all clear to me here is that when you say
"Subinterpreters run all Python code", do you *just* mean the core
language? Or the core language plus all builtins? Or the core
language, builtins and the standard library? Because I think that the
vast majority of users would expect a core/stdlib function like
subinterpreters to support the full core+stdlib language.

So my question would be, do all of the stdlib C extension modules
support subinterpreters[1]? If they don't, then I think it's very
reasonable to expect that to be fixed, in the spirit of "eating our
own dogfood" - if we aren't willing or able to make the stdlib support
subinterpreters, it's not exactly reasonable or fair to expect 3rd
party extensions to do so.

If, on the other hand, the stdlib *is* supported, then I think that
"all of Python and the stdlib, plus all 3rd party pure Python
packages" is a significant base of functionality, and an entirely
reasonable starting point for the feature. It certainly still excludes
big parts of the Python ecosystem (notably scientific / data science
users) but that seems fine to me - big extension users like those can
be expected to have additional limitations. It's not really that
different from the situation around C extension support in PyPy.

Paul

[1] Calling threading from a subinterpreter would be an interesting
test of that ;-)
___
Python-Dev mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/[email protected]/message/XORJE2ECDGIVIWMVME3RE4O77YGETQ6G/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Adding a "call_once" decorator to functools

2020-04-29 Thread Tom Forbes
Hey Raymond,
Thanks for your input here! A new method wouldn’t be worth adding purely for 
performance reasons then, but there is still an issue around semantics and 
locking.

Should we encourage/document `lru_cache` as the way to do `call_once`? If so, 
then I guess that’s suitable, but people have brought up that it might be hard 
to discover and that it doesn’t actually ensure the function is called once.

The reason I bring this up is that I’ve seen several ad-hoc `call_once` 
implementations recently, and creating one is surprisingly complex for someone 
who’s not that experienced with Python.

So I think there’s room to improve the discoverability of lru_cache as an 
“almost” `call_once` alternative, or room for a dedicated method that might 
re-use bits of the `lru_cache` implementation.

Tom

> On 28 Apr 2020, at 20:51, Raymond Hettinger  
> wrote:
> 
> 
>> [email protected] wrote:
>> 
>> I would like to suggest adding a simple “once” method to functools. As the 
>> name suggests, this would be a decorator that would call the decorated 
>> function, cache the result and return it with subsequent calls.
> 
> It seems like you would get just about everything you want with one line:
> 
>call_once = lru_cache(maxsize=None)
> 
> which would be used like this:
> 
>   @call_once
>   def welcome():
>       return len('hello')
> 
>> Using lru_cache like this works but it’s not as efficient as it could be - 
>> in every case you’re adding lru_cache overhead despite not requiring it.
> 
> 
> You're likely imagining more overhead than there actually is.  Used as shown 
> above, the lru_cache() is astonishingly small and efficient.  Access time is 
> slightly cheaper than writing d[()]  where d={(): some_constant}. The 
> infinite_lru_cache_wrapper() just makes a single dict lookup and returns the 
> value.¹ The lru_cache_make_key() function just increments the refcount of 
> the empty args tuple and returns it.²  And because it is a C object, calling 
> it will be faster than a Python function that just returns a constant, 
> "lambda: some_constant".  This is very, very fast.
> 
> 
> Raymond
> 
> 
> ¹ 
> https://github.com/python/cpython/blob/master/Modules/_functoolsmodule.c#L870
> ² 
> https://github.com/python/cpython/blob/master/Modules/_functoolsmodule.c#L809
> 
> 
> 
> 
> 
> 
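Raymond's timing claim is easy to spot-check. The sketch below is a rough micro-benchmark (absolute numbers vary by machine and build; only the rough ratio is interesting):

```python
# Rough micro-benchmark of the zero-argument lru_cache() overhead
# Raymond describes: the cached call should be in the same ballpark
# as a trivial Python function call.
from functools import lru_cache
import timeit

@lru_cache(maxsize=None)
def cached_constant():
    return 42

def plain_constant():
    return 42

t_cached = timeit.timeit(cached_constant, number=100_000)
t_plain = timeit.timeit(plain_constant, number=100_000)
print(f"cached: {t_cached:.4f}s  plain: {t_plain:.4f}s")
```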
>> 
>> Hello,
>> After a great discussion in python-ideas[1][2] it was suggested that I 
>> cross-post this proposal to python-dev to gather more comments from those 
>> who don't follow python-ideas.
>> 
>> The proposal is to add a "call_once" decorator to the functools module that, 
>> as the name suggests, calls a wrapped function once, caching the result and 
>> returning it with subsequent invocations. The rationale behind this proposal 
>> is that:
>> 1. Developers are using "lru_cache" to achieve this right now, which is less 
>> efficient than it could be
>> 2. Special casing "lru_cache" to account for zero arity methods isn't 
>> trivial and we shouldn't endorse lru_cache as a way of achieving "call_once" 
>> semantics 
>> 3. Implementing a thread-safe (or even non-thread safe) "call_once" method 
>> is non-trivial
>> 4. It complements the lru_cache and cached_property methods currently 
>> present in functools.
>> 
>> The specifics of the method would be:
>> 1. The wrapped method is guaranteed to only be called once when called for 
>> the first time by concurrent threads
>> 2. Only functions with no arguments can be wrapped, otherwise an exception 
>> is thrown
>> 3. There is a C implementation to keep speed parity with lru_cache
>> 
>> I've included a naive implementation below (that doesn't meet any of the 
>> specifics listed above) to illustrate the general idea of the proposal:
>> 
>> ```
>> import functools
>>
>> def call_once(func):
>>     sentinel = object()  # in case the wrapped function returns None
>>     obj = sentinel
>>     @functools.wraps(func)
>>     def inner():
>>         nonlocal obj
>>         if obj is sentinel:
>>             obj = func()
>>         return obj
>>     return inner
>> ```
>> 
>> I'd welcome any feedback on this proposal, and if the response is favourable 
>> I'd love to attempt to implement it.
>> 
>> 1. 
>> https://mail.python.org/archives/list/[email protected]/thread/5OR3LJO7LOL6SC4OOGKFIVNNH4KADBPG/#5OR3LJO7LOL6SC4OOGKFIVNNH4KADBPG
>> 2. 
>> https://discuss.python.org/t/reduce-the-overhead-of-functools-lru-cache-for-functions-with-no-parameters/3956
> 
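For reference, the "surprisingly complex" thread-safe implementation Tom mentions can be sketched with double-checked locking. This is an illustrative sketch, not a proposed stdlib implementation:

```python
import functools
import threading

def call_once(func):
    # Illustrative sketch only: double-checked locking guarantees the
    # wrapped zero-argument function runs at most once, even when many
    # threads make their first call concurrently.
    sentinel = object()          # distinguishes "not yet called" from None
    obj = sentinel
    lock = threading.Lock()

    @functools.wraps(func)
    def inner():
        nonlocal obj
        if obj is sentinel:               # fast path: no lock once populated
            with lock:
                if obj is sentinel:       # re-check: another thread may have won
                    obj = func()
        return obj

    return inner
```

The unlocked fast-path read is safe under CPython's GIL; the lock is only contended around the very first call.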

[Python-Dev] Re: Adding a "call_once" decorator to functools

2020-04-29 Thread Eric V. Smith

On 4/29/2020 3:55 AM, Tom Forbes wrote:

Hey Raymond,
Thanks for your input here! A new method wouldn’t be worth adding purely for 
performance reasons then, but there is still an issue around semantics and 
locking.


One thing I don't understand about the proposed @call_once (or whatever 
it's called): why is locking a concern here any more than it's a concern 
for @lru_cache? Is there something special about it? Or, if locking is a 
requirement for @call_once (maybe optionally), then wouldn't adding the 
same support to @lru_cache make sense?


Eric
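Eric's question has a concrete answer for the current lru_cache: it does not hold a lock around the user function, so two threads that both miss the cache can both execute the body. That is fine for a cache, but it is not "call exactly once" semantics. A small demonstration (deterministic, because the barrier holds the first caller inside the body until the second one also misses):

```python
import threading
from functools import lru_cache

calls = []                     # records each execution of the body
gate = threading.Barrier(2)    # holds the first caller until a second arrives

@lru_cache(maxsize=None)
def expensive():
    gate.wait()                # both threads are now inside the body
    calls.append(1)
    return 42

threads = [threading.Thread(target=expensive) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(calls))  # 2 -- the body ran twice despite the cache
```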


[Python-Dev] Re: Comments on PEP 554 (Multiple Interpreters in the Stdlib)

2020-04-29 Thread Mark Shannon

Hi,

On 29/04/2020 4:02 am, Eric Snow wrote:

On Tue, Apr 21, 2020 at 10:42 AM Mark Shannon  wrote:

I'm generally in favour of PEP 554, but I don't think it is ready to be
accepted in its current form.


Yay(ish)! :)


My main objection is that without per-subinterpreter GILs (SILs?) PEP 554
provides no value over threading or multi-processing.
Multi-processing provides true parallelism and threads provide shared
memory concurrency.


I disagree. :)  I believe there are merits to the kind of programming
one can do via subinterpreter + channels (i.e. threads with opt-in
sharing).  I would also like to get broader community exposure to the
subinterpreter functionality sooner rather than later.  Getting the
Python API out there now will help folks get ready sooner for the
(later?) switch to per-interpreter GIL.  As Antoine put it, it allows
folks to start experimenting.  I think there is enough value in all
that to warrant landing PEP 554 in 3.9 even if per-interpreter GIL
only happens in 3.10.
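For context, the kind of code the "threads with opt-in sharing" model envisions looks roughly like this. The `interpreters` module does not exist in any released Python; the names follow the draft PEP and may change:

```python
# Sketch of the draft PEP 554 API -- hypothetical, names may change.
import interpreters

interp = interpreters.create()
interp.run('print("spam")')       # runs in the subinterpreter, same process

recv, send = interpreters.create_channel()
send.send_nowait(b'data')         # only "shareable" objects may cross
# code running in another interpreter that holds `recv` would call
# recv.recv() to get b'data'
```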


You can already do CSP with multiprocessing, plus you get true parallelism.
The question the PEP needs to answer is "what do sub-interpreters offer 
that other forms of concurrency don't offer".


https://gist.github.com/markshannon/79cace3656b40e21b7021504daee950c

This table summarizes the core features of various approaches to 
concurrency and compares them to "ideal" CSP. There are a lot of question 
marks in the PEP 554 column. The PEP needs to address those.


As it stands, multiprocessing is a better fit for CSP than PEP 554.

IMO, sub-interpreters only become a useful option for concurrency if 
they allow true parallelism and are not much more expensive than threads.





If per-subinterpreter GILs are possible then, and only then,
sub-interpreters will provide true parallelism and (limited) shared
memory concurrency.

The problem is that we don't know whether we can implement
per-subinterpreter GILs without too large a negative performance impact.
I think we can, but we can't say so for certain.


I think we can as well, but I'd like to hear more about what obstacles
you think we might run into.


As an example, accessing common objects like `None` and `int` will need 
extra indirection.

That *might* be an acceptable cost, or it might not. We don't know.

I can't tell you about the unknown unknowns :)




So, IMO, we should not accept PEP 554 until we know for sure that
per-subinterpreter GILs can be implemented efficiently.



Detailed critique
-

I don't see how `list_all()` can be both safe and accurate. The Java
equivalent makes no guarantees of accuracy.
Attempting to lock the list is likely to lead to deadlock and not
locking it will lead to races; potentially dangerous ones.
I think it would be best to drop this.

`list_all_channels()`. See `list_all()` above.

[out of order] `Channel.interpreters` see `list_all()` and 
`list_all_channels()` above.


I'm not sure I understand your objection.  If a user calls the
function then they get a list.  If that list becomes outdated in the
next minute or the next millisecond, it does not impact the utility of
having that list.  For example, without that list how would one make
sure all other interpreters have been destroyed?


Do you not see the contradiction?
You say that it's OK if the list is outdated immediately, and then ask 
how one would make sure all other interpreters have been destroyed.


With true parallelism, the list could be out of date before it is even 
completed.
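The contradiction can be shown in a few lines: a `list_all()`-style result is a snapshot, so it can be stale before the caller even inspects it (illustrative sketch; the registry stands in for the live interpreter set):

```python
# Illustrative sketch of why a list_all()-style snapshot cannot be
# authoritative: it is a copy, and the real registry can change
# immediately after (or even while) it is built.
registry = {"interp-1", "interp-2"}   # stand-in for the live interpreter set

snapshot = list(registry)             # what list_all() hands the caller
registry.discard("interp-2")          # meanwhile, an interpreter is destroyed

assert "interp-2" in snapshot         # the caller still sees it...
assert "interp-2" not in registry     # ...but it is already gone
```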





`.destroy()` is either misleading or unsafe.
What does this do?

  >>> is.destroy()
  >>> is.run()

If `run()` raises an exception then the interpreter must exist. Rename
to `close()` perhaps?


I see what you mean.  "Interpreter" objects are wrappers rather than
the actual interpreters, but that might not stop folks from thinking
otherwise.  I agree that "close" may communicate that nature better.
I guess so would "finalize", which is what the C-API calls it.  Then
again, you can't tell an object to "destroy" itself, can you?  It just
isn't clear what you are destroying (nor why we're so destructive).

So "close" aligns with other similarly purposed methods out there,
while "finalize" aligns with the existing C-API and also elevates the
complex nature of what happens.  If we change the name from "destroy"
then I'd lean toward "finalize".


I don't see why C-API naming conventions would take precedence over 
Python naming conventions for naming a Python method.




FWIW, in your example above, the is.run() call would raise a
RuntimeError saying that it couldn't find an interpreter with "that"
ID.


How does `is_shareable()` work? Are you proposing some mechanism to
transfer an object from one sub-interpreter to another? How would that
work?


The PEP purposefully does not prescribe how "is_shareable()" works.
That depends on the implementation for channels, for which there could
be several, and which will likely differ based on the Python
implementation.  Likewise the

[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-29 Thread Julien Salort

Le 29/04/2020 à 03:18, Eric Snow a écrit :


My (honest) question is, how many folks using subinterpreters are
going to want to use numpy (or module X) enough to get mad about it
before the extension supports subinterpreters?  What will user
expectations be when it comes to subinterpreters?

We will make the docs as clear as we can, but there are plenty of
users out there that will not pay enough attention to know that most
extension modules will not support subinterpreters at first.  Is there
anything we can do to mitigate this impact?  How much would it help if
the ImportError for incompatible modules gave a clear (though
lengthier) explanation of the situation?


For what it's worth, I can give you the feedback of a simple user. It 
happens that I tried some time ago to use Numpy in a Flask project which 
was deployed with mod_wsgi on an Apache server. Basically, the page was 
dynamically generating some plots. And I got weird unreliable behaviour, 
which took me some time to debug.


I had to look it up on the internet to figure out the problem was that 
Numpy cannot reliably work with mod_wsgi. I originally thought that I 
had made a mistake somewhere in my code instead. So, I rewrote the code 
to remove the dependency on Numpy. I had used Numpy in the first place, 
because, as a physicist, this is what I am used to, but it was clearly 
very possible to rewrite this particular code without Numpy.


If your proposal leads to an intelligible actual error, and a clear 
warning in the documentation, instead of a silent crash, this sounds 
like progress, even for those packages which won't work on 
subinterpreters anytime soon...



Cheers,


Julien


[Python-Dev] Moving tzdata into the python organization

2020-04-29 Thread Paul Ganssle
Hi all,

PEP 615 specifies that we will add a first-party tzdata package. I
created this package at pganssle/tzdata, but I believe the final home for
it should be python/tzdata. Are there any objections to me moving it
into the python org as is?

Thanks!
Paul





[Python-Dev] Re: [RELEASE] Python 3.9.0a6 is now available for testing

2020-04-29 Thread robin
While testing 3.9a6 in the reportlab package I see this difference from 3.8.2; 
I built from source using the standard configure make dance. Is this a real 
change?

robin@minikat:~/devel/reportlab/REPOS/reportlab/tests
$ python 
Python 3.8.2 (default, Apr  8 2020, 14:31:25) 
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> norm=lambda m: m+(m and(m[-1]!='\n'and'\n'or'')or'\n')
>>> 
robin@minikat:~/devel/reportlab/REPOS/reportlab/tests
$ python39
Python 3.9.0a6 (default, Apr 29 2020, 07:46:29) 
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> norm=lambda m: m+(m and(m[-1]!='\n'and'\n'or'')or'\n')
  File "", line 1
norm=lambda m: m+(m and(m[-1]!='\n'and'\n'or'')or'\n')
 ^
SyntaxError: invalid string prefix
>>> 
robin@minikat:~/devel/reportlab/REPOS/reportlab/tests
$ python39 -X oldparser
Python 3.9.0a6 (default, Apr 29 2020, 07:46:29) 
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> norm=lambda m: m+(m and(m[-1]!='\n'and'\n'or'')or'\n')
  File "", line 1
norm=lambda m: m+(m and(m[-1]!='\n'and'\n'or'')or'\n')
   ^
SyntaxError: invalid string prefix
>>> 
robin@minikat:~/devel/reportlab/REPOS/reportlab/tests


[Python-Dev] Re: [RELEASE] Python 3.9.0a6 is now available for testing

2020-04-29 Thread Eric V. Smith

Hi, [email protected].

That looks like a real error. Thanks for the detailed report. Can you 
open a ticket on bugs.python.org?


Eric

On 4/29/2020 10:34 AM, [email protected] wrote:

While testing 3.9a6 in the reportlab package I see this difference from 3.8.2; 
I built from source using the standard configure make dance. Is this a real 
change?



[Python-Dev] Re: [RELEASE] Python 3.9.0a6 is now available for testing

2020-04-29 Thread robin
Sorry for noise, but obviously most of my pasted text went wrong; not sure how 
to use this modern mailman. I see a syntax error in 3.9a6 with the code 
norm=lambda m: m+(m and(m[-1]!='\n'and'\n'or'')or'\n')
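The trigger is a string literal butting directly against a keyword (`'\n'and'\n'`), which 3.9.0a6's parser reports as an invalid string prefix. With the tokens spaced out, the same expression parses everywhere. Incidentally, this pre-2.5 and/or conditional idiom misfires when the inner branch yields the falsy `''`:

```python
# Same expression with conventional spacing -- parses on both the old
# and the new (PEG) parser:
norm = lambda m: m + (m and (m[-1] != '\n' and '\n' or '') or '\n')

assert norm('abc') == 'abc\n'      # newline appended
assert norm('') == '\n'
# The and/or pitfall: the inner expression yields '', which is falsy,
# so the "or '\n'" branch fires even when a newline is already present:
assert norm('abc\n') == 'abc\n\n'
```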


[Python-Dev] Re: [RELEASE] Python 3.9.0a6 is now available for testing

2020-04-29 Thread Lysandros Nikolaou
This is a known issue and there is already a discussion on bpo-40246 (
https://bugs.python.org/issue40246) on how to resolve it.

On Wed, Apr 29, 2020 at 5:54 PM Eric V. Smith  wrote:

> Hi, [email protected].
>
> That looks like a real error. Thanks for the detailed report. Can you
> open a ticket on bugs.python.org?
>
> Eric
>
> On 4/29/2020 10:34 AM, [email protected] wrote:
> > While testing 3.9a6 in the reportlab package I see this difference from
> 3.8.2; I built from source using the standard configure make dance. Is this
> a real change?
> >


[Python-Dev] Re: [RELEASE] Python 3.9.0a6 is now available for testing

2020-04-29 Thread Petr Viktorin

On 2020-04-29 16:34, [email protected] wrote:

While testing 3.9a6 in the reportlab package I see this difference from 3.8.2; 
I built from source using the standard configure make dance. Is this a real 
change?


Hi,
This is a known issue, currently discussed in 
https://bugs.python.org/issue40246


Thanks for reporting it, though!




[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-29 Thread Ronald Oussoren via Python-Dev


> On 29 Apr 2020, at 03:50, Eric Snow  wrote:
> 
> On Wed, Apr 22, 2020 at 2:43 AM Ronald Oussoren  
> wrote:
>> My mail left out some important information, sorry about that.
> 
> No worries. :)
> 
>> PyObjC is a two-way bridge between Python and Objective-C. One half of this 
>> is that is bridging Objective-C classes (and instances) to Python. This is 
>> fairly straightforward, although the proxy objects are not static and can 
>> have methods defined in Python (helper methods that make the Objective-C 
>> classes nicer to use from Python, for example to define methods that make it 
>> possible to use an NSDictionary as if it were a regular Python dict).
> 
> Cool.  (also fairly straightforward!)

Well… Except that the proxy classes are created dynamically and the list of 
methods is updated dynamically as well both for performance reasons and because 
ObjC classes can be changed at runtime (similar to how you can add methods to 
Python classes).  But in the end this part is fairly straightforward and 
comparable to something like gobject introspection in the glib/gtk bindings.

And every Cocoa class is proxied with a regular class and a metaclass (with 
parallel class hierarchies). That’s needed to mirror Objective-C behaviour, 
where class- and instance methods have separate namespaces and some classes 
have class and instance methods with the same name.  

> 
>> The other half is that it is possible to implement Objective-C classes in 
>> Python:
>> 
>>   class MyClass (Cocoa.NSObject):
>>   def anAction_(self, sender): …
>> 
>> This defines a Python classes named “MyClass”, but also an Objective-C class 
>> of the same name that forwards Objective-C calls to Python.
> 
> Even cooler! :)
> 
>> The implementation for this uses PyGILState_Ensure, which AFAIK is not yet 
>> useable with sub-interpreters.
> 
> That is correct.  It is one of the few major subinterpreter
> bugs/"bugs" remaining to be addressed in the CPython code.  IIRC,
> there were several proposed solutions (between 2 BPO issues) that
> would fix it but we got distracted before the matter was settled.

This is not a hard technical problem, although designing a future proof API 
might be harder.

> 
>> PyObjC also has Objective-C proxy classes for generic Python objects, making 
>> it possible to pass a normal Python dictionary to an Objective-C API that 
>> expects an NSDictionary instance.
> 
> Also super cool.  How similar is this to Jython and IronPython?

I don’t know, I guess this is similar to how those projects proxy between their 
respective host languages and Python. 

> 
>> Things get interesting when combining the two with sub-interpreters: With 
>> the current implementation the Objective-C world would be a channel for 
>> passing “live” Python objects between sub-interpreters.
> 
> +1
> 
>> The translation tables for looking up existing proxies (mapping from Python 
>> to Objective-C and vice versa) are currently singletons.
>> 
>> This is probably fixable with another level of administration, by keeping 
>> track of the sub-interpreter that owns a Python object I could ensure that 
>> Python objects owned by a different sub-interpreter are proxied like any 
>> other Objective-C object which would close this loophole.  That would 
>> require significant changes to a code base that’s already fairly complex, 
>> but should be fairly straightforward.
> 
> Do you think there are any additions we could make to the C-API (more
> than have been done recently, e.g. PEP 573) that would make this
> easier.  From what I understand, this pattern of a cache/table of
> global Python objects is a relatively common one.  So anything we can
> do to help transition these to per-interpreter would be broadly
> beneficial.  Ideally it would be done in the least intrusive way
> possible, reducing churn and touch points.  (e.g. a macro to convert
> existing tables, etc. + an init func to call during module init.)

I can probably fix this entirely on my end:
- Use PEP 573 to move the translation tables to per-interpreter storage
- Store the sub-interpreter in the ObjC proxy object for Python objects, both 
to call back into the right subinterpreter in upcalls and to tweak the way 
“foreign” objects are proxied into a different subinterpreter. 

But once again, that’s without trying to actually do the work. As usual the 
devil’s in the details.
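Ronald's "another level of administration" can be sketched in a few lines: key the proxy translation tables by an interpreter identifier instead of keeping process-wide singletons. All names here are invented for illustration; PyObjC's real bookkeeping lives in C:

```python
# Hypothetical sketch: per-interpreter proxy translation tables, so
# each (sub)interpreter only ever sees proxies it owns.  Names are
# invented for illustration.
import threading

class PerInterpreterTables:
    def __init__(self):
        self._tables = {}           # interp_id -> {objc_handle: python_proxy}
        self._lock = threading.Lock()

    def table_for(self, interp_id):
        with self._lock:
            return self._tables.setdefault(interp_id, {})

    def lookup(self, interp_id, handle):
        return self.table_for(interp_id).get(handle)

    def register(self, interp_id, handle, proxy):
        self.table_for(interp_id)[handle] = proxy

tables = PerInterpreterTables()
tables.register(interp_id=1, handle=0xdead, proxy="proxy-in-interp-1")
assert tables.lookup(2, 0xdead) is None   # interpreter 2 has its own table
```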

> 
> Also, FWIW, I've been thinking about possible approaches where the
> first/main interpreter uses the existing static types, etc. and
> further subinterpreters use a heap type (etc.) derived mostly
> automatically from the static one.  It's been on my mind because this
> is one of the last major hurdles to clear in the CPython code before
> we can make the GIL per-interpreter.
> 
>>> What additional API would be needed?
>> 
>> See above, the main problem is PyGILState_Ensure.  I haven’t spent a lot of 
>> time thinking about this though, I might find other issues when I try to 
>> support sub-interpreters.

[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-29 Thread Sebastian Berg
On Tue, 2020-04-28 at 19:20 -0600, Eric Snow wrote:
> On Tue, Apr 21, 2020 at 11:17 AM Sebastian Berg
>  wrote:
> > Maybe one of the frustrating points about this criticism is that it
> > does not belong in this PEP. And that is actually true! I
> > wholeheartedly agree that it doesn't really belong in this PEP
> > itself.
> > 
> > *But* the existence of a document detailing the "state and vision
> > for
> > subinterpreters" that includes these points is probably a
> > prerequisite
> > for this PEP. And this document must be linked prominently from the
> > PEP.
> > 
> > So the suggestion should maybe not be to discuss it in the PEP, but
> > to
> > to write it either in the documentation on subinterpreters or as an
> > informational PEP. Maybe such document already exists, but then it
> > is
> > not linked prominently enough probably.
> 
> That is an excellent point.  It would definitely help to have more
> clarity about the feature (subinterpreters).  I'll look into what
makes the most sense.  I'm sure Victor has already effectively
> written something like this. :)
> 

I will note one more time that I want to back up almost all that
Nathaniel said (I simply cannot judge the technical side though).

While I still think it is probably not part of PEP 554 as such, I guess
it needs a full blown PEP on its own. Saying that Python should
implement subinterpreters. (I am saying "implement" because I believe
you must consider subinterpreters basically a non-feature at this time.
It has neither users nor reasonable ecosystem support.)

In many ways I assume that a lot of the ground work for subinterpreters
was useful on its own. But please do not underestimate how much effort
it will take to make subinterpreters a first-class citizen in the
language!

Take PyPy for example: it took years to get PyPy support into NumPy
(and the PyPy people did pretty much _all_ the work).
And PyPy provides a compatibility layer that makes that support orders
of magnitude simpler than supporting subinterpreters will be.

And yet, I am sure there are many C-extensions out there that will
fail on PyPy.  So unless the potential subinterpreter userbase is
orders of magnitude larger than PyPy's, the situation will be much
worse.  With the added frustration that PyPy users probably expect
incompatibilities, while Python users may get angry if they think
subinterpreters are a language feature.


There have been points made about e.g. just erroring on import for
modules which do not choose to support subinterpreters. And maybe the
sum of saying:

* We warn that most C-extensions won't work
   -> If you error, at least it won't crash silently (some mitigation)

* Nobody must expect any C-extension to work until subinterpreters
  have proven useful *and* a large userbase!

* In terms of time, what are we talking about here?
  - Maybe 3 years until proven useful with a potentially large userbase?
  - Some uncertain amount longer until the userbase actually grows
  - Maybe 5 years after that until fairly widespread support in some
central libraries? (I am sure Python 2 to 3 took that long)

* Prototyping in the first few years (such as in an external package,
  or even a fork!) is not really a good option, because... ?
  Or alternatively the warnings will be so prominent that prototyping
  within CPython is acceptable. Maybe you have to use:

 python 
--subinterpreters-i-know-this-is-only-a-potential-feature-which-may-be-removed-in-future-python-versions
 myscript.py

  This is not in the same position as most "experimental" APIs, which
  are almost settled but you want to be careful. This one has a real
  chance of getting ripped out entirely!?

is good enough, maybe it is not. Once it is written down I am confident
the Python devs and steering council will make the right call.
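The "error on import" idea above could look roughly like the sketch below. Note this is purely illustrative: `is_main_interpreter()` is a hypothetical stand-in for a check that a real extension would perform in C during module initialization, and the error message is invented.

```python
# is_main_interpreter() is a hypothetical stand-in: a real extension
# would do the equivalent check in C during module initialization.
def make_import_guard(is_main_interpreter):
    def guard(module_name):
        if not is_main_interpreter():
            raise ImportError(
                f"{module_name} does not yet support subinterpreters; "
                "import it from the main interpreter instead")
    return guard

guard = make_import_guard(lambda: False)   # simulate a subinterpreter
try:
    guard("numpy")
except ImportError as exc:
    print(exc)   # a clear, intelligible error instead of a silent crash
```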

Again, I do not want to hinder the effort. It takes courage and a good
champion to go down such a long and windy road.
But Nathaniel is right that putting in the effort puts you into the
trap of thinking that now that we are 90% there from a Python
perspective, we should go 100%.
100% is nice, but you may have to reach 1000++% (i.e. C-extension
module/ecosystem support) to actually have a fully functional feature.

You should get the chance to prove them useful; there seems to be
enough positivity around the idea.  But the question is within what framework
that can reasonably happen.

Simply pushing in PEP 554 with a small warning in the documentation is
not the right framework.
I hope you can find the right framework to push this on. But
unfortunately that is the difficult job of the feature's champion. And
Nathaniel, etc. are actually trying to help you with it (but in the end
they are not champions for it, so you cannot expect too much). E.g. by
asking if it cannot be developed outside of CPython and pointing out
how other similar projects approached the same impassable mountain of
work.

Believe me, I have been there and it's tough to write these documents
and then get feedback which you are not immediately sure what to make
of.

[Python-Dev] Re: Moving tzdata into the python organization

2020-04-29 Thread Barry Warsaw
No objections here.

-Barry

> On Apr 29, 2020, at 06:05, Paul Ganssle  wrote:
> 
> Signed PGP part
> Hi all,
> 
> PEP 615 specifies that we will add a first party tzdata package. I created 
> this package at pganssle/tzdata, but I believe the final home for it should 
> be python/tzdata. Are there any objections to me moving it into the python 
> org as is?
> 
> Thanks!
> Paul
> 
> 
> 



___
Python-Dev mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/[email protected]/message/NSPWRCK6BZLQ25XZKWCCWGLB4ZKMQQRS/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Adding a "call_once" decorator to functools

2020-04-29 Thread Raymond Hettinger

> On Apr 29, 2020, at 12:55 AM, Tom Forbes  wrote:
> 
> Hey Raymond,
> Thanks for your input here! A new method wouldn’t be worth adding purely for 
> performance reasons then, but there is still an issue around semantics and 
> locking.

Right.


> it doesn’t actually ensure the function is called once.

Let's be precise about this.  The lru_cache() logic is:

1) if the function has already been called and result is known, return the 
prior result  :-)
2) call the underlying function
3) add the question/answer pair to the cache dict. 

You are correct that a lru_cache() wrapped function can be called more than 
once if before step three happens, the wrapped function is called again, either 
by another thread or by a reentrant call.  This is by design and means that 
lru_cache() can be wrapped around almost anything, reentrant or not.  Also 
calls to lru_cache() don't block across the function call, nor do they fail 
because another call is in progress.  This makes lru_cache() easy to use and 
reliable, but it does allow the possibility that the function is called more 
than once.

The call_once() decorator would need different logic:

1) if the function has already been called and result is known, return the 
prior result  :-)
2) if function has already been called, but the result is not yet known, either 
block or fail  :-(
3) call the function, this cannot be reentrant :-(
4) record the result for future calls.

The good news is that call_once() can guarantee the function will not be called 
more than once.  The bad news is that task switches during step three will 
either get blocked for the duration of the function call or they will need to 
raise an exception.  Likewise, it would be a mistake to use call_once() when 
reentrancy is possible.
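The call_once() logic described above can be sketched in a few lines with a lock and a double-check. This is only an illustration of the semantics under discussion, not a proposed implementation; note how a reentrant call from inside the wrapped function would deadlock on the non-reentrant Lock, matching point 3.

```python
import functools
import threading

def call_once(func):
    # Sketch only, not a stdlib API.  The wrapped zero-argument function
    # runs at most once; concurrent callers block on the lock until the
    # first call finishes, then everyone sees the same result.  A
    # reentrant call from inside func() would deadlock on the
    # non-reentrant Lock.
    lock = threading.Lock()
    done = False
    result = None

    @functools.wraps(func)
    def wrapper():
        nonlocal done, result
        if done:                 # fast path once the result is known
            return result
        with lock:               # other threads block here mid-call
            if not done:         # re-check under the lock
                result = func()
                done = True
        return result

    return wrapper

calls = []

@call_once
def init():
    calls.append(1)
    return "resource"

print(init(), init(), len(calls))   # the function body ran exactly once
```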

> The reason I bring this up is that I’ve seen several ad-hoc `call_once` 
> implementations recently, and creating one is surprisingly complex for 
> someone who’s not that experienced with Python.


Would it be fair to describe call_once() like this?

call_once() is just like lru_cache() but:

1) guarantees that a function never gets called more than once
2) will block or fail if a thread-switch happens during a call
3) only works for functions that take zero arguments
4) only works for functions that can never be reentrant
5) cannot make the one call guarantee across multiple processes
6) does not have instrumentation for number of hits
7) does not have a clearing or reset mechanism


Raymond




[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-29 Thread Gregory P. Smith
On Wed, Apr 29, 2020 at 5:40 AM Julien Salort  wrote:

> Le 29/04/2020 à 03:18, Eric Snow a écrit :
>
> > My (honest) question is, how many folks using subinterpreters are
> > going to want to use numpy (or module X) enough to get mad about it
> > before the extension supports subinterpreters?  What will user
> > expectations be when it comes to subinterpreters?
> >
> > We will make the docs as clear as we can, but there are plenty of
> > users out there that will not pay enough attention to know that most
> > extension modules will not support subinterpreters at first.  Is there
> > anything we can do to mitigate this impact?  How much would it help if
> > the ImportError for incompatible modules give a clear (though
> > lengthier) explanation of the situation?
>
> For what it's worth, I can give you the feedback of a simple user. It
> happens that I tried some time ago to use Numpy in a Flask project which
> was deployed with mod_wsgi on an Apache server. Basically, the page was
> dynamically generating some plots. And I got weird unreliable behaviour,
> which took me some time to debug.
>
> I had to look it up on the internet to figure out the problem was that
> Numpy cannot reliably work with mod_wsgi. I originally thought that I
> had made a mistake somewhere in my code instead. So, I rewrote the code
> to remove the dependency on Numpy. I had used Numpy in the first place,
> because, as a physicist, this is what I am used to, but it was clearly
> very possible to rewrite this particular code without Numpy.
>
> If your proposal leads to an intelligible actual error, and a clear
> warning in the documentation, instead of a silent crash, this sounds
> like progress, even for those packages which won't work on
> subinterpreters anytime soon...
>
>
+10 to this, the mysterious failures of today are way worse than a clear
"this module doesn't support this execution environment" ImportError.

I'm not worried at all about someone going and filing an issue in a project
saying "please support this execution environment".  Someone may eventually
come along and decide they want to be the one to make that happen and do
the work because they have a reason.

-gps


[Python-Dev] Re: Comments on PEP 554 (Multiple Interpreters in the Stdlib)

2020-04-29 Thread Eric Snow
Thanks, Mark.  Responses are in-line below.

-eric

On Wed, Apr 29, 2020 at 6:08 AM Mark Shannon  wrote:
> You can already do CSP with multiprocessing, plus you get true parallelism.
> The question the PEP needs to answer is "what do sub-interpreters offer
> that other forms of concurrency don't offer".
>
> https://gist.github.com/markshannon/79cace3656b40e21b7021504daee950c
>
> This table summarizes the core features of various approaches to
> concurrency and compares them to "ideal" CSP. There are a lot of question
> marks in the PEP 554 column. The PEP needs to address those.
>
> As it stands, multiprocessing is a better fit for CSP than PEP 554.
>
> IMO, sub-interpreters only become a useful option for concurrency if
> they allow true parallelism and are not much more expensive than threads.

While I have a different opinion here, especially if we consider
trajectory, I really want to keep discussion focused on the proposed
API in the PEP.  Honestly I'm considering taking up the recommendation
to add a new PEP about making subinterpreters official.  I never meant
for that to be more than a minor point for PEP 554.

> > I think we can as well, but I'd like to hear more about what obstacles
> > you think we might run into.
>
> As an example, accessing common objects like `None` and `int` will need
> extra indirection.
> That *might* be an acceptable cost, or it might not. We don't know.

ack

> I can't tell you about the unknown unknowns :)

:)

> > I'm not sure I understand your objection.  If a user calls the
> > function then they get a list.  If that list becomes outdated in the
> > next minute or the next millisecond, it does not impact the utility of
> > having that list.  For example, without that list how would one make
> > sure all other interpreters have been destroyed?
>
> Do you not see the contradiction?
> You say that it's OK if the list is outdated immediately, and then ask
> how one would make sure all other interpreters have been destroyed.
>
> With true parallelism, the list could be out of date before it is even
> completed.

I don't see why that would be a problem in practice.  Folks already
have to deal with that situation in many other venues in Python (e.g.
threading.enumerate()).  Not having the list at all would be more
painful.
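The threading.enumerate() comparison can be demonstrated directly: a snapshot of live threads can be outdated the moment it is taken, yet it is still useful for exactly this kind of bookkeeping.

```python
import threading
import time

def worker():
    time.sleep(0.2)   # keep the thread alive while the snapshot is taken

t = threading.Thread(target=worker)
t.start()
snapshot = threading.enumerate()   # list of threads alive right now
assert t in snapshot
t.join()
# The worker has exited: a fresh enumerate() no longer lists it, but the
# old snapshot still does.  The list was immediately outdated, yet useful.
print(t in threading.enumerate(), t in snapshot)
```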

> > So "close" aligns with other similarly purposed methods out there,
> > while "finalize" aligns with the existing C-API and also elevates the
> > complex nature of what happens.  If we change the name from "destroy"
> > then I'd lean toward "finalize".
>
> I don't see why C-API naming conventions would take precedence over
> Python naming conventions for naming a Python method.

Naming conventions aren't as important if we focus just on
communicating intent.  Maybe it's just me, but "close" does not
reflect the complexity that "finalize" does.

Regardless, if it is called "close" then folks can use
contextlib.closing() with it.  That's enough to sell me on it.
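For illustration, contextlib.closing() works with any object exposing a close() method; the Channel class below is a hypothetical stand-in for a PEP 554 channel end, not the proposed API itself.

```python
import contextlib

class Channel:
    # Hypothetical stand-in for a PEP 554 channel end; all that matters
    # to contextlib.closing() is that it has a close() method.
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

ch = Channel()
with contextlib.closing(ch):
    pass   # send/recv would happen here
print(ch.closed)   # the channel end was closed on exiting the block
```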

> Ok, let me rephrase. What does "is_shareable()" do?
> Is `None` sharable? What about `int`?

It's up to the Python implementation to decide if something is
shareable or not.  In the case of CPython, PEP 554 says: "Initially
this will include None, bytes, str, int, and channels. Further types
may be supported later."

> Its not an implementation detail. The user needs to know the *exact* set
> of objects that can be communicated. Using marshal or pickle provides
> that information.

The point of is_shareable() is to expose that information, though not
as a list.  Why would users want that full list?  It could be huge,
BTW.  If you are talking about documentation then yeah, we would
definitely document which types CPython considers shareable.
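A rough sketch of what is_shareable() might look like for the initial set PEP 554 names for CPython. The real predicate would be an implementation detail of the interpreters module; this stub just encodes None, bytes, str, and int (channels omitted).

```python
# Stub of the PEP 554 is_shareable() predicate, encoding only the
# initial CPython set the PEP names: None, bytes, str, int (plus
# channels, omitted here).  The real set is implementation-defined.
SHAREABLE_TYPES = (type(None), bytes, str, int)

def is_shareable(obj):
    return isinstance(obj, SHAREABLE_TYPES)

print(is_shareable(None), is_shareable(42), is_shareable([1, 2]))
```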


[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-29 Thread Eric Snow
On Wed, Apr 29, 2020 at 1:52 AM Paul Moore  wrote:
> One thing that isn't at all clear to me here is that when you say
> "Subinterpreters run all Python code", do you *just* mean the core
> language? Or the core language plus all builtins? Or the core
> language, builtins and the standard library? Because I think that the
> vast majority of users would expect a core/stdlib function like
> subinterpreters to support the full core+stdlib language.

Agreed.

> So my question would be, do all of the stdlib C extension modules
> support subinterpreters[1]? If they don't, then I think it's very
> reasonable to expect that to be fixed, in the spirit of "eating our
> own dogfood" - if we aren't willing or able to make the stdlib support
> subinterpreters, it's not exactly reasonable or fair to expect 3rd
> party extensions to do so.

That is definitely the right question. :)  Honestly I had not thought
of it that way (nor checked of course).  While many stdlib modules
have been updated to use heap types (see PEP 384) and support PEP 489
(Multi-phase Extension Module Initialization), there are still a few
stragglers.  Furthermore, I expect that there are few modules that
would give us trouble (maybe ssl, cdecimal).  It's all about global
state that gets shared inadvertently between subinterpreters.

Probably the best way to find out is to run the entire test suite in a
subinterpreter.  I'll do that as soon as I can.

> If, on the other hand, the stdlib *is* supported, then I think that
> "all of Python and the stdlib, plus all 3rd party pure Python
> packages" is a significant base of functionality, and an entirely
> reasonable starting point for the feature.

Yep, that's what I meant.  I just need to identify modules where we
need fixes.  Thanks for bringing this up!

> It certainly still excludes
> big parts of the Python ecosystem (notably scientific / data science
> users) but that seems fine to me - big extension users like those can
> be expected to have additional limitations. It's not really that
> different from the situation around C extension support in PyPy.

Agreed.

-eric


[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-29 Thread Eric Snow
On Wed, Apr 29, 2020 at 6:27 AM Julien Salort  wrote:
> If your proposal leads to an intelligible actual error, and a clear
> warning in the documentation, instead of a silent crash, this sounds
> like progress, even for those packages which won't work on
> subinterpreters anytime soon...

That's helpful.  Thanks!

-eric


[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-29 Thread Eric Snow
Thanks for the great insights into PyObjC!

On Wed, Apr 29, 2020 at 9:02 AM Ronald Oussoren  wrote:
> I don’t know how much the move of global state to per-interpreter state 
> affects extensions, other than references to singletons and static types.

That's the million dollar question. :)

FYI, one additional challenge is when an extension module depends on a
third-party C library which itself keeps global state which might leak
between subinterpreters.  The Cryptography project ran into this
several years ago with OpenSSL and they were understandably grumpy
about it.

> But with some macro trickery that could be made source compatible for 
> extensions.

Yeah, that's one approach that we've discussed in the past (e.g. at
the last core sprint).

-eric


[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-29 Thread Eric Snow
Thanks for the thoughtful post!  I'm going to address some of your
comments here and some in a separate discussion in the next few days.

On Wed, Apr 29, 2020 at 10:36 AM Sebastian Berg
 wrote:
> While I still think it is probably not part of PEP 554 as such, I guess
> it needs a full blown PEP on its own. Saying that Python should
> implement subinterpreters. (I am saying "implement" because I believe
> you must consider subinterpreters basically a non-feature at this time.
> It has neither users nor reasonable ecosystem support.)

FWIW, at this point it would be hard to justify removing the existing
public subinterpreters C-API.  There are several large public projects
using it and likely many more private ones we do not know about.

That's not to say that alone justifies exposing the C-API, of course. :)

> In many ways I assume that a lot of the ground work for subinterpreters
> was useful on its own.

There has definitely been a lot of code health effort related to the
CPython runtime code, partly motivated by this project. :)

> But please do not underestimate how much effort
> it will take to make subinterpreters a first-class citizen in the
> language!

If you are talking about on the CPython side, most of the work is
already done.  The implementation of PEP 554 is nearly complete and
subinterpreter support in the runtime has a few rough edges to buff
out.  The big question is the effort it will demand of the Python
community, which is the point Nathaniel has been emphasizing
(understandably).

> Believe me, I have been there and its tough to write these documents
> and then get feedback which you are not immediately sure what to make
> of.
> Thus, I hope those supporting the idea of subinterpreters will help you
> out and formulate a better framework and clarify PEP 554 when it comes
> to the fuzzy long term user-impact side of the PEP.

FYI, I started working on this project in 2015 and proposed PEP 554 in
2017.  This is actually the 6th round of discussion since then. :)

-eric


[Python-Dev] [RELEASE] Python 3.8.3rc1 is now ready for testing

2020-04-29 Thread Łukasz Langa
Python 3.8.3rc1 is the release candidate of the third maintenance release of 
Python 3.8. Go get it here:

https://www.python.org/downloads/release/python-383rc1/

Assuming no critical problems are found prior to 2020-05-11, the scheduled 
release date for 3.8.3, no code changes are planned between this release 
candidate and the final release.

That being said, please keep in mind that this is a pre-release and as such its 
main purpose is testing.

Maintenance releases for the 3.8 series will continue at regular bi-monthly 
intervals, with 3.8.4 planned for mid-July 2020.

What’s new?

The Python 3.8 series is the newest feature release of the Python language, and 
it contains many new features and optimizations. See the “What’s New in Python 
3.8” document for more 
information about features included in the 3.8 series.

Detailed information about all changes made in version 3.8.3 specifically can 
be found in its change log.

We hope you enjoy Python 3.8!

Thanks to all of the many volunteers who help make Python Development and these 
releases possible! Please consider supporting our efforts by volunteering 
yourself or through organization contributions to the Python Software 
Foundation.

Your friendly release team,
Ned Deily @nad 
Steve Dower @steve.dower 
Łukasz Langa @ambv


[Python-Dev] Re: Adding a "call_once" decorator to functools

2020-04-29 Thread Antoine Pitrou
On Wed, 29 Apr 2020 12:01:24 -0700
Raymond Hettinger  wrote:
> 
> The call_once() decorator would need different logic:
> 
> 1) if the function has already been called and result is known, return the 
> prior result  :-)
> 2) if function has already been called, but the result is not yet known, 
> either block or fail  :-(

It definitely needs to block.

> 3) call the function, this cannot be reentrant :-(

Right.  The typical use for such a function is lazy initialization of
some resource, not recursive computation.

> 4) record the result for future calls.
> 
[...]
> 
> Would it be fair to describe call_once() like this?
> 
> call_once() is just like lru_cache() but:
> 
> 1) guarantees that a function never gets called more than once
> 2) will block or fail if a thread-switch happens during a call

Definitely block.

> 3) only works for functions that take zero arguments
> 4) only works for functions that can never be reentrant
> 5) cannot make the one call guarantee across multiple processes
> 6) does not have instrumentation for number of hits
> 7) does not have a clearing or reset mechanism

Clearly, instrumentation and a clearing mechanism are not necessary.
They might be "nice to have", but needn't hinder initial adoption of
the API.

Regards

Antoine.



[Python-Dev] Re: Adding a "call_once" decorator to functools

2020-04-29 Thread Raymond Hettinger



> On Apr 29, 2020, at 4:20 PM, Antoine Pitrou  wrote:
> 
> On Wed, 29 Apr 2020 12:01:24 -0700
> Raymond Hettinger  wrote:
>> 
>> The call_once() decorator would need different logic:
>> 
>> 1) if the function has already been called and result is known, return the 
>> prior result  :-)
>> 2) if function has already been called, but the result is not yet known, 
>> either block or fail  :-(
> 
> It definitely needs to block.

Do you think it is safe to hold a non-reentrant lock across an arbitrary user 
function?

Traditionally, the best practice for locks was to acquire, briefly access a 
shared resource, and release promptly.

>> 3) call the function, this cannot be reentrant :-(
> 
> Right.  The typical use for such a function is lazy initialization of
> some resource, not recursive computation.

Do you have some concrete examples we could look at?   I'm having trouble 
visualizing any real use cases and none have been presented so far.

Presumably, the initialization function would have to take zero arguments, have 
a useful return value, must be called only once, not be idempotent, wouldn't 
fail if called in two different processes, can be called from multiple places, 
and can guarantee that a decref, gc, __del__, or weakref callback would never 
trigger a reentrant call.

Also, if you know of a real world use case, what solution is currently being 
used.  I'm not sure what alternative call_once() is competing against.

>> 
>> 6) does not have instrumentation for number of hits
>> 7) does not have a clearing or reset mechanism
> 
> Clearly, instrumentation and a clearing mechanism are not necessary.
> They might be "nice to have", but needn't hinder initial adoption of
> the API.

Agreed.  It is inevitable that those will be requested, but they are incidental 
to the core functionality.

Do you have any thoughts on what the semantics should be if the inner function 
raises an exception?  Would a retry be allowed?  Or does call_once() literally 
mean "can never be called again"?



Raymond


[Python-Dev] Re: PEP 554 comments

2020-04-29 Thread Greg Ewing

On 29/04/20 2:12 pm, Eric Snow wrote:

One of the main inspirations for the
proposed channels is CSP (and somewhat relatedly, my in-depth
experience with Go).  Channels are more than just a thread-safe data
transport between interpreters.


It's a while since I paid attention to the fine details
of CSP. I'll have to do some research on that.


Furthermore, IMHO "release" is better at communicating the
per-interpreter nature than "close".


Channels are a similar enough concept to pipes that I think
it would be confusing to have "close" mean "close for all
interpreters". Everyone understands that "closing" a pipe
only means you're closing your reference to one end of it,
and they will probably assume closing a channel means the
same.

Maybe it would be better to have a different name such as
"destroy" for a complete shutdown.


With pipes the
main difference is how many actors are involved.  Pipes involve one
sender and one receiver, right?


Not necessarily. Mostly they're used that way, but there's
nothing to stop multiple processes having a handle on the
reading or writing end of a pipe simultaneously. Of course
you have to be careful about how you interleave the reads
and writes if you do that.
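Greg's point can be demonstrated with os.pipe(): nothing stops two threads (or processes) from sharing the same write end, as long as the reads and writes are interleaved with care. A minimal sketch:

```python
import os
import threading

r, w = os.pipe()

def writer(msg):
    os.write(w, msg)   # both threads write to the same pipe end

threads = [threading.Thread(target=writer, args=(m,)) for m in (b"a", b"b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
os.close(w)            # close the only write end so the reader sees EOF

data = b""
while chunk := os.read(r, 16):
    data += chunk
os.close(r)
print(sorted(data.decode()))   # both writes arrive; the order is unspecified
```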

--
Greg


[Python-Dev] Re: PEP 554 comments

2020-04-29 Thread Eric Snow
On Wed, Apr 29, 2020, 22:05 Greg Ewing  wrote:

> > Furthermore, IMHO "release" is better at communicating the
> > per-interpreter nature than "close".
>
> Channels are a similar enough concept to pipes that I think
> it would be confusing to have "close" mean "close for all
> interpreters". Everyone understands that "closing" a pipe
> only means you're closing your reference to one end of it,
> and they will probably assume closing a channel means the
> same.
>

FWIW, I'd compare channels more closely to queues than to pipes.

-eric


[Python-Dev] Re: Adding a "call_once" decorator to functools

2020-04-29 Thread Greg Ewing

On 30/04/20 3:29 pm, Raymond Hettinger wrote:

Do you think it is safe to hold a non-reentrant lock across an
arbitrary user function?


I think what's wanted here is to block if it's locked by
a different thread, but raise an exception if it's locked
by the same thread.
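That behaviour (block on contention from another thread, fail fast on a reentrant attempt by the owning thread) could be sketched with an owner-tracking lock. Names here are illustrative, not a proposed API:

```python
import threading

class OnceLock:
    # Sketch: block when a different thread holds the lock, but raise
    # immediately on a reentrant acquire by the owning thread instead
    # of deadlocking.
    def __init__(self):
        self._lock = threading.Lock()
        self._owner = None

    def acquire(self):
        me = threading.get_ident()
        if self._owner == me:
            raise RuntimeError("reentrant call while the call is in progress")
        self._lock.acquire()   # blocks if another thread is mid-call
        self._owner = me

    def release(self):
        self._owner = None
        self._lock.release()

lock = OnceLock()
lock.acquire()
try:
    lock.acquire()             # same thread: fails fast, no deadlock
except RuntimeError as exc:
    print("caught:", exc)
lock.release()
```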

--
Greg