[Python-Dev] Re: PEP 684: A Per-Interpreter GIL

2022-03-14 Thread Jim J. Jewett
> That sounds like a horrible idea. The GIL should never be held during an
> I/O operation.

For a greenfield design, I agree that it would be perverse.  But I thought we 
were talking about affordances for transitions from code that was written 
without consideration of multiple interpreters.  In those cases, the GIL can be 
a way of saying "OK, this is the part where I haven't thought things through 
yet."  Using a more fine-grained lock would be better, but would take a lot 
more work and be more error-prone.

For a legacy system, I've seen plenty of situations where a blunt (but simple) 
hammer like "Grab the GIL" would still be a huge improvement over the status 
quo.  And those situations tend to occur with the sort of clients where 
"Brutally inefficient, but it does work because the fragile parts are 
guaranteed by an external tool" is the right tradeoff.
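In pure-Python terms, the "blunt hammer" amounts to one module-wide lock around all of the legacy state, rather than reasoning about which parts are actually racy. A rough analogy (hypothetical names, not a real migration recipe):

```python
import threading

# Hypothetical legacy state, written assuming a single interpreter
# and implicit GIL protection.
_legacy_cache = {}

# The "blunt hammer": one big lock standing in for "grab the GIL" --
# serialize everything instead of designing fine-grained locking.
_big_lock = threading.Lock()

def legacy_lookup(key, compute):
    """Thread-safe wrapper around code that was never audited for
    concurrent use. Inefficient, but simple and hard to get wrong."""
    with _big_lock:
        if key not in _legacy_cache:
            _legacy_cache[key] = compute(key)
        return _legacy_cache[key]
```

The fine-grained alternative (per-key locks, lock-free reads) would scale better, but as the message says, it takes more work and is more error-prone.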
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/AAWSCUNVS2NUXRHVATO736KM6I5M6RK5/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Defining tiered platform support

2022-03-14 Thread Brett Cannon
On Fri, Mar 11, 2022 at 5:17 PM Victor Stinner  wrote:

> It would be great to have the list of supported platforms per Python
> version!
>

I could see the table in PEP 11 being copied into the release PEPs.


>
> Maybe supporting new platforms and dropping support for a platform
> should be documented in What's New in Python x.y. GCC does that, for
> example. It also *deprecates* support for some platforms. Example:
> https://gcc.gnu.org/gcc-9/changes.html
>
> --
>
> It's always hard for me to know what is the minimum supported Windows
> version. PEP 11 refers to Windows support:
> https://peps.python.org/pep-0011/#microsoft-windows
>
> But I don't know how to get this info from the Microsoft
> documentation. I usually dig into Wikipedia articles to check which
> Windows version is still supported or not, but I'm confused between
> "mainstream support" and "extended support".
>

It's "free with purchase" and "pay us more and we will keep supporting
you". You can think of it as standard versus extended warranties.

https://docs.microsoft.com/en-us/lifecycle/policies/fixed


>
> For example, which Python version still supports Windows 7? Wikipedia
> says that Windows 7 mainstream support ended in 2015, and extended
> support ended in 2020. But Python still has a Windows 7 SP1 buildbot
> for Python 3.8: https://buildbot.python.org/all/#/builders/60


Just because we have a buildbot does not mean we support it. All it means
is someone in the community cares enough about Windows 7 to want to know
when CPython no longer works.


>
>
> What is the minimum Windows supported by Python 3.10?
>

I believe it's Windows 8.

https://docs.microsoft.com/en-us/lifecycle/faq/windows
https://docs.microsoft.com/en-us/lifecycle/products/windows-81

-Brett


>
> Victor
>
> On Mon, Mar 7, 2022 at 8:06 PM Christian Heimes wrote:
> >
> > On 07/03/2022 18.02, Petr Viktorin wrote:
> > >> Why the devguide? I view the list of platforms as important for public
> > >> consumption as for the core dev team to know what to (not) accept PRs
> > >> for.
> > >
> > > So, let's put it in the main docs?
> > > Yes, I guess the devguide is a weird place to check for this kind of
> > > info. But a Python enhancement proposal is even weirder.
> >
> >
> > +1 for our main docs (cpython/Doc/)
> >
> > Platform support is Python-version specific. Python 3.10 may support
> > different versions than 3.11 or 3.12. It makes sense to keep the support
> > information with the code.
> >
> > Christian
>
>
>
> --
> Night gathers, and now my watch begins. It shall not end until my death.
>
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/GCWULFTKFZNJFV7FWDMFBQVBMY5QBJJQ/


[Python-Dev] Re: PEP 683: "Immortal Objects, Using a Fixed Refcount" (round 3)

2022-03-14 Thread Petr Viktorin

On 12. 03. 22 2:45, Eric Snow wrote:

responses inline


I'll snip some discussion for a reason I'll get to later, and get right 
to the third alternative:



[...]

"Special-casing immortal objects in tp_dealloc() for the relevant types
(but not int, due to frequency?)" sounds promising.

The "relevant types" are those for which we skip calling incref/decref
entirely, like in Py_RETURN_NONE. This skipping is one of the optional
optimizations, so we're entirely in control of if/when to apply it.


We would definitely do it for those types.  NoneType and bool already
have a tp_dealloc that calls Py_FatalError() if triggered.  The
tp_dealloc for str & tuple have special casing for some singletons
that do likewise.  In PyType_Type.tp_dealloc we have a similar assert
for static types.  In each case we would instead reset the refcount to
the initial immortal value.  Regardless, in practice we may only need
to worry (as noted above) about the problem for the most commonly used
global objects, so perhaps we could stop there.

However, it depends on what the level of risk is, such that it would
warrant incurring additional potential performance/maintenance costs.
What is the likelihood of actual crashes due to pathological
de-immortalization in older stable ABI extensions?  I don't have a
clear answer to offer on that but I'd only expect it to be a problem
if such extensions are used heavily in (very) long-running processes.


How much would it slow things back down if it wasn't done for ints at all?


I'll look into that.  We're talking about the ~260 small ints, so it
depends on how much they are used relative to all the other int
objects that are used in a program.


Not only that -- as far as I understand, it's only cases where we know 
at compile time that a small int is being returned. AFAIK, that would be 
fast branches of arithmetic code, but not much else.


If not optimizing small ints is OK performance-wise, then everything 
looks good: we say that the “skip incref/decref” optimization can only 
be done for types whose instances are *all* immortal, leave it to future 
discussions to relax the requirement, and PEP 683 is good to go!


With that in mind, I snipped your discussion of the previous alternative. 
Going with this one wouldn't prevent us from doing something more clever 
in the future.




Some more reasoning for not worrying about de-immortalizing in types
without this optimization:
These objects will be de-immortalized with refcount around 2^29, and
then incref/decref go back to being paired properly. If 2^29 is much
higher than the true reference count at de-immortalization, this'll just
cause a memory leak at shutdown.
And it's probably OK to assume that the true reference count of an
object can't be anywhere near 2^29: most of the time, to hold a
reference you also need to have a pointer to the referenced object, and
there ain't enough memory for that many pointers. This isn't a formally
sound assumption, of course -- you can incref a million times with a
single pointer if you pair the decrefs correctly. But it might be why we
had no issues with "int won't overflow", an assumption which would fail
with just 4× higher numbers.
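The arithmetic behind that paragraph can be sanity-checked in a few lines (a back-of-envelope sketch, not CPython code):

```python
# "There ain't enough memory for that many pointers": holding 2**29 live
# references, at one 8-byte pointer each on a 64-bit build, needs 4 GiB
# of pointer storage alone -- just for references to a single object.
refs = 2 ** 29
pointer_bytes = refs * 8
print(pointer_bytes // (1 << 30))   # GiB of pointers required

# The margin mentioned at the end: a signed 32-bit refcount overflows at
# 2**31, only 4x above the 2**29 de-immortalization threshold.
print(2 ** 31 // 2 ** 29)
```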


Yeah, if we're dealing with properly paired incref/decref then the
worry about crashing after de-immortalization is mostly gone.  The
problem is where in the runtime we would simply not call Py_INCREF()
on certain objects because we know they are immortal.  For instance,
Py_RETURN_NONE (outside the older stable ABI) would no longer incref,
while the problematic stable ABI extension would keep actually
decref'ing until we crash.

Again, I'm not sure what the likelihood of this case is.  It seems
very unlikely to me.


Of course, this argument would apply to immortalization and 64-bit
builds as well. I wonder if there are holes in it :)


With the massive numbers involved on 64-bit builds, the problem is super
unlikely, so we don't need to worry.
Or perhaps I misunderstood your point?


That's true. However, as we're adjusting incref/decref documentation for 
this PEP anyway, it looks like we could add “you should keep a pointer 
around for each reference you hold”, and go from  “super unlikely” to 
“impossible in well-behaved code” :)



Oh, and if the "Special-casing immortal objects in tp_dealloc()" way is
valid, refcount values 1 and 0 can no longer be treated specially.
That's probably not a practical issue for the relevant types, but it's
one more thing to think about when applying the optimization.


Given the low chance of the pathological case, the nature of the
conditions where it might happen, and the specificity of 0 and 1
amongst all the possible values, I wouldn't consider this a problem.


+1. But it's worth mentioning that it's not a problem.

[Python-Dev] SC accepted PEP 594: Removing dead batteries from the standard library

2022-03-14 Thread Victor Stinner
Hi,

Oh, the Steering Council accepted PEP 594 "Removing dead batteries
from the standard library"! I just saw the announcement on Discourse.
Congratulations Christian and Brett! This PEP, first proposed in 2019,
wasn't an easy one.

https://peps.python.org/pep-0594/

Gregory P. Smith's message on Discourse:

"""
On behalf of the Python Steering Council,

We are accepting PEP-594, Removing dead batteries from the standard library.

It removes a non-controversial set of very old unmaintained or
obsolete libraries from the Python standard library. We expect this
PEP to be a one time event, and for future deprecations to be handled
differently.

One thing we’d like to see happen while implementing it: Document the
status of the modules being deprecated and removed and backport those
deprecation updates to older CPython branch documentation (at least
back to 3.9). That gets the notices in front of more people who may
use the docs for their specific Python version.

Particular care should also be taken during the pre-release cycles
that remove deprecated modules. If it turns out the removal of a
module proves to be a problem in practice despite the clear
deprecation, deferring the removal of that module should be considered
to avoid disruption.

Doing a “mass cleanup” of long obsolete modules is a sign that we as a
project have been ignoring rather than maintaining parts of the
standard library, or not doing so with the diligence being in the
standard library implies they deserve. Resolving ongoing discussions
around how we define the stdlib for the long term does not block this
PEP. It seems worthwhile for us to conduct regular reviews of the
contents of the stdlib every few releases so we can avoid accumulating
such a large pile of dead batteries, but this is outside the scope of
this particular PEP.

– Greg for the PSC
"""
https://discuss.python.org/t/pep-594-take-2-removing-dead-batteries-from-the-standard-library/13508/21

Victor
-- 
Night gathers, and now my watch begins. It shall not end until my death.
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/HZXXAHW6K65UTNI2BXWBF5G4XNM644YM/