[Python-Dev] Re: PEP 683: "Immortal Objects, Using a Fixed Refcount" (round 2)

2022-02-23 Thread Sebastian Berg
On Thu, 2022-02-24 at 00:21 +0100, Antonio Cuni wrote:
> On Mon, Feb 21, 2022 at 5:18 PM Petr Viktorin 
> wrote:
> 
> Should we care about hacks/optimizations that rely on having the only
> > reference (or all references), e.g. mutating a tuple if it has
> > refcount
> > 1? Immortal objects shouldn't break them (the special case simply
> > won't
> > apply), but this wording would make them illegal.
> > AFAIK CPython uses this internally, but I don't know how
> > prevalent/useful it is in third-party code.
> > 
> 
> FWIW, a real world example of this is numpy.ndarray.resize(...,
> refcheck=True):
> https://numpy.org/doc/stable/reference/generated/numpy.ndarray.resize.html#numpy.ndarray.resize
> https://github.com/numpy/numpy/blob/main/numpy/core/src/multiarray/shape.c#L114
> 
> When refcheck=True (the default), numpy raises an error if you try to
> resize an array in place whose refcount is > 2 (although I don't
> understand why > 2 and not > 1, and the docs aren't very clear about
> this).
> 
> That said, relying on the exact value of the refcnt is very bad for
> alternative implementations and for HPy, and in particular it is
> impossible to implement ndarray.resize(refcheck=True) correctly on
> PyPy. So from this point of view, a wording which explicitly restricts
> the "legal" usage of the refcnt details would be very welcome.

Yeah, NumPy resizing is a bit of an awkward point; I would be on board
with just replacing resize for non

NumPy does also have a bit of magic akin to the "string concat" trick
for operations like:

a + b + c

where it will try to do some magic, using the knowledge that it can
mutate/reuse the temporary array, effectively doing:

tmp = a + b
tmp += c

(which requires some stack walking magic additionally to the refcount!)
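For the curious, the effect of that rewrite can be sketched in plain Python (this just mimics what the elision does; it is not NumPy's actual C implementation, which works at the refcount/stack level):

```python
import numpy as np

a, b, c = (np.arange(4.0) for _ in range(3))

# What "a + b + c" effectively becomes when NumPy can prove the
# intermediate result is a temporary that nobody else references:
tmp = a + b   # fresh array with refcount 1
tmp += c      # safe to mutate in place: no one else can observe it
elided = tmp

# Same result as the naive evaluation, with one allocation fewer.
assert np.array_equal(elided, a + b + c)
```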

Cheers,

Sebastian





___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/HSCF5XPQMWRX45Y2PVNPVSCDT4GC6PTB/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 683: "Immortal Objects, Using a Fixed Refcount" (round 2)

2022-02-23 Thread Antonio Cuni
On Mon, Feb 21, 2022 at 5:18 PM Petr Viktorin  wrote:

Should we care about hacks/optimizations that rely on having the only
> reference (or all references), e.g. mutating a tuple if it has refcount
> 1? Immortal objects shouldn't break them (the special case simply won't
> apply), but this wording would make them illegal.
> AFAIK CPython uses this internally, but I don't know how
> prevalent/useful it is in third-party code.
>

FWIW, a real world example of this is numpy.ndarray.resize(...,
refcheck=True):
https://numpy.org/doc/stable/reference/generated/numpy.ndarray.resize.html#numpy.ndarray.resize
https://github.com/numpy/numpy/blob/main/numpy/core/src/multiarray/shape.c#L114

When refcheck=True (the default), numpy raises an error if you try to
resize an array in place whose refcount is > 2 (although I don't
understand why > 2 and not > 1, and the docs aren't very clear about this).
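FWIW, the "> 2" is probably because the method call itself temporarily holds an extra reference to the array, the same reason sys.getrefcount reports one more reference than you might expect. A small sketch of the refcheck behavior, assuming a plain non-interactive script (the REPL keeps extra references that change the outcome):

```python
import sys
import numpy as np

x = object()
# One "real" reference (the variable x) plus the temporary reference
# created by passing x as a call argument:
assert sys.getrefcount(x) == 2

a = np.arange(10)
a.resize(5)            # ok: refcount is 2 during the call (variable + argument)
assert a.shape == (5,)

b = a                  # a second real reference pushes the refcount to 3
try:
    a.resize(3)        # refcheck=True (the default) now refuses
except ValueError:
    shrunk = False
else:
    shrunk = True
assert not shrunk
```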

That said, relying on the exact value of the refcnt is very bad for
alternative implementations and for HPy, and in particular it is impossible
to implement ndarray.resize(refcheck=True) correctly on PyPy. So from this
point of view, a wording which explicitly restricts the "legal" usage of
the refcnt details would be very welcome.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/ACJIER45M6XLKUWT6TCLB6QXVZSB74EH/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Require a C compiler supporting C99 to build Python 3.11

2022-02-23 Thread h . vetinari
> And it seems like we still care about supporting Visual Studio 2017, even
> if Visual Studio 2019 and 2022 are available.

Can we make this more concrete? Do we know of affected parties? Numbers
of affected users? Or are we just conservatively guesstimating the thickness
of the long tail of support?

On top of that, I think the formulation you chose does not map correctly to
actual compiler capabilities. C99 (minus C11 optionals) only arrived in VS2019;
there were still gaps in VS2017.

I would advocate bumping the requirement to VS2019. Yes, there is a built-in
reluctance to update toolchains, but MSFT has been _extremely_ conservative
w.r.t. ABI compatibility (e.g. the whole story with
[[msvc::no_unique_address]]), and I just don't see the argument why a
non-negligible portion of users cannot upgrade to the
fully-compatible-with-everything-previous VS2019.

> "Python 3.11 and newer versions use C99 in the public C API and use a
> subset of C11 in Python internals. The public C API should be
> compatible with C++. The C11 subset are features supported by GCC 8.5,
> clang 8.0, and MSVC of Visual Studio 2017."

If we were to require VS2019, then the formulation could be much nicer (IMO):
"""
Python 3.11 and newer versions use C99 in the public C API (without those
features that became optional in C11), while Python internals may use C11
(again without optional features). The public C API should be compatible with 
C++.
"""
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/PAIVGORIDRIGLPCSQI5KN6U6CKFTBHPX/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 683: "Immortal Objects, Using a Fixed Refcount" (round 2)

2022-02-23 Thread Brett Cannon
On Wed, Feb 23, 2022 at 8:19 AM Petr Viktorin  wrote:

> On 23. 02. 22 2:46, Eric Snow wrote:

[SNIP]

> > So it seems like the bar should be pretty low for this one (assuming
> > we get the performance penalty low enough).  If it were some massive
> > or broadly impactful (or even clearly public) change then I suppose
> > you could call the motivation weak.  However, this isn't that sort of
> > PEP.


Yes, but PEPs are not just about complexity, but also impact on users. And
"impact" covers backwards compatibility, which includes performance
regressions (i.e. making Python slower means it may no longer be viable
for someone with specific performance requirements). So with the initial 4%
performance regression it made sense to write a PEP.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/Z4SXVNRLHFWRPLB4UQZQVW7SKDUJH6GY/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Version 2 of PEP 670 – Convert macros to functions in the Python C API

2022-02-23 Thread Victor Stinner
On Wed, Feb 23, 2022 at 7:11 PM Petr Viktorin  wrote:
> I did realize there's one more issue when converting macros or static
> inline functions to regular functions.
> Regular functions' bodies aren't guarded by limited API #ifdefs, so if
> they are part of the limited API it's easy to forget to think about it
> when changing them.
> If a macro in the limited API is converted to a regular function, then a
> test should be added to ensure the old implementation of the macro (i.e.
> what's compiled into stable ABI extensions) still works.

Does this problem really belong to PEP 670 "Convert macros to functions
in the Python C API", or is it more a matter for PEP 652 "Maintaining
the Stable ABI"?

I don't think that Python 3.11 should keep a copy of the Python 3.10
macros: it would increase the maintenance burden, since each function
would have 2 implementations (the 3.11 function and the 3.10 macro).
Also, there would be no guarantee that the copied 3.10 macros would
remain exactly the same as the 3.10 code if someone changed them by
mistake, directly or indirectly (by changing code used by the macro,
changing a compiler flag, etc.).

Maybe such a stable ABI test belongs in an external project that builds
a C extension with the Python 3.10 limited C API (or an older version)
and then tests it on Python 3.11. IMO that's the most reliable way to
test the stable ABI: a functional test.

Victor
-- 
Night gathers, and now my watch begins. It shall not end until my death.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/HTO2BIVD2SIJGXY3HC7OFG3YW7PXXTT6/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Version 2 of PEP 670 – Convert macros to functions in the Python C API

2022-02-23 Thread Victor Stinner
On Wed, Feb 23, 2022 at 7:11 PM Petr Viktorin  wrote:
> In the PEP, the "Performance and inlining" section seems unnecessary. It
> talks about attributes that aren't used in the implementation. Or are
> they? How does the section relate to the rest of the PEP?
> The "Benchmark comparing macros and static inline functions" section at
> the end should be enough to explain the impact.

I added this section to the PEP because some core devs believe that C
compilers sometimes fail to inline static inline functions, and that
this causes performance regressions. This section is an answer to that:
I checked that static inline functions *are* inlined as expected in
practice. The "Benchmark comparing macros and static inline functions"
section uses regular benchmarks to confirm that.

Forcing the compiler to inline, or asking it not to, has also been
discussed multiple times when converting macros to static inline
functions was proposed. So I prefer to keep it in the PEP, so that
people don't have to dig through old discussions to piece together
scattered information about it.

You may want to dig into links from the PEP to see old discussions.
See for example https://bugs.python.org/issue35059 for the first
discussion on forcing inline or not.

Victor
-- 
Night gathers, and now my watch begins. It shall not end until my death.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/YVW7ULTTC6FNTOLL2BPNQOOV3NMCH4BK/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Version 2 of PEP 670 – Convert macros to functions in the Python C API

2022-02-23 Thread Petr Viktorin

On 22. 02. 22 13:41, Victor Stinner wrote:

Hi,

Since Erlend and I posted PEP 670 on python-dev last October, we have
taken all feedback (python-dev and Steering Council) into account to
clarify the intent of the PEP and to reduce its scope (removing *any*
risk of backward incompatibility).

PEP 670: https://python.github.io/peps/pep-0670/



I did realize there's one more issue when converting macros or static 
inline functions to regular functions.
Regular functions' bodies aren't guarded by limited API #ifdefs, so if 
they are part of the limited API it's easy to forget to think about it 
when changing them.
If a macro in the limited API is converted to a regular function, then a 
test should be added to ensure the old implementation of the macro (i.e. 
what's compiled into stable ABI extensions) still works.



In the PEP, the "Performance and inlining" section seems unnecessary. It 
talks about attributes that aren't used in the implementation. Or are 
they? How does the section relate to the rest of the PEP?
The "Benchmark comparing macros and static inline functions" section at 
the end should be enough to explain the impact.



[...]

> Following the SC decision, we already modified PEP 7 to add:
>
> "static inline functions should be preferred over macros in new code."
> https://www.python.org/dev/peps/pep-0007/#c-dialect


Nice! Thanks!
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/MAWV2EGOEKO22IRIUZD66QD2ZAOA55OW/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Require a C compiler supporting C99 to build Python 3.11

2022-02-23 Thread Victor Stinner
Hi,

I updated my PEP 7 PR to use C99 in the public C API and "a subset of"
C11 in Python internals:

"Python 3.11 and newer versions use C99 in the public C API and use a
subset of C11 in Python internals. The public C API should be
compatible with C++. The C11 subset are features supported by GCC 8.5,
clang 8.0, and MSVC of Visual Studio 2017."

https://github.com/python/peps/pull/2309/files

GCC 8.5 is the version chosen by RHEL 8. It should provide C11
features that we care about.

I picked clang 8.0 because it was released in 2019 and so should be
available on most operating systems. FreeBSD uses clang by default;
FreeBSD 13 ships clang 11.

And it seems like we still care about supporting Visual Studio 2017, even
if Visual Studio 2019 and 2022 are available.

I chose not to require supporting AIX XLC. Inada-san wrote: "xlclang
fully supports C89/C99/C11. xlc fully supports C89/C99, and partially
supports C11." I guess that in practice we can test a PR on the
buildbots when trying a shiny new C11 feature.

Moreover, if a C11 feature is missing, it's usually not too complicated
to fall back to a workaround that works with C99 and older.

Victor
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/5IUV6IEB2ARZ4R6MCVK7FZFKRLH7M5NV/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: SC reply to PEP 674 -- Disallow using macros as l-values

2022-02-23 Thread Petr Viktorin

On 22. 02. 22 15:10, Victor Stinner wrote:

Hi,

Thanks for looking into my PEP 674!

I don't quite understand why Py_SIZE() cannot be changed until
Py_SET_SIZE() is available on all supported Python versions (so that
affected projects don't have to add compatibility code), whereas it's
OK to require compatibility code to keep support for Python 3.8 and
older for the approved Py_TYPE() change (Py_SET_TYPE() was added in
Python 3.9).


See the exception for Py_TYPE:
https://github.com/python/steering-council/issues/79#issuecomment-982702725



This decision only applies to Py_SET_TYPE



We mean that in general (not for this change as we have made an exception per 
our previous argument), there should be a deprecation notice in some way or 
form (in these cases must be in the docs/what's new as there is no way to make 
the preprocessor warn only when we need it). We just wanted to reiterate that 
in general, we still want two releases with some deprecation notice for any 
deprecation, unless exempt.


The SC didn't want to revert the earlier SC's decision.
But this PEP lists the dozens of new macros, and no clear reason why the 
general policy shouldn't apply.




Many projects affected by the Py_SIZE() change are already affected by
the Py_TYPE() change, and so need compatibility code anyway: 33%
(5/15) of the affected projects are hit by both the Py_SIZE() and
Py_TYPE() changes (see below).

Not changing Py_SIZE() now only avoids adding compatibility code to the
33% (5/15) of affected projects that are hit by the Py_SIZE() change alone.

Either changing Py_TYPE() *and* Py_SIZE(), or neither, would make more
sense to me. Well, I would prefer to change both, since the work is
already done. The change is already part of Python 3.11 and all
projects known to be affected are already fixed. And the Py_TYPE()
change was already approved.


IMO it's still possible to change neither of them *now* and go through
a deprecation-in-docs period for everything, if you want that.




--

The Py_TYPE() change requires a compatibility code to get
Py_SET_TYPE() on Python 3.8 and older, use pythoncapi_compat.h or add:

#if PY_VERSION_HEX < 0x030900A4
# define Py_SET_TYPE(obj, type) ((Py_TYPE(obj) = (type)), (void)0)
#endif

The Py_SIZE() change requires a similar compatibility code. For
example, boost defines Py_SET_TYPE() *and* Py_SET_SIZE():

#if PY_VERSION_HEX < 0x030900A4
# define Py_SET_TYPE(obj, type) ((Py_TYPE(obj) = (type)), (void)0)
# define Py_SET_SIZE(obj, size) ((Py_SIZE(obj) = (size)), (void)0)
#endif


Right. But on 3.9+ you don't need any of that, and since there is no
need to make the change *right* now, it can wait until 3.8 becomes history.




--

Affected projects from PEP 674.

Projects affected by Py_SIZE() and Py_TYPE() changes (5):

* guppy3: Py_SET_TYPE(), Py_SET_SIZE(), Py_SET_REFCNT(), use pythoncapi_compat.h
* bitarray: Py_SET_TYPE(), Py_SET_SIZE(), use pythoncapi_compat.h
* mypy: Py_SET_TYPE(), Py_SET_SIZE(), use pythoncapi_compat.h
* numpy: Py_SET_TYPE(), Py_SET_SIZE(), custom compatibility code
* boost: Py_SET_TYPE(), Py_SET_SIZE(), custom compatibility code

Projects only affected by the Py_SIZE() change (5):

* python-snappy: Py_SET_SIZE(), use pythoncapi_compat.h
* recordclass: use custom py_refcnt() and py_set_size() macros
* Cython: Py_SET_SIZE(), Py_SET_REFCNT(), custom compatibility code
* immutables: Py_SET_SIZE(), use pythoncapi_compat.h
* zstd: Py_SET_SIZE(), use pythoncapi_compat.h

Projects only affected by Py_TYPE() change (5):

* datatable: Py_SET_TYPE(), Py_SET_REFCNT(), use pythoncapi_compat.h
* mercurial: Py_SET_TYPE(), use pythoncapi_compat.h
* pycurl: Py_SET_TYPE(), custom compatibility code
* duplicity: Py_SET_TYPE(), test PY_MAJOR_VERSION and
PY_MINOR_VERSION, or use Py_TYPE() as l-value
* gobject-introspection: Py_SET_TYPE(), custom compatibility code

These examples don't count the larger number of affected projects
using Cython which only need to re-run Cython to use Py_SET_REFCNT(),
Py_SET_TYPE() and Py_SET_SIZE().

I would like to add that 100% of the top 5000 PyPI projects are
already fixed for PEP 674, but 26 projects need a release that includes
the fix (which will likely happen before the Python 3.11 final release).


Thank you for surveying and fixing popular public projects. As you know,
there are many more projects that are not public, not popular, and not fixed.

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/VISOEN6UOAXSSQ4QU7QNAILMQBYWF3I2/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 683: "Immortal Objects, Using a Fixed Refcount" (round 2)

2022-02-23 Thread Petr Viktorin

On 23. 02. 22 2:46, Eric Snow wrote:

Thanks for the responses.  I've replied inline below.


Same here :)



Immortal Global Objects
---

All objects that we expect to be shared globally (between interpreters)
will be made immortal.  That includes the following:

* singletons (``None``, ``True``, ``False``, ``Ellipsis``, ``NotImplemented``)
* all static types (e.g. ``PyLong_Type``, ``PyExc_Exception``)
* all static objects in ``_PyRuntimeState.global_objects`` (e.g. identifiers,
small ints)

All such objects will be immutable.  In the case of the static types,
they will be effectively immutable.  ``PyTypeObject`` has some mutable
state (``tp_dict`` and ``tp_subclasses``), but we can work around this
by storing that state on ``PyInterpreterState`` instead of on the
respective static type object.  Then the ``__dict__``, etc. getter
will do a lookup on the current interpreter, if appropriate, instead
of using ``tp_dict``.


But tp_dict is also public C-API. How will that be handled?
Perhaps naively, I thought static types' dicts could be treated as
(deeply) immutable, and shared?


They are immutable from Python code but not from C (due to tp_dict).
Basically, we will document that tp_dict should not be used directly
(in the public API) and refer users to a public getter function.  I'll
note this in the PEP.
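(The Python-level immutability is easy to check today: a static type already exposes its namespace to Python code as a read-only mappingproxy, even though tp_dict is a plain dict at the C level. The key used below is hypothetical, purely for illustration.)

```python
import types

# vars(int) returns the read-only mappingproxy wrapper, not tp_dict itself.
d = vars(int)
assert isinstance(d, types.MappingProxyType)

try:
    d["forty_two"] = 42   # hypothetical attribute, for illustration
except TypeError:
    mutated = False
else:
    mutated = True
assert not mutated
```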


What worries me is that existing users of the API haven't read the new 
documentation. What will happen if users do use it?

Or worse, add things to it?

(Hm, the current docs are already rather confusing -- 3.2 added a note 
that "It is not safe to ... modify tp_dict with the dictionary C-API.", 
but above that it says "extra attributes for the type may be added to 
this dictionary [in some cases]")



[...]

And from the other thread:

On 17. 02. 22 18:23, Eric Snow wrote:
  > On Thu, Feb 17, 2022 at 3:42 AM Petr Viktorin  wrote:
  >>>> Weren't you planning a PEP on subinterpreter GIL as well? Do you
  >>>> want to submit them together?
  >>>
  >>> I'd have to think about that.  The other PEP I'm writing for
  >>> per-interpreter GIL doesn't require immortal objects.  They just
  >>> simplify a number of things.  That's my motivation for writing this
  >>> PEP, in fact. :)
  >>
  >> Please think about it.
  >> If you removed the benefits for per-interpreter GIL, the motivation
  >> section would be reduced to is memory savings for fork/CoW. (And lots of
  >> performance improvements that are great in theory but sum up to a 4%
loss.)
  >
  > Sounds good.  Would this involve more than a note at the top of the PEP?

No, a note would work great. If you read the motivation carefully, it's
(IMO) clear that it's rather weak without the other PEP. But that
realization shouldn't come as a surprise to the reader.


Having thought about it some more, I don't think this PEP should be
strictly bound to per-interpreter GIL.  That is certainly my personal
motivation.  However, we have a small set of users that would benefit
significantly, the change is relatively small and simple, and the risk
of breaking users is also small.  In fact, we regularly have more
disruptive changes to internals that do not require a PEP.


Right, with the recent performance improvements it's looking like it 
might stand on its own after all.



So it seems like the bar should be pretty low for this one (assuming
we get the performance penalty low enough).  If it were some massive
or broadly impactful (or even clearly public) change then I suppose
you could call the motivation weak.  However, this isn't that sort of
PEP.  Honestly, it might not have needed a PEP in the first place if I
had been a bit more clear about the idea earlier.


Maybe it's good to have a PEP to clear that up :)
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/ZTON72YXUUFV5MX5KIEM3DDNAUAZT4M6/
Code of Conduct: http://python.org/psf/codeofconduct/