On Fri, 10 Apr 2020 23:33:28 +0100
Steve Dower <steve.do...@python.org> wrote:
> On 10Apr2020 2055, Antoine Pitrou wrote:
> > On Fri, 10 Apr 2020 19:20:00 +0200
> > Victor Stinner <vstin...@python.org> wrote:  
> >>
> >> Note: Cython and cffi should be preferred to write new C extensions.
> >> This PEP is about existing C extensions which cannot be rewritten with
> >> Cython.  
> > 
> > Using Cython does not make the C API irrelevant.  In some
> > applications, the C API has to be low-level enough for performance,
> > whether the application is written in Cython or not.  
> 
> It does to the code author.
> 
> The point here is that we want authors who insist on coding against the 
> C API to be aware that they have fewer compatibility guarantees [...]

Yeah, you missed the point of my comment here.  Cython *does* call into
the C API, and it is quite aggressive about performance optimizations,
too.  Saying "just use Cython" doesn't make the C API unimportant - it
just hides it from view.
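
For concreteness (a simplified sketch, not literal Cython output), the
C that Cython generates for something like "x = items[i]" is full of
exactly these calls:

    #include <Python.h>

    /* Roughly the kind of code Cython emits for "x = items[i]" on a
       known list; the real output adds bounds and type checks. */
    static PyObject *
    get_item(PyObject *items, Py_ssize_t i)
    {
        /* PyList_GET_ITEM reads ob_item directly and returns a
           borrowed reference: internals-dependent, for speed. */
        PyObject *x = PyList_GET_ITEM(items, i);
        Py_INCREF(x);
        return x;
    }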

> - maybe 
> even to the point of needing to rebuild for each minor version if you 
> want to insist on using macros (i.e. anything starting with "_Py").

If there's still a way for C extensions to get at now-private APIs,
then the PEP fails to convey that, IMHO.

> >> **Backward compatibility:** backward incompatible on purpose. Break the
> >> limited C API and the stable ABI, with the assumption that `Most C
> >> extensions don't rely directly on CPython internals`_ and so will remain
> >> compatible.  
> > 
> > The problem here is not only compatibility but potential performance
> > regressions in C extensions.  
> 
> I don't think we've ever guaranteed performance between releases. 
> Correctness, sure, but not performance.

That's a rather weird argument.  Just because you don't guarantee
performance doesn't mean it's ok to introduce performance regressions.

It's an especially weird argument to make when discussing a PEP where
most of the arguments are distant promises of improved performance.

> >> Fork and "Copy-on-Read" problem
> >> ...............................
> >>
> >> Solve the "Copy on read" problem with fork: store reference counter
> >> outside ``PyObject``.  
> > 
> > Nowadays it is strongly recommended to use multiprocessing with the
> > "forkserver" start method:
> > https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
> > 
> > With "forkserver", the forked process is extremely lightweight and
> > there are little savings to be made in the child.  
> 
> Unfortunately, a recommendation that only applies to a minority of 
> Python users. Oh well.

Which "minority" are you talking about?  Neither of us has numbers, but
I'm quite sure that the population of Python users calling into
multiprocessing (or a third-party library relying on multiprocessing,
such as Dask) is much larger than the population of Python users
calling fork() directly and relying on copy-on-write for optimization
purposes.

But if you have a different experience to share, please do so.
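
For readers unfamiliar with the problem being discussed: after fork(),
even purely read-only access from the child writes to ob_refcnt, so the
kernel has to un-share the page, hence "copy on read".  A minimal
illustration with real C API calls ("state" stands for any inherited
dict):

    /* In a fork()ed child, merely looking at an inherited object
       dirties its memory page: */
    PyObject *v = PyDict_GetItemString(state, "big_data");  /* borrowed */
    Py_INCREF(v);    /* this single write un-shares the whole page */
    /* ... read-only use of v ... */
    Py_DECREF(v);    /* ...and this write dirties it again */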

> Separating refcounts theoretically improves cache locality, specifically 
> the case where cache invalidation impacts multiple CPUs (and even the 
> case where a single thread moves between CPUs).

I'm a bit curious why it would improve, rather than degrade, cache
locality. If you take the typical example of the eval loop, an object
is incref'ed and decref'ed just about the same time that it gets used.
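
To make the locality question concrete, here is a minimal sketch of the
out-of-line scheme (the PEP doesn't spell one out; the names below are
invented for illustration):

    #include <stddef.h>

    /* Sketch only, not CPython code.  With the refcount moved out of
       the object, every incref touches a second, unrelated cache line
       in the side table, in addition to the object itself. */
    typedef struct {
        size_t refcnt_index;    /* slot in the side table below */
        /* type pointer, payload, ... */
    } obj_t;

    static long refcount_table[1 << 16];    /* separate memory area */

    static inline void
    obj_incref(const obj_t *o)
    {
        refcount_table[o->refcnt_index]++;  /* extra cache line touched */
    }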

I'll also note that the PEP proposes to remove APIs which return
borrowed references... which would increase the number of cases where
accessing an object implies updating its refcount.
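
With real C API calls: PyList_GetItem() returns a borrowed reference
and never writes to the item, while its new-reference counterpart
PySequence_GetItem() increfs on every access and requires a matching
Py_DECREF:

    /* Borrowed reference: no refcount write on access. */
    PyObject *item = PyList_GetItem(list, 0);

    /* New reference: two refcount writes per access. */
    PyObject *item2 = PySequence_GetItem(list, 0);
    if (item2 != NULL) {
        /* ... use item2 ... */
        Py_DECREF(item2);
    }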

Therefore I'm unconvinced that stashing refcounts in a separate memory
area would provide any CPU efficiency benefit.

> >> Debug runtime and remove debug checks in release mode
> >> .....................................................
> >>
> >> If the C extensions are no longer tied to CPython internals, it becomes
> >> possible to switch to a Python runtime built in debug mode to enable
> >> runtime debug checks to ease debugging C extensions.  
> > 
> > That's the one convincing feature in this PEP, as far as I'm concerned.  
> 
> Eh, this assumes that someone is fully capable of rebuilding CPython and 
> their own extension, but not one of their dependencies, [...]

You don't need to rebuild CPython if someone provides a binary debug
build (which would probably happen if such a build were compatible with
regular packages).  You also don't need to rebuild your own extension
to take advantage of the interpreter's internal correctness checks, if
the interpreter's ABI hasn't changed.

This is the whole point: being able to load an unmodified extension
(and unmodified dependencies) on a debug-checks-enabled interpreter.
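
The ABI point is concrete: historically, a debug build defined
Py_TRACE_REFS, which changes the layout of every single object
(simplified from CPython's object.h below).  Since 3.8, Py_DEBUG no
longer implies Py_TRACE_REFS, which is exactly what makes such a
compatible debug build feasible:

    /* Simplified from object.h: why old debug builds broke the ABI. */
    typedef struct _object {
    #ifdef Py_TRACE_REFS
        struct _object *_ob_next;   /* linked list of all live objects,
                                       present only in traced builds */
        struct _object *_ob_prev;
    #endif
        Py_ssize_t ob_refcnt;
        PyTypeObject *ob_type;
    } PyObject;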

> and this code 
> that they're using doesn't have any system dependencies that differ in 
> debug builds (spoiler: they do).

Are you talking about Windows?  On non-Windows systems, I don't think
there are "system dependencies that differ in debug builds".

Regards

Antoine.
