[Python-Dev] Re: Fwd: Python 3.11 performance with frame pointers

2023-01-04 Thread Gregory P. Smith
I suggest re-posting this on discuss.python.org, as more of the engaged,
active core devs will pay attention to it there.

On Wed, Jan 4, 2023 at 11:12 AM Daan De Meyer 
wrote:

> Hi,
>
> As part of the proposal to enable frame pointers by default in Fedora
> (https://fedoraproject.org/wiki/Changes/fno-omit-frame-pointer), we
> did some benchmarking to figure out the expected performance impact.
> The performance impact was generally minimal, except for the
> pyperformance benchmark suite where we noticed a more substantial
> difference between a system built with frame pointers and a system
> built without frame pointers. The results can be found here:
> https://github.com/DaanDeMeyer/fpbench (look at the mean difference
> column for the pyperformance results where the percentage is the
> slowdown compared to a system built without frame pointers). One of
> the biggest slowdowns was on the scimark_sparse_mat_mult benchmark
> which slowed down 9.5% when the system (including python) was built
> with frame pointers. Note that these benchmarks were run against
> Python 3.11 on a Fedora 37 x86_64 system (one built with frame
> pointers, another built without frame pointers). The system used to
> run the benchmarks was an Amazon EC2 machine.
>
> We did look a bit into the reasons behind this slowdown. I'll quote
> the investigation by Andrii on the FESCo issue thread here
> (https://pagure.io/fesco/issue/2817):
>
> > So I did look a bit at Python with and without frame pointers, trying to
> > understand the pyperformance regressions.
>
> > First, perf data suggests that a big chunk of CPU time is spent in
> > _PyEval_EvalFrameDefault, so I looked specifically into it (also, we had
> > to use DWARF mode for perf for an apples-to-apples comparison, and a bunch
> > of stack traces weren't symbolized properly, which again reminds us why
> > having frame pointers is important).
>
> > perf annotation of _PyEval_EvalFrameDefault didn't show any obvious hot
> > spots; the work seemed to be distributed pretty similarly with or without
> > frame pointers. Scrolling through the _PyEval_EvalFrameDefault disassembly
> > also showed that the instruction patterns between the fp and no-fp
> > versions are very similar.
>
> > But here are a few interesting observations.
>
> > The size of the _PyEval_EvalFrameDefault function specifically (all the
> > other functions didn't change much in that regard) increased very
> > significantly, from 46104 to 53592 bytes - a considerable ~16% increase.
> > Looking deeper, I believe it's all due to more stack spills and reloads,
> > because one less register is available to keep local variables in
> > registers instead of on the stack.
>
> > Looking at the _PyEval_EvalFrameDefault C code, it is one humongous
> > function with a gigantic switch statement that implements the Python
> > instruction handling logic. So the function itself is big and it has
> > a lot of local state in different branches, which to me explains why
> > there is so much stack spill/reload.
>
> > Grepping for instructions of the form mov -0xf0(%rbp),%rcx or mov
> > 0x50(%rsp),%r10 (and their reverse variants), I see that there is a
> > substantial amount of stack spill/reload in the _PyEval_EvalFrameDefault
> > disassembly already in the default no-frame-pointer variant (1870 out of
> > 11181 total instructions in that function, 16.7%), and it increases
> > further in the frame pointer version (2341 out of 11733 instructions, 20%).
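
A rough way to reproduce such a count, as a Python sketch (the binary path
and the regex are assumptions, not the exact grep used above):

import re
import subprocess

# Disassemble the interpreter and count moves using %rsp/%rbp offsets,
# a rough proxy for stack spill/reload instructions.
asm = subprocess.run(["objdump", "-d", "./python"],
                     capture_output=True, text=True).stdout
spill_like = re.compile(r"\bmov\S*\s+\S*0x[0-9a-f]+\(%r[sb]p\)")
print(sum(1 for line in asm.splitlines() if spill_like.search(line)))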
>
> > One more interesting observation. With no frame pointers, GCC generates
> > stack accesses using %rsp with small positive offsets, which results in a
> > pretty compact binary instruction representation, e.g.:
> >
> > 0x001cce40 <+44160>: 4c 8b 54 24 50          mov    0x50(%rsp),%r10
>
> > This uses 5 bytes. But if frame pointers are enabled, GCC switches to
> > using %rbp-relative offsets, which are all negative. And that seems to
> > result in much bigger instructions, now taking 7 bytes instead of 5:
> >
> > 0x001d3969 <+53065>: 48 8b 8d 10 ff ff ff    mov    -0xf0(%rbp),%rcx
>
> > I found it pretty interesting. I'd imagine GCC should be capable of
> > keeping %rsp-based addressing just fine regardless of %rbp and saving on
> > instruction sizes, but apparently it doesn't. Not sure why. But this
> > increase in instruction size, coupled with the increase in the number of
> > spills/reloads, actually explains the huge increase in the byte size of
> > _PyEval_EvalFrameDefault: (2341 - 1870) * 7 + 1870 * 2 = 7037 (2 extra
> > bytes for each of the existing 1870 instructions that were switched from
> > %rsp + positive offset to %rbp + negative offset, plus 7 bytes for each
> > of the 471 new instructions). I'm no compiler expert, but it would be
> > nice for someone from the GCC community to check this as well (please CC
> > relevant folks, if you know them).
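
The arithmetic checks out; as a quick sketch using the numbers quoted above:

# Counts quoted above: spill/reload instructions, no-fp vs fp builds.
spills_nofp, spills_fp = 1870, 2341
new_insns = spills_fp - spills_nofp        # 471 additional spill/reload insns
extra = new_insns * 7 + spills_nofp * 2    # 7 bytes per new insn, +2 per widened one
print(new_insns, extra)                    # 471 7037
# The observed growth of the function was 53592 - 46104 = 7488 bytes,
# so this accounting explains most of it.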
>
> > In summary, to put it bluntly, there is just more work for the CPU in
> > saving/restoring state to/from the stack. But I don't think the
> > _PyEval_EvalFrameDefault example is typical of how application
> > code is written, 

[Python-Dev] Re: Switching to Discourse

2022-12-02 Thread Gregory P. Smith
On Thu, Dec 1, 2022 at 8:37 AM Victor Stinner  wrote:

>
> Should we *close* the python-dev mailing list?
>

I'd be in favor of this. Or at least setting up an auto-responder
suggesting people post on discuss.python.org instead.

-gps


[Python-Dev] Re: RFC: expose ssl internals for use by ctypes/cffi

2022-11-30 Thread Gregory P. Smith
On Wed, Nov 30, 2022 at 12:47 PM Steve Dower  wrote:

> On 11/30/2022 4:52 PM, chris...@weinigel.se wrote:
> > Does this seem like a good idea?  As I said, I feel that it is a bit
> ugly, but it does mean that if someone wants to use some
> SSL_really_obscure_function in libcrypto or libssl they can do that without
> having to rebuild all of CPython themselves.
>
> Broadly, no, I don't think it's a good idea. We don't like encouraging
> users to do things that make it hard to support them in the future.
>
> Nonetheless, it's one that I've had to do, and so realistically I think
> it's okay to *enable* the hack without endorsing it. This is one of the
> reasons I switched the Windows builds to dynamically linked OpenSSL
> builds (they used to be statically linked, which meant there was no way
> to get at the unused exports). So now you can use `import _ssl;
> ctypes.CDLL("libssl-1_1")` to get at other exports from the module if
> you need them, and there's a similar trick to get the raw handle that I
> don't recall off the top of my head.
>
> But the only reason I'd ever want to document this is to tell people not
> to rely on it. If you control your environment well enough that you can
> guarantee it'll work for you, that's great. Nobody else should ever
> think they're doing the "right thing".
>
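
For concreteness, the hack Steve describes looks roughly like this on a
Windows build that dynamically links OpenSSL 1.1 (the DLL name and the
chosen function are assumptions; adjust for your platform and OpenSSL
version):

import _ssl     # ensures the interpreter's bundled OpenSSL DLL is loaded
import ctypes

libssl = ctypes.CDLL("libssl-1_1")
libssl.TLS_method.restype = ctypes.c_void_p
print(hex(libssl.TLS_method()))  # address of OpenSSL's TLS_method table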

+1 ... and in general if you want access to other OpenSSL APIs not already
in the ssl module, getting them via non-stdlib packages on PyPI would be a
better idea.

https://pypi.org/project/cryptography/ is very well supported.
https://pypi.org/project/oscrypto/ exists and is quite interesting.
the old https://pypi.org/project/M2Crypto/ package still exists and seems
to be maintained (wow).

More context: We don't like the ssl module in the standard library - it is
already too tightly tied to OpenSSL:
https://discuss.python.org/t/our-future-with-openssl/21486

So if you want specific OpenSSL APIs that are not exposed, seeking to get
them added to the standard library, where they would then become features
that need to be supported for a very long time, is going to be the most
difficult approach, as there'd need to be a very good reason to have them in
the stdlib. Third party libraries that can provide what you need, or
rolling your own libssl API wrappings however you choose to implement them,
are better bets.

-Greg


[Python-Dev] Re: [CVE-2022-37454] SHA3 vulnerability and upcoming Python patches for 3.7 - 3.10

2022-11-07 Thread Gregory P. Smith
The patches to 3.6-3.10 have been merged, which means they will go out in
the next Python patch release for those versions. i.e.:
https://github.com/python/cpython/issues/98517

You can see the planned schedule of those on
https://peps.python.org/pep-0619/ and related similar PEPs for older
Python versions (I never remember PEP numbers; I just google for "python
3.8 release schedule" to get to such a PEP myself). For 3.9 and earlier,
which are in "as-needed" security mode and no longer ship binaries, I expect
we'll get to it within the next month. The patches are public and anyone
can apply them on their own. We've got other fixes for other issues coming,
so rolling a source-only security release for every single issue isn't a
great use of time.

I personally didn't feel this one was urgent enough to ask anyone to spend
time doing an emergency security release, as triggering the crash requires
sending multiple gigabytes of data into a SHA3 hash function in a single
.update() method call. That seems like a rare code pattern. How many
applications ever do that, versus doing I/O in smaller chunks with more
frequent .update() calls?
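
For contrast, the common chunked pattern looks like this (a minimal sketch;
the function name and the 1 MiB chunk size are arbitrary):

import hashlib

def sha3_of_file(path, chunk_size=1 << 20):
    # Each .update() call sees at most 1 MiB, far below the multi-gigabyte
    # input needed to trigger the CVE-2022-37454 overflow.
    h = hashlib.sha3_256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()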

Linux and other OS distros that bundle Python manage their own patches to
their runtimes, and their security folks may need to be reminded to pick
these patches up if they have not already done so. Their schedules are
independent; they're likely to rapidly patch anything they agree is
a security issue. Distros are also likely the only ones who would
apply a backport to 3.6. Anything shipping Python 3.8 or earlier *should*
have an interest in patching.

FWIW, the forum I would've used for this is the GitHub issue itself, as that
is the canonical discussion.

-gps

On Mon, Nov 7, 2022 at 11:52 AM mark_topham--- via Python-Dev <
python-dev@python.org> wrote:

> I’m looking for help understanding how Python will release fixes related
> to the SHA3 critical security vulnerability (CVE-2022-37454).  I’ve tried
> to figure this out myself, but I’m far from a Python expert and I’m not
> sure where else I should look.
>
> Apologies in advance if this is the wrong place to ask - if it is, a
> redirect to the correct place would be most appreciated.
>
> Here’s what I’ve found so far:
>
> * Python versions 3.6 through 3.10 appear to be affected
>* 3.6 is end of life, so no fix is expected
>* A code fix appears to have been applied to 3.7 through 3.10
> https://github.com/python/cpython/issues/98517
>* 3.9 and 3.10 by default use OpenSSL 1.1.1+ if it's available,
> appearing to default to the builtin, vulnerable SHA3 implementation if
> OpenSSL is not found (if there’s an exception)
>   * 3.9 introduced this change via bpo-37630 in release 3.9.0 beta1
>   * 3.10 appears to have had this functionality since it was
> originally released
> * 3.11 uses tiny_sha3 and AFAICT was never affected by the CVE
>
> But what I’m having trouble figuring out is when/how these fixes will
> become generally available and ready for users of Python to download.
>
>
> * When will patched releases for Python 3.7-3.10 be released?
> * If pending releases are not in the release pipeline, what other patching
> opportunities exist?
>
> Ultimately I’m trying to set patching expectations for my company’s
> engineering teams who are still running vulnerable versions of Python.
>
> More notes around what i’ve found, in case it helps clarify my questions:
> From the Python project GitHub I can see gh-98517 to fix the buffer
> overflow in Python’s internal _sha3 module (CVE-2022-37454) has been
> committed to the Python 3.7 - 3.10 branches.  I understand that for Python
> releases 3.9 and 3.10 if one is using the OpenSSL 1.1.1+ sha3 modules
> instead of the internal _sha3 module that is already a mitigation.  I also
> understand that Python 3.11 and later has switched to using tiny_sha3, and
> no longer relies on the vulnerable _sha3 module.
>
> Any information you could point me at would be most helpful.  If there is
> a more ideal forum to raise this question, please redirect me there.
>
> Thank you in advance


[Python-Dev] Re: Embedded Python Memory Leaks

2022-10-06 Thread Gregory P. Smith
On Thu, Oct 6, 2022 at 12:48 PM  wrote:

> Hi Python team
>
> Is there any plan to fix memory leaks when embedding Python in a C/C++
> application?
>

Please file issues on github with details if you find problems.

-gps


[Python-Dev] Re: [Python-Help] Unable to bootstrap Python 3 install

2022-06-15 Thread Gregory P. Smith
-cc:help (bcc)

FWIW the thing to do to move this forward is to open a new PR. Patches
attached to an email are lost like tears in the rain.

What you describe, cross compilation where the host and target triples
appear the same but use different libraries, is a valid cross compilation
case. But it doesn't surprise me that there are bugs, given most people
never have such a setup. Even in an "every build should be a cross build"
world (which we sadly don't have), this breakage might not show up.

-gps

On Wed, Jun 15, 2022 at 8:28 AM Victor Stinner  wrote:

> Hi,
>
> While this problem is causing you a lot of trouble, I have never had to
> cross compile Python, and I guess that's the case for most people.
> Changing the Python build system and distutils is stressful, since it
> can break Python for the majority of users, rather than leaving the
> minority of users with an old bug. So far, nobody was brave enough to
> "break" the system for cross compilation.
>
> Moreover, as far as I know, cross compiling Python works for the
> common case: different platform triplets. Only the corner case of the
> same triplet is causing trouble.
>
> https://github.com/python/cpython/issues/66913 doesn't explain how to
> reproduce the issue; it only gives some info about what doesn't work.
> I don't know how to reproduce the issue. Please comment on the issue.
>
> To cross compile Python, I found these documentations:
>
> * https://docs.python.org/dev/using/configure.html#cross-compiling-options
> * WASM: https://github.com/python/cpython/blob/main/Tools/wasm/README.md
>
> Currently, setup.py checks for:
>
> CROSS_COMPILING = ("_PYTHON_HOST_PLATFORM" in os.environ)
>
> Victor
>
>
> On Tue, Jun 14, 2022 at 1:49 AM Dave Blanchard  wrote:
> > Here's what's happening. This is a very old problem, reported ages ago,
> which has never been fixed. If I set the target arch to i686 (on an x86_64
> build system) Python will cross-compile and bootstrap just fine. But if the
> host and build triples are the same, then the problem will occur. Python
> seems to be ASSuming that if the build and host triples match (in this case,
> both being 'x86_64-linux-gnu') then the host and build libraries are
> binary-compatible--which is NOT actually the case when one is
> cross-compiling to a different libc, for example. Actually it's kind of
> brain dead how this is implemented.
> >
> > Some prior discussions/years-old bug reports:
> >
> > https://bugs.python.org/issue39399
> > https://bugs.python.org/issue22724
> > https://bugs.gentoo.org/705970
> >
> > In those links, various hacks are attempted with various degrees of
> success, but then Xavier de Gaye reworked the build system, apparently
> fixing the problem with this pull request in Dec. 2019:
> >
> > https://github.com/python/cpython/pull/17420
> >
> > Unfortunately he became annoyed in the comments, seemingly mostly due to
> the lack of interest from Python to actually do anything about this
> problem, and cancelled his pull request. His fix was never implemented, and
> Python cross-compiling remains broken to this day.
> >
> > I downloaded his patches, made a minor fix or two, and merged them all
> together into the one patch attached to this email. When applied to both my
> build system and target Python, this problem seems to be resolved, for me
> at least. I'll know more later tonight as I get further into the distro
> build process.
> >
> > It's surprising to hear that cross-compiling Python would be any kind of
> "unusual" thing, considering this is something that *always* has to be done
> any time one is bootstrapping anything on a new or different system. It
> surprises me further to see that Python actually requires the *minor*
> version number to be the same for bootstrapping, also. So not only is
> Python 3 essentially a different language from Python 2, each point release
> is different and incompatible also. Amazing.
> >
> > I guess this shouldn't be surprising, after all of the other
> questionable design decisions this project has made over the years. I
> probably also shouldn't be surprised to see such an incredibly important
> bug going unfixed after all this time. It's Python--by far the #1 biggest
> annoyance and pain in the ass out of the 1,000+ packages on my distro,
> ranking just above CUPS and Texlive.
> >
> > Dave
> >
> >
> > On Mon, 13 Jun 2022 16:12:26 -0500 (CDT)
> > Matthew Dixon Cowles  wrote:
> >
> > > Dear Dave,
> > >
> > > > Hello, I'm trying to cross compile a new version of my Linux system
> > > > with some upgraded packages, including a new Glibc.
> > >
> > > I've never had to do that and I am beginning to suspect, from the
> > > lack of replies here better than this one, that nobody else here has
> > > had to either.
> > >
> > > > I've hit a major snag with Python 3.7 (also tried 3.9 with same
> > > > result) which makes it through the compile OK, but then bombs when
> > > > running ensurepip at the end.
> > >
> > > If it were me, I'd set --with-ensurepip to no, 

[Python-Dev] Re: [OT] Re: Raw strings ending with a backslash

2022-05-28 Thread Gregory P. Smith
On Sat, May 28, 2022 at 12:55 PM Guido van Rossum  wrote:

>
> On Sat, May 28, 2022 at 12:11 MRAB wrote:
>
>> Names in Python are case-sensitive, yet the string prefixes are
>> case-/insensitive/.
>>
>> Why?
>
>
> IIRC we copied this from C for numeric suffixes (0l and 0L are the same;
> also hex digits, and presumably 0XA == 0xa) and then copied that for string
> prefixes without thinking about it much. I guess it’s too late to change.
>

Given that 99.99% of code uses lower case string prefixes, we *could* change
it; it'd just take a longer deprecation cycle - you'd probably want a few
releases where the upper case prefixes become an error in files without a
`from __future__ import case_sensitive_quote_prefixes`, rather than jumping
straight from a parse-time DeprecationWarning to repurposing the uppercase to
have a new meaning.  The inertia behind doing that over the course of 5+
years is high, implying that we'd need a compelling reason to orchestrate
it.  None has sprung up.
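
For concreteness, today's tokenizer treats all of these spellings as
identical:

# Prefix case is ignored; names elsewhere in the language are case-sensitive.
assert r"\d" == R"\d"
assert b"abc" == B"abc"
assert rb"\d" == Rb"\d" == rB"\d" == RB"\d"
assert f"{1 + 1}" == F"{1 + 1}"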

-gps


> —Guido
> --
> --Guido (mobile)


[Python-Dev] Re: Starting a new thread

2022-05-10 Thread Gregory P. Smith
On Tue, May 10, 2022 at 10:34 AM Barney Stratford <
barney_stratf...@fastmail.fm> wrote:

> > 1. Does t = target(...) start the thread? I think it does.
> I think it does too. In the commonest use case, immediately after creating
> a thread, you start it. And if you want to delay the thread but still use
> the decorator, then you can do that explicitly with some locks. In fact,
> it’s probably better to explicitly delay execution than have hidden
> requirements concerning the timing of thread creation and startup.
>
> > 2. Is it possible to create daemon threads?
> Not at the moment. I did think about this, but felt that simpler is
> better. Like you say, it’d be easy to add. In fact, I might just go ahead
> and add it to the PR in a bit. The simplest way to do it is probably to
> define a second decorator for daemonic threads.
>

If this is even to be added (I personally lean -1 on it), I suggest
intentionally not supporting daemon threads. We should not encourage their
use; they were a misfeature that in hindsight we should never have
created. Daemon threads can lead to very bad surprises upon interpreter
finalization - an unfixable problem given how daemon threads are defined to
behave.

> > 3. Can you create multiple threads for the same function? I assume t1,
> > t2, t3 = target(arg1), target(arg2), target(arg3) would work.
> That’s exactly what I had in mind. Make it so that thread creation and
> function call look exactly alike. You can call a function as many times as
> you want with whatever args you want, and you can create threads as often
> as you want with whatever args you want.
>
> There isn’t a single use case where the decorator is particularly
> compelling; rather, it’s syntactic sugar to hide the mechanism of thread
> creation so that code reads better.
>

This is my take as well. I don't like calling code that hides the fact that
a thread is being spawned. If you use this decorator and fail to give the
callable a name communicating that it spawns and returns a thread, you will
surprise readers of the calling code.

A nicer design pattern is to explicitly manage threads. Use
concurrent.futures.ThreadPoolExecutor. Or use the async stuff that Joao
mentioned or similar libraries. I think we already provide decent batteries
with the threading APIs.
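
A minimal sketch of that explicit style (the worker function here is a
stand-in for any blocking callable):

from concurrent.futures import ThreadPoolExecutor

def work(n):
    return n * n  # stands in for real blocking work

# Thread creation, dispatch, and joining are all explicit and visible:
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work, range(8)))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]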

-gps


[Python-Dev] Re: What is Modules/rotatingtree.c for?

2022-04-24 Thread Gregory P. Smith
On Sun, Apr 24, 2022 at 9:24 AM Patrick Reader <_...@pxeger.com> wrote:

> I've just noticed Modules/rotatingtree.{h,c}, which don't seem to be
> used anywhere. Are they just dead code? If so, is there a reason they
> haven't been removed?
>

grep -R rotatingtree ; grep -R _lsprof

rotatingtree is used by the _lsprof module which is the internal
implementation behind cProfile.
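
So any profiling run exercises it, e.g.:

import cProfile

# cProfile drives _lsprof, which keeps its per-function timing data in the
# rotatingtree structure.
cProfile.run("sum(i * i for i in range(100_000))")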

-gps




[Python-Dev] Re: Proposal to deprecate mailcap

2022-04-14 Thread Gregory P. Smith
+1, add it to the 3.11 deprecations and proactively reach out to the
mitmproxy owners.

(internal code search: aside from mitmproxy I only see a _single_ use of
this in our codebase, and it was simply convenient; it has a clear, simpler
alternative, assuming that ~2008-era code is even still in use)
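
For anyone unfamiliar with the module being deprecated, its whole API is
roughly this (a sketch; the MIME type and filename are arbitrary):

import mailcap

caps = mailcap.getcaps()  # parse ~/.mailcap, /etc/mailcap, etc.
command, entry = mailcap.findmatch(caps, "image/png", filename="/tmp/x.png")
print(command)  # e.g. 'xv /tmp/x.png', or None if nothing matches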

-gps


On Thu, Apr 14, 2022 at 11:49 AM Brett Cannon  wrote:

> A CVE has been opened against mailcap (see
> https://github.com/python/cpython/issues/68966 for details). I'm not
> aware of anyone trying to maintain the module and Victor did a search
> online and didn't find any use of the module in the top 5000 projects on
> PyPI (see the issue). The module is also under 300 lines of Python code
> (https://github.com/python/cpython/blob/main/Lib/mailcap.py),
> so vendoring wouldn't be burdensome.
>
> As such, I'm proposing we deprecate mailcap in 3.11 and remove it in 3.13.
> Any explicit objections?


[Python-Dev] Re: PEP 678: Enriching Exceptions with Notes

2022-04-11 Thread Gregory P. Smith
On Thu, Jan 27, 2022 at 10:10 AM Zac Hatfield Dodds <
zac.hatfield.do...@gmail.com> wrote:

> Hi all,
>
> I've written PEP 678, proposing the addition of a __note__ attribute
> which can be used to enrich displayed exceptions.  This is particularly
> useful with ExceptionGroup, or where exception chaining is unsuitable, and
> ensures that 'orphan' output is not produced if the annotated exception is
> caught.
>
> Link to the PEP: https://www.python.org/dev/peps/pep-0678/
>
> *Please, redirect all discussions to:*
> https://discuss.python.org/t/pep-678-enriching-exceptions-with-notes/13374
>

FYI - The Steering Council reviewed your updates and is **Accepting** the
latest iteration of *PEP 678: Enriching Exceptions with Notes*. :grin:

A short and sweet reply, we don't have more to add.
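
For reference, the API as accepted (and as it shipped in Python 3.11,
spelled add_note()/__notes__ rather than the earlier draft's __note__) is
used like this:

try:
    {}["config"]
except KeyError as exc:
    exc.add_note("while loading the demo settings")  # note shown in traceback
    raise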

-gps on behalf of the Python Steering Council


[Python-Dev] Re: About PEPs being discussed on Discourse

2022-04-07 Thread Gregory P. Smith
On Thu, Apr 7, 2022 at 4:31 PM Jean Abou Samra  wrote:

>
> I'm only a lurker here, but find the split between this mailing list
> and various Discourse categories a little confusing from the outside.
> As far as I understand, the Discourse server was originally an experiment.
> Today, it has grown far past the size of an experiment though. Are there
> any plans to retire either Discourse or the mailing list and use a
> unified communication channel? This is a curiosity question.
>

We feel it too. We've been finding Discourse more useful from a community
moderation and thread management point of view, as well as for its Markdown
text and code rendering - ideal for PEP discussions. Many of us expect
python-dev to wind up obsoleted by Discourse as a result. I encourage
everyone to use https://discuss.python.org/ first for dev-audience
communications, and I encourage lurkers and subscribers here to enable email
notifications for categories of interest over there.

-gps


[Python-Dev] Re: C API: Move PEP 523 "Adding a frame evaluation API to CPython" private C API to the internal C API

2022-04-04 Thread Gregory P. Smith
On Fri, Apr 1, 2022 at 2:06 AM Victor Stinner  wrote:

> Hi,
>
> Update on this issue: I merged my 2 PRs.
> https://bugs.python.org/issue46850
>
> The following APIs have been moved to the internal C API:
>
> - _PyFrameEvalFunction type
> - _PyInterpreterState_GetEvalFrameFunc()
> - _PyInterpreterState_SetEvalFrameFunc()
> - _PyEval_EvalFrameDefault()
>
> If you use any of these API in your debugger/profiler project, you
> have do add something like the code below to your project:
> ---
> #ifndef Py_BUILD_CORE_MODULE
> #  define Py_BUILD_CORE_MODULE
> #endif
> #include <Python.h>
> #if PY_VERSION_HEX >= 0x030B00A7
> #  include <internal/pycore_interp.h>  // _PyInterpreterState_SetEvalFrameFunc()
> #  include <internal/pycore_ceval.h>   // _PyEval_EvalFrameDefault()
> #endif
> ---
>
> Contact me if you need help to update your affected projects.
>
> IMO PEP 523 doesn't have to be updated since it already says that the
> APIs are private.
>

Thanks for bringing this up on python-dev, Victor. That was good. But the
point of the discussion should've been to continue working with people
based on the replies rather than proceeding to merge removals of the APIs
after people said they used them.  (echoing Steve and Petr here...)

We discussed this on the steering council today. These APIs were in a weird
state, and despite past decisions at the time of PEP 523 in 2016 they should
be treated as public-ish rather than entirely private, because we published
a document saying "here they are, use them!" and multiple projects have done
so to good effect.

For 3.11 we'd like those PRs reverted.  We see the following as the better
way forward for these APIs:

Add a new #define that can be set before the #include <Python.h> that
exposes APIs which are non-limited but stable within a bugfix/patch release
series (i.e.: Petr's earlier suggestion).
These would be the first to fall within it. To do so we should give these,
behind that #define, non-_-prefixed "public style" names, as these are
quasi-public and cannot be changed as readily as other true internals.  We
still, per the PEP, reserve the right to turn these into no-op, potentially
warning-setting APIs in a future release (likely 3.12?) as they are at
least documented as being unstable/private in the PEP.

So in 3.11 these should continue to exist as in 3.6-3.10:
- _PyFrameEvalFunction type
- _PyInterpreterState_GetEvalFrameFunc()
- _PyInterpreterState_SetEvalFrameFunc()
- _PyEval_EvalFrameDefault()

AND in 3.11:
 - #define-protected versions of those without the leading _ become
available.
 - (I'm intentionally not suggesting a #define name; y'all can pick
something)

In 3.12:
 - the _ prefixed versions can go away.  People using the APIs should've
updated their code to use the new #define and new names when building
against >=3.11 by then.
 - Whether the APIs continue to be as useful and act as originally claimed
in PEP 523 is up to the 3.12 implementors (out of scope for this thread).
They occupy a newly defined middle ground between the "forever" style
limited API and the "can break even on bugfix/patch release" internal API
that wasn't a concept for us in 2016 when PEP 523 was written.

Why? We're being conservative with things in active use that weren't
*really* private. And, similar to what Mark Shannon and Petr said, we *do
not* want people to #define Py_BUILD_CORE_MODULE and start poking at
internals in arbitrary ways. That exposes a whole pile of other things for
(ab)use that are even more unstable. Avoiding that helps avoid the
temptation to go wild and helps us identify users.

-gps (steering council hat on)


> Since these APIs were added by PEP 523, I documented these changes in
> What's New in Python 3.11 > C API > Porting to Python 3.11, even if
> these APIs are private.
>
> Victor
> --
> Night gathers, and now my watch begins. It shall not end until my death.


[Python-Dev] Re: Changing unittest verbose output.

2022-03-27 Thread Gregory P. Smith
On Sun, Mar 27, 2022 at 2:50 PM Guido van Rossum  wrote:

> Hopefully it only prints the first line of the docstring?
>

Indeed, it's just the first line (traditionally a one line summary per
style).

https://github.com/python/cpython/blob/f6b3a07b7df60dc04d0260169ffef6e9796a2124/Lib/unittest/runner.py#L46


>
> On Sun, Mar 27, 2022 at 2:42 PM Gregory P. Smith  wrote:
>
>> For many of us, this isn't a nuisance. It is a desirable feature.
>>
>> test_xxx functions and methods typically don't have docstrings at all, as
>> the majority of tests tend to be concise and obvious, with the function
>> name itself describing the purpose.  When we added printing of the
>> docstring, it was useful: the docstring lets you expand upon the purpose of
>> the test more than a reasonable name can.  We do this many times in our own
>> test suite.  When running the tests, you see the docstring and are given
>> more context as to what the test is about.  This can be useful when
>> triaging a failure before you've even loaded the source.
>>
>> I don't doubt that someone writes thesis defenses and stores them in
>> their TestCase.test_method docstrings. I'm just saying that is not the norm.
>>
>> I'd accept a PR adding another command line switch for unittest to
>> disable docstring printing in verbose mode.
>>
>> -gps
>>
>>
>> On Sun, Mar 27, 2022 at 12:59 PM Barry Warsaw  wrote:
>>
>>> On Mar 26, 2022, at 17:48, Itay Yeshaya  wrote:
>>> >
>>> > When running unittest with the -v flag, if a test has errors, and has
>>> a docstring, the test name is shown on one line, and the docstring is shown
>>> on the next line with the `ERROR` word.
>>>
>>> This has been a long-standing nuisance, but I’m like Guido.  I pretty
>>> much use pytest for everything these days, except for maybe
>>> unittest.mock.patch.
>>>
>>> -Barry
>>>
>>>
>
>
> --
> --Guido van Rossum (python.org/~guido)
> *Pronouns: he/him **(why is my pronoun here?)*
> <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
>


[Python-Dev] Re: Changing unittest verbose output.

2022-03-27 Thread Gregory P. Smith
For many of us, this isn't a nuisance. It is a desirable feature.

test_xxx functions and methods typically don't have docstrings at all, as
the majority of tests tend to be concise and obvious, with the function
name itself describing the purpose.  When we added printing of the
docstring, it was useful: the docstring lets you expand upon the purpose of
the test more than a reasonable name can.  We do this many times in our own
test suite.  When running the tests, you see the docstring and are given
more context as to what the test is about.  This can be useful when
triaging a failure before you've even loaded the source.
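
A minimal sketch of what this looks like (module name and output format are
approximate):

import unittest

class ExampleTest(unittest.TestCase):
    def test_addition(self):
        """Integers add without overflow surprises."""
        self.assertEqual(2 + 2, 4)

# $ python -m unittest -v example
# test_addition (example.ExampleTest)
# Integers add without overflow surprises. ... ok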

I don't doubt that someone writes thesis defenses and stores them in their
TestCase.test_method docstrings. I'm just saying that is not the norm.

I'd accept a PR adding another command line switch for unittest to disable
docstring printing in verbose mode.

-gps


On Sun, Mar 27, 2022 at 12:59 PM Barry Warsaw  wrote:

> On Mar 26, 2022, at 17:48, Itay Yeshaya  wrote:
> >
> > When running unittest with the -v flag, if a test has errors, and has a
> docstring, the test name is shown on one line, and the docstring is shown
> on the next line with the `ERROR` word.
>
> This has been a long-standing nuisance, but I’m like Guido.  I pretty much
> use pytest for everything these days, except for maybe unittest.mock.patch.
>
> -Barry
>
>


[Python-Dev] Accepting PEP 655 - Marking individual TypedDict items as required or potentially-missing

2022-03-21 Thread Gregory P. Smith
On behalf of the Python Steering Council, we are pleased to accept PEP 655
- Marking individual TypedDict items as required or potentially-missing
(https://peps.python.org/pep-0655/).

Thanks for considering the potential for confusion with Optional during the
design and recommending best practices in the “how to teach” section.

A couple of SC members have tried using TypedDict and found it painful; this
PEP helps.
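
For reference, the accepted markers as they shipped in typing for 3.11
(available earlier via typing_extensions):

from typing import TypedDict, Required, NotRequired

class Movie(TypedDict, total=False):
    title: Required[str]    # must be present despite total=False
    year: NotRequired[int]  # may be omitted

m: Movie = {"title": "Blade Runner"}  # OK; "year" is optional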

-gps for the steering council


[Python-Dev] Accepting PEP 675 - Arbitrary Literal String Type

2022-03-21 Thread Gregory P. Smith
On behalf of the Python Steering Council, we are accepting PEP 675 -
Arbitrary Literal String Type (https://peps.python.org/pep-0675/).

TL;DR - PEP 675 allows type checkers to help prevent bugs allowing
attacker-controlled data to be passed to APIs that declare themselves as
requiring literal, in-code strings.

This is a very thorough PEP with a compelling and highly relevant set of
use cases. If I tried to call out all the things we like about it, it’d
turn into a table of contents. It is long, but everything has a reason to
be there. :)

Once implemented, we expect it to be a challenge to tighten widely used
existing APIs that accept str today into LiteralString, for the practical
reason that existing code calling the unrestricted APIs naturally does
whatever it likes. The community would benefit from anyone who attempts to
move a widely used existing str API to LiteralString sharing their
experiences, successful or not.
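
A small sketch of the kind of API the PEP targets (the function and query
are illustrative, not from the PEP):

from typing import LiteralString

def run_query(sql: LiteralString) -> None:
    ...  # hand the string to a database driver

run_query("SELECT * FROM users")     # OK: a literal
table = input()
run_query(f"SELECT * FROM {table}")  # a type checker flags this; it still runs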

-gps for the steering council


[Python-Dev] Re: Need Help

2022-02-25 Thread Gregory P. Smith
The embedded copy of expat was recently upgraded to 2.4.6 in
https://bugs.python.org/issue46794 including on the 3.9 branch.  That will
wind up in 3.9.11 per https://www.python.org/dev/peps/pep-0596/.

If you are using 3.9.5 you may also have a host of other potential security
issues that updating to a recent 3.9.x will address. If you are using 3.9.5
as provided by a Linux or similar OS distribution, I'd expect the OS distro
packager to be applying relevant patches to it themselves (some distros
link to their own managed libexpat instead of using the embedded version)
even if they don't change the version number.
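
To check which expat your own interpreter actually carries, a quick sketch:

import pyexpat

print(pyexpat.EXPAT_VERSION)  # e.g. 'expat_2.4.6' on a patched build
print(pyexpat.version_info)   # e.g. (2, 4, 6)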

-gps

On Fri, Feb 25, 2022 at 11:43 AM Prasad, PCRaghavendra <
pcraghavendra.pra...@dell.com> wrote:

> Hi All,
>
> we are using the python 3.9.5 version in our application.
>
>
>
> In 3.9.5 it is using the libexpat 2.2.8 version; as part of a Black Duck
> scan, it is showing critical vulnerabilities in libexpat 2.2.8.
>
>
>
> (CVE-2022-22824
>
> CVE-2022-23990
>
> CVE-2022-23852
>
> CVE-2022-25236
>
> CVE-2022-22823)
>
>
> When there are any issues (security issues) in external modules like
> OpenSSL, bzip2, and zlib, we were able to get the latest code and build, as
> it is straightforward; but libexpat is a module internal to Python, and
> we don't see how we can upgrade libexpat alone in Python 3.9.5.
>
> So is there a way we can build Python (e.g. 3.9.5), which is already
> carrying libexpat 2.2.8, so that it will link to the latest libexpat
> version (2.4.6 - fixed security issues)?
>
> Another solution we came across when searching the net and the mailing
> lists is that we need to wait for Python 3.9.11, where this will be linked
> against libexpat 2.4.6.
>
> Any inputs on this will be helpful.
>
> Thanks,
>
> Raghu
>


[Python-Dev] Re: Require a C compiler supporting C99 to build Python 3.11

2022-02-24 Thread Gregory P. Smith
On Thu, Feb 24, 2022 at 3:27 PM Victor Stinner  wrote:

> On Thu, Feb 24, 2022 at 11:10 PM Barry  wrote:
> > > "Python 3.11 and newer versions use C11 without optional features. The
> > > public C API should be compatible with C++."
> > > https://github.com/python/peps/pull/2309/files
> >
> > Should is often read as meaning optional when writing specs.
> >  Can you say “must be compatible with C++”.
>
> I plan to attempt to write an actual test for that, rather than a
> vague sentence in a PEP. For now, "should" is a deliberate choice: I
> don't know exactly which C++ version should be targeted and if it's
> really an issue or not.
>

Agreed.  "should" is good because we're not even clear if we currently
actually comply with C++ standards.  i.e. https://bugs.python.org/issue40120
suggests we technically may not for C++ (it is not strictly a superset of C
as we all like to pretend), though for practical purposes regardless of
standards compilers tend to allow that.

We're likely overspecifying in any document we create about what we require,
because the only definition any of us are actually capable of making for
what we require is "does it compile with this compiler on this platform? If
yes, then we appear to support it. Can we guarantee that? Only with
buildbots or other CI." We're generally not versed in specific language
standards (aside from compiler folks, who is?), and compilers don't comply
strictly with all the shapes of those anyways for either practical or
hysterical reasons. So no matter what we claim to aspire to, reality is
always murkier.  A document about requirements is primarily useful to give
guidance to what we expect to be aligned with and what is or isn't allowed
to be used in new code.  Our code itself always has the final say.

-gps


> For example, C++20 reserves the "module" keyword, whereas Python uses
> it in its C API. Example:
>
> PyAPI_FUNC(int) PyModule_AddType(PyObject *module, PyTypeObject *type);
>
> See:
>
> * https://bugs.python.org/issue39355
> * https://github.com/pythoncapi/pythoncapi_compat/issues/21
>
> --
>
> I made a change in the datatable project to add Python 3.11 support
> using the pythoncapi_compat.h header file. Problem: this *C* header
> file produced new warnings in datatable extension module which with
> built with a C++ compiler:
> https://github.com/h2oai/datatable/pull/3231#issuecomment-1032864790
>
> Examples:
>
> | src/core/lib/pythoncapi_compat.h:272:52: warning: zero as null
> pointer constant [-Wzero-as-null-pointer-constant]
> ||| tstate->c_profilefunc != NULL);
> |^~~~
> |nullptr
>
> and
>
> | src/core/lib/pythoncapi_compat.h:170:12: warning: use of old-style
> cast [-Wold-style-cast]
> | return (PyCodeObject *)_Py_StealRef(PyFrame_GetCode(frame));
> |^   
>
> I made pythoncapi_compat.h compatible with C++ (fixing the C++ compiler
> warnings) by using nullptr and a reinterpret_cast<TYPE>(EXPR) cast if
> the __cplusplus macro is defined, or NULL and a ((TYPE)(EXPR)) cast
> otherwise.
>
> datatable also uses #include "Python.h". I don't know why there were only
> C++ compiler warnings on "pythoncapi_compat.h". Maybe it's because
> datatable only uses static inline functions from
> "pythoncapi_compat.h", but it may also emit the same warnings if
> tomorrow some static inline functions of "Python.h" are used.
>
> For now, I prefer to put a reminder in PEP 7 that the "Python.h" C API
> is consumed by C++ projects.
>
> Victor
> --
> Night gathers, and now my watch begins. It shall not end until my death.


[Python-Dev] Re: PEP 677 (Callable Type Syntax): Rejection notice.

2022-02-10 Thread Gregory P. Smith
On Thu, Feb 10, 2022 at 3:32 AM Shantanu Jain 
wrote:

> Hello!
>
>
> Thanks for the decision, the points raised mostly make sense to me.
> However, I find myself and a few others are a little confused by point 2. I
> can read it as saying the following perhaps slightly contradictory things:
>
>
> "It's good that PEP 677 is conservative and sticks to things Callable can
> do"
>
> "PEP 677 isn't necessary, since Callable can do everything currently
> proposed"
>
> "PEP 677 could be a slippery slope for further syntax expansions that can
> do things Callable cannot"
>
>
> Would the case for new syntax have been stronger if it was proposing
> something Callable couldn't do? E.g., is the concern something like "The
> cost of new syntax is best paid by expanding the realm of what is
> expressible. While we see how PEP 677 could lead to such expansion in the
> future, the merits of future expansion are currently uncertain and the
> current proposal is too costly discussed on its own merits"?
>
>
> Or is the concern forward compatibility in the eventuality of further
> syntax expansions? PEP 677 did discuss "extended syntax", which the
> proposed syntax would be forward compatible with.
> https://www.python.org/dev/peps/pep-0677/#extended-syntax-supporting-named-and-optional-arguments
>
>
> Or something else entirely! Would appreciate any clarity :-)
>

At least we're as consistent as the Zen of Python itself? ;)  We
collectively struggled with how to word that one. Sorry if it is confusing.

From my individual perspective, agreeing with point 2 came down to a belief
that if we accepted the simple syntax, a use would arise in the future
where it'd be desirable to gain something more complicated, even if only
"3%" of callable annotations use those *today*. So we expected to wind up
in future releases with a proposal to extend the 677 callable syntax to
support parsing a more complete def. Good design in that it appears
technically feasible, but perceived as bad in that the decision then would
be either to go forward into an even more complicated, even harder to
parse, world, or to back off and go another direction entirely, which would
leave the 677 callable shorthand as a partial expressive dead end. Think of
it as *attempting* not to accumulate too many ways to do the same thing in
the long term.

If the 677 proposed syntax had been expanded to include more def features
instead of being conservative, I think it would've been an even easier
rejection for many of us, as that would've triggered more "point 4" feelings
of visual clutter and cognitive load.

-gps



>
> Thank you!
>
> On Wed, 9 Feb 2022 at 19:45, Gregory P. Smith  wrote:
>
>> Thank you PEP authors for producing a very well written and laid out PEP.
>> That made it easy to understand the proposal, rationale, and alternatives
>> considered. We agree that the existing typing.Callable syntax, while
>> capable, is less than ideal for humans. That said, we deliberated last week
>> and ultimately decided to reject PEP-677 Callable Type Syntax.
>>
>> Why? Several considerations led us here:
>>
>> 1. We feel we need to be cautious when introducing new syntax. Our new
>> parser presents understandably exciting opportunities but we don’t want its
>> existence to mean we add new syntax easily. A feature for use only in a
>> fraction of type annotations, not something every Python user uses, did not
>> feel like a strong enough reason to justify the complexity needed to parse
>> this new syntax and provide meaningful error messages. Not only code
>> complexity, humans are also parsers that must look forwards and backwards.
>>
>> 2. While the current Callable[x, y] syntax is not loved, it does work.
>> This PEP isn’t enabling authors to express anything they cannot already.
>> The PEP explicitly chose be conservative and not allow more syntax to
>> express features going beyond what Callable supports. We applaud that
>> decision, starting simple is good. But we can imagine a future where the
>> syntax would desire to be expanded upon.
>>
>> 3. In line with past SC guidance, we acknowledge challenges when syntax
>> desires do not align between typing and Python itself. Each time we add
>> syntax solely for typing it shifts us further in the direction of typing
>> being its own mini-language so we aim to tread lightly in what gets added
>> here. Adopting PEP 677 would lock us into its syntax forever, potentially
>> preventing other future syntax directions.
>>
>> 4. We did not like the visual and cognitive consequence of multiple `->`
>> tokens in a def. Especially when code is not formatted nicely. Th

[Python-Dev] PEP 677 (Callable Type Syntax): Rejection notice.

2022-02-09 Thread Gregory P. Smith
Thank you PEP authors for producing a very well written and laid out PEP.
That made it easy to understand the proposal, rationale, and alternatives
considered. We agree that the existing typing.Callable syntax, while
capable, is less than ideal for humans. That said, we deliberated last week
and ultimately decided to reject PEP-677 Callable Type Syntax.
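
For context, the proposal's shorthand versus the existing spelling was
roughly:

from typing import Callable

# The spelling that remains after this decision:
handler: Callable[[int, str], bool]

# The shorthand PEP 677 would have introduced (rejected):
#   handler: (int, str) -> bool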

Why? Several considerations led us here:

1. We feel we need to be cautious when introducing new syntax. Our new
parser presents understandably exciting opportunities but we don’t want its
existence to mean we add new syntax easily. A feature for use only in a
fraction of type annotations, not something every Python user uses, did not
feel like a strong enough reason to justify the complexity needed to parse
this new syntax and provide meaningful error messages. Not only code
complexity, humans are also parsers that must look forwards and backwards.

2. While the current Callable[x, y] syntax is not loved, it does work. This
PEP isn’t enabling authors to express anything they cannot already. The PEP
explicitly chose to be conservative and not allow more syntax to express
features going beyond what Callable supports. We applaud that decision,
starting simple is good. But we can imagine a future where the syntax would
desire to be expanded upon.

3. In line with past SC guidance, we acknowledge challenges when syntax
desires do not align between typing and Python itself. Each time we add
syntax solely for typing it shifts us further in the direction of typing
being its own mini-language so we aim to tread lightly in what gets added
here. Adopting PEP 677 would lock us into its syntax forever, potentially
preventing other future syntax directions.

4. We did not like the visual and cognitive consequence of multiple `->`
tokens in a def. Especially when code is not formatted nicely. Though we
admit the correlation between Python typing and formatter users is high.

## Recommendation for future syntax enhancements:

When proposing a syntax change, low complexity is better. While not always
possible, it’s ideal if it could still be described using our old <=3.8
parser. It is important to have in mind that adding syntax that only our
modern PEG parser can handle could lead to greater cognitive load and
external tooling implementation costs.

This should not be read as a ban on PEG-only syntax, we just think it
should be used for broadly applicable features or else be relatively
unintrusive.

Thanks,
The 3.11 Python Steering Council


[Python-Dev] Re: Steering Council reply to PEP 670 -- Convert macros to functions in the Python C API

2022-02-09 Thread Gregory P. Smith
On Wed, Feb 9, 2022 at 8:54 AM Victor Stinner  wrote:

> On Wed, Feb 9, 2022 at 1:04 PM Petr Viktorin  wrote:
> > > Right now, a large number of macros cast their argument to a type. A
> > > few examples:
> > >
> > > #define PyObject_TypeCheck(ob, type) PyObject_TypeCheck(_PyObject_CAST(ob), type)
> > > #define PyTuple_GET_ITEM(op, i) (_PyTuple_CAST(op)->ob_item[i])
> > > #define PyDict_GET_SIZE(mp) (assert(PyDict_Check(mp)),((PyDictObject *)mp)->ma_used)
> >
> > When I look at the Rationale points, and for the first three of these
> > macros I didn't find any that sound very convincing:
> > - Functions don't have macro pitfalls, but these simple macros don't
> > fall into the pits.
> > - Fully defining the argument types means getting rid of the cast,
> > breaking some code that uses the macro
> > - Debugger support really isn't that useful for these simple macros
> > - There are no new variables
>
> Using a static inline function, profilers like Linux perf can count
> the CPU time spent in static inline functions (on each CPU instruction
> when using annotated assembly code). For example, you can see how much
> time (accumulated time) is spent in Py_INCREF(), to get an idea of
> the cost of reference counting in Python. That's not possible when using
> macros.
>
> For debuggers, you're right that Py_INCREF() and PyTuple_GET_ITEM()
> macros are very simple and it's not so hard to guess that the debugger
> is executing their code in the C code or the assembly code. But the
> purpose of PEP 670 is to convert far more complex macros. I wrote a PR
> to convert the unicodeobject.h macros; IMO they are some of the worst
> macros in the Python C API:
> https://github.com/python/cpython/pull/31221
>
> I always wanted to convert them, but some core devs were afraid of
> performance regressions. So I wrote a PEP to prove that there is no
> impact on performance.
>
> IMO the new unicodeobject.h code is way more readable. I added two
> "kind" variables which have a well defined scope. The current macros avoid
> any such variable because of a macro pitfall: a name conflict if there is
> already a "kind" variable where the macro is used.
>
> The conversion to static inline functions also detected a bug with "const
> PyObject*": using PyUnicode_READY() on a const Unicode string is
> wrong, this function modifies the object if it's not ready (WCHAR
> kind). It also detected bugs on "const void *data" used to *write*
> into string characters (PyUnicode_WRITE).
>

Nice example PR.  For now what we're suggesting is to proceed with changes
that cannot lead to new compilation warnings (failures in the widely used
-Werror mode).  Yes, that may still leave a lot of
desirable cleanup to be done.  But we don't want to block everything while
discussing the rest. Incremental improvement is good even if not everything
is being done yet.

Two differing examples from that PR 31221:

Hold off as unsafe for now: Macros that do things like
(PyWhateverObject*)(op), such as PyUnicode_CHECK_INTERNED(op), should not
have the casting part of the macro replaced yet. Doing so could result in a
warning or failure at -Werror compile time if someone was not using a
PyObject*.  Code is *supposed* to, but the cast means anyone could've used
PyUnicodeObject or whatnot itself.  Perhaps use a hybrid approach when
feasible, similar to:
  #define PyUnicode_CHECK_INTERNED(op) \
      _PyUnicode_CHECK_INTERNED((PyASCIIObject *)(op))

That should make it safe.  And I believe you do mention this technique as
something to be used in the PEP.

Safe: PyUnicode_WRITE() in that PR. At first glance it is full of casts
on its data and value parameters, so it raises suspicion. But further
analysis shows that data becomes a void*, so there is no casting issue there
unless someone were passing in a non-pointer, which isn't something code
should rightfully *ever* have done. And value would only be an issue if
someone were passing in a non-integer type; that also seems exceedingly
unlikely, as there'd never be a point in writing code like that. So that
kind of change is fine to proceed with.

> The "massive change to working code" part is important. Such efforts
> > tend to have unexpected issues, which have an unfortunate tendency to
> > surface before release and contribute to release manager burnout.
>
> Aren't you exaggerating a bit? Would you mind elaborating? Do you
> have examples of issues caused by converting macros to static inline
> functions?
>

Not quite the same but I've got a related example similar to what macros
casting pointer becoming a function accepting PyObject* without a cast
*could* do:

Between 3.6 and 3.7 we added const to a number of our public Python C APIs.

Migration to 3.7 required updating all sorts of C and C++ extension module
code to be pedantically correct, up through its call chain in some cases
with matching const declarations on types. (including conditional
compilation based on the Python version to support compilation under both
during 

[Python-Dev] Re: Should we require IEEE 754 floating-point for CPython?

2022-02-08 Thread Gregory P. Smith
On Tue, Feb 8, 2022 at 2:41 PM Steven D'Aprano  wrote:

> On Mon, Feb 07, 2022 at 06:23:52PM +, Mark Dickinson wrote:
>
> > - Should we require IEEE 754 floating-point for
> CPython-the-implementation?
> > - Should we require IEEE 754 floating-point for Python-the-language?
>
> If the answer to those questions is Yes, that rules out using Unums,
> posits, sigmoid numbers etc as the builtin float. (The terminology is a
> bit vague, sorry.) Do we want that?
>

It does not rule anything else out should alternatives become viable.  This
is just a statement that to build CPython we require IEEE 754 support.  It
does not say anything about how our Python float type is implemented
internally.

Should a meaningful large-OS platform come along that promotes the use of a
different format available from C, we could make use of that and loosen the
policy as needed.
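
As a rough illustration (my sketch, not part of any formal policy text), the
properties such a requirement cares about are already observable at runtime
from documented sys.float_info fields:

```
import sys

# A minimal sketch: does float look like an IEEE 754 binary64 ("double")?
fi = sys.float_info
looks_like_ieee754_double = (
    fi.radix == 2
    and fi.mant_dig == 53
    and fi.max_exp == 1024
    and fi.min_exp == -1021
)
print(looks_like_ieee754_double)  # True on mainstream platforms today
```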

What updating our requirement for CPython would mean is that the
"unknown_format" code in
https://github.com/python/cpython/blob/main/Objects/floatobject.c (likely
unexercised outside our own __set_format__-using test suite) could go away
until such time as a platform with an actual different format springs into
existence.

Driveby floatobject.c code inspection: it is odd that we do the
ieee_be/ieee_le/unknown conditionals as a runtime check rather than a
configure-time check, as that means we compile the code for three
implementations into our float implementation on all platforms despite each
platform using only one. I guess that was done for testing purposes,
presumably in the 1.x era when viable platforms were weirder, before
standards gained traction. Today I'd call that dead code bloat.

-gps


>
> https://ieeexplore.ieee.org/document/808
>
> https://en.wikipedia.org/wiki/Unum_%28number_format%29
>
> https://github.com/interplanetary-robot/SigmoidNumbers
>
> Posits are hyped as "better than IEEE-754", I have no idea if it is all
> hype or if they actually are better or just different.
>
>
> --
> Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/N3Z5PFOALORZ4Z7KGNSHJ7QL47D4SYRJ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Should we require IEEE 754 floating-point for CPython?

2022-02-08 Thread Gregory P. Smith
On Tue, Feb 8, 2022 at 2:25 PM Steven D'Aprano  wrote:

> On Mon, Feb 07, 2022 at 05:35:17PM -0800, Gregory P. Smith wrote:
>
> > CPython: yes.  we use a double.
> > Python the language: no.  (float is single precision on many micropython
> > platforms as it saves precious ram and performance, plus microcontroller
> > fpu hardware like an M4 is usually single precision 32bit)
>
> If we are to *officially* support non-double floats, it would be nice if
> sys.float_info were to tell us explicitly how wide the floats are rather
> than having to try to reverse engineer it from the other information
> there.
>
> A floating point expert can probably look at this:
>
> sys.float_info(max=1.7976931348623157e+308, max_exp=1024,
> max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021,
> min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16,
> radix=2, rounds=1)
>
> and immediately recognise that those values imply a 64-bit float, but I
> expect most people will not. If Python the language is going to support
> single, double, quad precision floats, and maybe even minifloats with
> just 16 or even fewer bits, then can we please add a field to float_info
> telling us how many bits the floats have?
>

There is no need to know how many bits it is. The meaningful information
about precision and accuracy from a math point of view is already expressed
in float_info; the size in bits isn't relevant.  You can derive the size
from it if you'd like and are willing to assume a binary format: roughly
binary_bits = mant_dig + log2(max_exp) + 1 (mantissa, exponent field, and
sign bit).  But that tells you less than sys.float_info does.
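
To make that concrete, a hedged sketch of the derivation (assuming radix 2
and an IEEE-style layout with an implicit leading mantissa bit and one sign
bit):

```
import math
import sys

fi = sys.float_info
# mant_dig counts the implicit leading bit, so the stored mantissa is
# mant_dig - 1 bits; the exponent field needs log2(max_exp) + 1 bits;
# plus one sign bit.
bits = (fi.mant_dig - 1) + (int(math.log2(fi.max_exp)) + 1) + 1
print(bits)  # 64 for IEEE 754 double; the same math gives 32 for single
```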

-gps
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/CRWZFO5RV2FLXIBH6GV6QGSDXWUZOZCR/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Replace debug runtime checks in release mode with assertions in debug mode

2022-02-08 Thread Gregory P. Smith
What does pyperformance say about --enable-optimizations builds with all of
those compiled out vs today?

I like the runtime safety checking for the purpose of making the lives of
Python C API users easier. But without assertion-enabled CPython builds
being the norm during development (we should try to shift that norm in the
community and make it the popular CI-system default regardless), I'm
concerned this would hurt the developers who need it most.

Having some idea of what it actually gains us performance wise would be
useful.

-gps

On Mon, Feb 7, 2022 at 3:30 PM Brett Cannon  wrote:

>
>
> On Mon, Feb 7, 2022 at 8:59 AM Victor Stinner  wrote:
>
>> On Mon, Feb 7, 2022 at 5:48 PM Guido van Rossum  wrote:
>> > So you're proposing to completely get rid of those three?
>>
>> I don't propose to remove them, but only call them if Python is built
>> in debug mode. Or remove them from the release build, unless
>> ./configure --with-assertions is used.
>>
>>
>> > And you're sure that each and every single call to any of those is
>> better off being an assert()?
>>
>> For many years, many C extensions raised an exception *and* returned a
>> result: that's a bug. The strange part is that in some cases, the
>> exception is somehow ignored and the program continues running fine.
>>
>> That's why I added _Py_CheckFunctionResult() and _Py_CheckSlotResult()
>> which helped to catch such bugs. But before that, these programs were
>> running fine :-)
>>
>> So it's not fully clear to me whether there was really a bug or it's just that
>> Python became more pedantic :-)
>>
>>
>> About PyErr_BadInternalCall(): in 10 years, I saw a few SystemError
>> raised by this function, but usually when I hacked on Python. It's
>> really rare to hit such bug.
>>
>>
>> > (I still haven't gotten into the habit of building in debug mode by
>> default, in part because it *isn't* the default when you invoke ./configure
>> or PCbuild/build.bat.)
>>
>> If you don't develop C extensions, the release mode is faster and enough
>> ;-)
>>
>> Ah. I don't know if CIs like GitHub Actions and Azure Pipelines
>> provide Python debug builds. If it's not the case, it would be nice
>> to have the choice :-)
>>
>
> They do not:
> https://github.com/actions/python-versions/blob/797eb71c41e47d194f563c7ef01790d734534788/builders/ubuntu-python-builder.psm1#L35-L38
> .
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/XNB4ZUTFJFRCDCG2HJ36INGH3PMLFMP6/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Should we require IEEE 754 floating-point for CPython?

2022-02-07 Thread Gregory P. Smith
On Mon, Feb 7, 2022 at 4:52 PM Christopher Barker 
wrote:

> From the perspective of some that writes a lot of computational code:
>
> On Mon, Feb 7, 2022 at 10:25 AM Mark Dickinson  wrote:
>
>> - Should we require the presence of NaNs in order for CPython to build?
>> - Should we require IEEE 754 floating-point for
>> CPython-the-implementation?
>>
>
> Yes, and yes, together, as Mark suggests
>
> - Should we require IEEE 754 floating-point for Python-the-language?
>>
>
> I would say no, but it should be recommended -- see Victor's example of
> MicroPython -- though does anyone have authority over that? IIUC,
> MicroPython already has a few differences -- but is anyone saying they
> shouldn't call it Python? Though, with a quick perusal of the docs,
> MicroPython does seem to support NaN and Inf, at least -- not sure about
> the rest of 754.
>

PEP 7 and PEP 11 are only about CPython. Other Python VMs, full or partial,
are free to do what they want with how they implement floats, if at all.
(They're build-time optional in MicroPython, as are many Python language
syntax features.)

> While we're at it, are 64-bit floats required for either CPython or Python
> the language?
>

CPython: yes, we use a double.
Python the language: no. (float is single precision on many MicroPython
platforms, as it saves precious RAM and improves performance; plus,
microcontroller FPU hardware like an M4 is usually single-precision 32-bit.)
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/X5LWPTTQHS7BZFARSJ3BO5PDO27CRKWO/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Should we require IEEE 754 floating-point for CPython?

2022-02-07 Thread Gregory P. Smith
On Mon, Feb 7, 2022 at 11:06 AM Victor Stinner  wrote:

> Hi Mark,
>
> Aha, good, you posted an email to python-dev :-) These last days, I
> was trying to collect more data about this topic, especially to find
> platforms which *don't* support IEEE 754, before posting to
> python-dev.
>
> Nowadays, outside museums, it's hard to find computers which don't
> implement IEEE 754.
>
> * Since 1998, IBM S/390 supports IEEE 754.
> * Compaq/DEC VAXes didn't use IEEE 754, but none have been built since
> 2000 (you can still use Python 3.10 if it can be built on one!)
> * Some Cray computers like Cray SV1 (1998) don't implement IEEE 754,
> but other Cray computers like Cray T90 (1995) implement IEEE 754.
>
> There are embedded devices with no hardware FPU: in this case, the FPU
> is implemented in software. I expect it to implement IEEE 754. Is
> CPython a good target for such small CPUs which have no hardware FPU?
> MicroPython may better fit their needs.
>
> On the other side, all moderns CPU architectures support IEEE 754:
> Intel x86, ARM, IBM Power and PowerPC, Compaq/DEC Alpha, HP PA-RISC,
> Motorola 68xxx and 88xxx, SGI R-, Sun SPARC.
>
> Sources:
>
> *
> https://en.wikipedia.org/wiki/Floating-point_arithmetic#IEEE_754:_floating_point_in_modern_computers
> *
> https://stackoverflow.com/questions/2234468/do-any-real-world-cpus-not-use-ieee-754
>
>
> On Mon, Feb 7, 2022 at 7:25 PM Mark Dickinson  wrote:
> > - Should we require the presence of NaNs in order for CPython to build?
> > - Should we require IEEE 754 floating-point for
> CPython-the-implementation?
>
> In the past, when we deprecated the support for an old platform, we
> didn't suddenly remove the code. We made sure that it's no longer
> possible to build on it. So if anyone notices, it's easy to revert
> (ex: remove the few lines in configure).
>
> Would it make sense to trigger a build error on platforms which don't
> support IEEE 754 in Python 3.11, and later decide if it's time to
> remove the code in Python 3.12?
>
> Well. If you ask *me*, I'm in favor of removing the code right now. If
> someone needs to support a platform which doesn't support IEEE 754,
> the support can be maintained *outside* the Python source code, as
> external patches or as a Git fork, no?
>
> Honestly, I never got access to any machine which doesn't support IEEE
> 754 (or nobody told me!).
>
>
> > - Should we require IEEE 754 floating-point for Python-the-language?
>
> Right now, I have no opinion on this question.
>
>
> > Note that on the current main branch there's a Py_NO_NAN macro that
> builders can define to indicate that NaNs aren't supported, but the Python
> build is currently broken if Py_NO_NAN is defined (see
> https://bugs.python.org/issue46656). If the answer to the first question
> is "No", then we need to fix the build under Py_NO_NAN. That's not a big
> deal - perhaps a couple of hours of work.
>
> My comment on bpo-46656: "Python uses Py_NAN without "#ifdef Py_NAN"
> guard since 2008. Building Python with Py_NO_NAN never worked. Nobody
> reported such build failure in the last 14 years..."
>
> If anyone would try building/using Python on a platform without NAN, I
> would expect that we would get a bug report or an email. I'm not aware
> of anything like that, so it seems like nobody uses Python on such
> platform.
>
> Victor
>

It's 2022. We should just require both NaN and IEEE-754.

By the time a system is large enough to build and run CPython, it is
exceedingly unlikely that it will not be able to do that, even if that means
software emulation of IEEE 754 floating point on something large yet odd
enough not to have an FPU.

The places where non-IEEE-754 and non-NaN floating point are likely to
exist are in specialized parallelized coprocessor hardware. Not the general
purpose CPU running CPython.
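
In concrete terms, a small sketch of the guarantees such a requirement would
let everyday code take for granted (my illustration, not from the thread):

```
import math

nan = float("nan")
assert math.isnan(nan) and nan != nan  # NaN exists and is unordered
assert math.isinf(float("inf"))        # infinities exist
assert 0.5 + 0.25 == 0.75              # binary fractions behave exactly
```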

-gps
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/UQXG7E6ONOQFCIES4PQIMJFZDXNGQJTM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: It's now time to deprecate the stdlib urllib module

2022-02-06 Thread Gregory P. Smith
On Sun, Feb 6, 2022 at 9:13 AM Paul Moore  wrote:

> On Sun, 6 Feb 2022 at 16:51, Christian Heimes 
> wrote:
>
> > The urllib package -- and to some degree also the http package -- are
> > constant source of security bugs. The code is old and the parsers for
> > HTTP and URLs don't handle edge cases well. Python core lacks a true
> > maintainer of the code. To be honest, we have to admit defeat and be up
> > front that urllib is not up to the task for this decade. It was designed
> > written during a more friendly, less scary time on the internet.
> >
> > If I had the power and time, then I would replace urllib with a simpler,
> > reduced HTTP client that uses platform's HTTP library under the hood
> > (WinHTTP on Windows, NSURLSession (?) on macOS, Web API for Emscripten,
> > maybe curl on Linux/BSD). For non-trivial HTTP requests, httpx or
> > aiohttp are much better suited than urllib.
> >
> > The second best option is to reduce the feature set of urllib to core
> > HTTP (no ftp, proxy, HTTP auth) and a partial rewrite with stricter,
> > more standard conform parsers for urls, query strings, and RFC 2822
> > instead of RFC 822 for headers.
>
> I'd likely be fine with either of these two options. I'm not worried
> about supporting "advanced" uses. But having no way of getting a file
> from the internet without relying on 3rd party packages seems like a
> huge gap in functionality for a modern language. And having to use a
> 3rd party library to parse URLs will simply push more people to use
> home-grown regexes rather than something safe and correct. Remember
> that a lot of Python users are not professional software developers,
> but scientists, data analysts, and occasional users, for whom the
> existence of something in the stdlib is the *only* reason they have
> any idea that URLs need specialised parsing in the first place.
>
> And while we all like to say 3rd party modules are great, the reality
> is that they provide a genuine problem for many of these
> non-specialist users - and I say that as a packaging specialist and
> pip maintainer. The packaging ecosystem is *not* newcomer-friendly in
> the way that core Python is, much as we're trying to improve that
> situation.
>
> I've said it previously, but I'll reiterate - IMO this *must* have a
> PEP, and that PEP must be clear that the intention is to *remove*
> urllib, not simply to "deprecate and then think about it". That could
> be making it part of PEP 594, or a separate PEP, but one way or
> another it needs a PEP.
>

This would need to be its own PEP.  urllib et al. are used by virtually
everybody; they're highly used batteries.

I'm -1 on deprecating it for that reason alone.

Christian proposes that a reduced-scope rewrite of it might be nice, but I
think the disruption to the world and the loss of trust in Python would be
similar either way.

-gps


>
> Paul
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/QMSFZBQJFWKFFE3LFQLQE2AT6WKMLPGL/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Moving away from _Py_IDENTIFIER().

2022-02-03 Thread Gregory P. Smith
On Wed, Feb 2, 2022 at 2:48 PM Eric Snow 
wrote:

> I'm planning on moving us to a simpler, more efficient alternative to
> _Py_IDENTIFIER(), but want to see if there are any objections first
> before moving ahead.  Also see https://bugs.python.org/issue46541.
>
> _Py_IDENTIFIER() was added in 2011 to replace several internal string
> object caches and to support cleaning up the cached objects during
> finalization.  A number of "private" functions (each with a
> _Py_Identifier param) were added at that time, mostly corresponding to
> existing functions that take PyObject* or char*.  Note that at present
> there are several hundred uses of _Py_IDENTIFIER(), including a number
> of duplicates.
>
> My plan is to replace our use of _Py_IDENTIFIER() with statically
> initialized string objects (as fields under _PyRuntimeState).  That
> involves the following:
>
> * add a PyUnicodeObject field (not a pointer) to _PyRuntimeState for
> each string that currently uses _Py_IDENTIFIER() (or
> _Py_static_string())
> * statically initialize each object as part of the initializer for
> _PyRuntimeState
> * add a macro to look up a given global string
> * update each location that currently uses _Py_IDENTIFIER() to use the
> new macro instead
>
> Pros:
>
> * reduces indirection (and extra calls) for C-API functions that need
> the strings (making the code a little easier to understand and
> speeding it up)
> * the objects are referenced from a fixed address in the static data
> section instead of the heap (speeding things up and allowing the C
> compiler to optimize better)
> * there is no lazy allocation (or lookup, etc.) so there are fewer
> possible failures when the objects get used (thus less error return
> checking)
> * saves memory (a little, at least)
> * if needed, the approach for per-interpreter is simpler
> * helps us get rid of several hundred static variables throughout the code
> base
> * allows us to get rid of _Py_IDENTIFIER() and a bunch of related
> C-API functions
> * "deep frozen" modules can use the global strings
> * commonly-used strings could be pre-allocated by adding
> _PyRuntimeState fields for them
>
> Cons:
>
> * a little less convenient: adding a global string requires modifying
> a separate file from the one where you actually want to use the string
> * strings can get "orphaned" (I'm planning on checking in CI)
> * some strings may never get used for any given ./python invocation
> (not that big a difference though)
>
> I have a PR up (https://github.com/python/cpython/pull/30928) that
> adds the global strings and replaces use of _Py_IDENTIFIER() in our
> code base, except for in non-builtin stdlib extension modules.  (Those
> will be handled separately if we proceed.)  The PR also adds a CI
> check for "orphaned" strings.  It leaves _Py_IDENTIFIER() for now, but
> disallows any Py_BUILD_CORE code from using it.
>
> With that change I'm seeing a 1% improvement in performance (see
> https://github.com/faster-cpython/ideas/issues/230).
>
> I'd also like to actually get rid of _Py_IDENTIFIER(), along with
> other related API including ~14 (private) C-API functions.  Dropping
> all that helps reduce maintenance costs.  However, at least one PyPI
> project (blender) is using _Py_IDENTIFIER().  So, before we could get
> rid of it, we'd first have to deal with that project (and any others).
>

Datapoint: an internal code search turns up blender, multidict, and
typed_ast as open source users of _Py_IDENTIFIER.  Easy to clean up as
PRs.  There are a couple of internal uses as well, all of which are
similarly easy to address and are only in code that is expected to need API
cleanup tweaks between CPython versions.

Overall I think addressing the broader strategy question among the
performance-focused folks is worthwhile, though.

-gps


>
> To sum up, I wanted to see if there are any objections before I start
> merging anything.  Thanks!
>
> -eric
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/BCQJ6AZMPTI2DGFQPC27RUIFJQGDIOQD/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: [Steering-council] Re: PEP 651, Robust Stack Overflow Handling, Rejection notice

2022-01-30 Thread Gregory P. Smith
-cc: python-steering-council

On Fri, Mar 5, 2021 at 4:26 PM Guido van Rossum  wrote:

> On Fri, Mar 5, 2021 at 11:11 AM Brett Cannon  wrote:
>
>> Speaking for myself ...
>>
>
> Ditto ...
>
> On Fri, Mar 5, 2021 at 7:04 AM Mark Shannon  wrote:
>> [...]
>>
>>> In some cases, the PEP would have improved the situation.
>>>
>>> For example:
>>> sys.setrecursionlimit(5000)
>>> def f():
>>>  f()
>>>
>>> Currently, it raises a RecursionError on linux, but crashes the
>>> interpreter on Windows.
>>> With PEP 651 it would have raised a RecursionError on both platforms.
>>>
>>> Am I missing something here?
>>>
>>
>> So your example shows a user already comfortable in raising their
>> recursion limit to work around needing more stack space to reach
>> completion. What is stopping the user from continuing to raise the limit
>> until they still reach their memory limit even with PEP 651? If you're
>> worried about runaway recursion you will very likely hit that with the
>> default stack depth already, so I personally don't see how a decoupled
>> stack counter from the C stack specifically makes it any easier/better to
>> detect runaway recursion. And if I need more recursion than the default,
>> you're going to bump the recursion depth anyway, which weakens the
>> protection in either the C or decoupled counter scenarios. Sure, it's
>> currently platform-specific, but plenty of people want to push that limit
>> based on their machine anyway and don't need consistency on platforms they
>> will never run on, i.e. I don't see a huge benefit to being able to say
>> that an algorithm consistently won't go past 5000 calls on all platforms
>> compared to what the C stack protection already gives us (not to say
>> there's zero benefit, but it isn't massive or widespread either IMO). I
>> personally just don't see many people saying, "I really want to limit my
>> program to an exact call stack depth of 5000 on all platforms which is
>> beyond the default, but anything under is fine and anything over --
>> regardless of what the system can actually handle -- is unacceptable".
>>
>> Tack on the amount of changes required to give a cross-platform stack
>> count and limit check compared to the benefit being proposed, and to me
>> that pushes what the PEP is proposing into net-negative payoff.
>>
>
> To me, the point of that example is as a reminder that currently fiddling
> with the recursion limit can cause segfaults.
>
> Mark's PEP proposes two, somewhat independent, changes: (1) don't consume
> C stack on pure Python-to-Python (pp2p) function calls; (2) implement
> fool-proof C stack overflow checks.
>
> Change (2) makes it safe for users to mess with the stack overflow limit
> however they see fit. Despite (1), the limit for pp2p calls remains at 1000
> so that users who unintentionally write some naively recursive code don't
> have to wait until they fill up all of memory before they get a traceback.
> (Of course they could also write a while-True loop that keeps adding an
> item to a list and they'd end up in the same situation. But in my
> experience that situation is less painful to deal with than accidental
> stack overflow, and I'd shudder at the thought of a traceback of a million
> lines.)
>
> Given that we have (1), why is (2) still needed? Because there are ways to
> recursively call Python code that aren't pp2p calls. By a pp2p (pure
> Python-to-Python) call, I mean any direct call, e.g. a method call or a
> function call. But what about other operations that can invoke Python code?
> E.g. if we have a dict d and a class C, we could create an instance of C
> and use it to index d, e.g. d[C()]. This operation is not a pp2p call --
> the BINARY_SUBSCR opcode calls the dict's `__getitem__` method, and that
> calls the key's `__hash__` method. Here's a silly demonstration:
> ```
> def f(c):
> d = {}
> return d[c]
>
> class C:
> def __hash__(self):
> return f(self)
>
> f(C())
> ```
> Note that the "traceback compression" implemented for simple recursive
> calls fails here -- I just ran this code and got 2000 lines of output.
>
> The way I imagine Mark wants to implement pp2p calls means that in this
> case each recursion step *does* add several other C stack frames, and this
> would be caught by the limit implemented in (2). I see no easy way around
> this -- after all the C code involved in the recursion could be a piece of
> 3rd party C code that itself is not at fault.
>
> So we could choose to implement only (2), robust C stack overflow checks.
> This would require a bunch of platform-specific code, and there might be
> platforms where we don't know how to implement this (I vaguely recall a
> UNIX version where main() received a growable stack but each thread only
> had a fixed 64kb stack), but those would be no worse off than today.
>
> Or we could choose to implement only (1), eliminating C stack usage for
> pp2p calls. But in that case we'd still need a recursion limit for non-pp2p
> calls. 

[Python-Dev] Re: How about using modern C++ in development of CPython ?

2022-01-22 Thread Gregory P. Smith
On Thu, Jan 20, 2022 at 8:16 PM Dan Stromberg  wrote:

>
> On Fri, Apr 16, 2021 at 11:13 AM Christian Heimes 
> wrote:
>
>> On 16/04/2021 19.14, redrad...@gmail.com wrote:
>> > What personally stops me from contributing to CPython is that it is
>> > written in pure C !!
>> > I wrote code in both pure C and C++, but I like writing code in C++,
>> > because it simplifies things without losing performance
>>
>> There are plenty of Open Source projects that could use more capable C++
>> developers. :)
>>
>> I'm not a fan of C++. It has its use cases, e.g. in UI. Python core
>> isn't the best fit. AFAIK most core devs are not fluent in C++. Despite
>> it's name, C++ is really a different language than C. It has a different
>> ABI and stdlib than C, too. In my personal opinion C++ won't give us any
>> net benefits. I'd much rather go for Rust than C++ to gain memory safety.
>>
> Agreed.
>
> Rust would be much better than C++.
>

At least one Python runtime has been written in Rust already:
https://github.com/RustPython/RustPython

The fundamental reason we haven't embarked on new language things in
CPython is compatibility and complexity.  If there were clear, demonstrable
wins to adopting a language in addition to C for the CPython runtime, we
could consider it (ultimately as a PEP with reasoning, alternatives, and
rejected ideas all laid out). But it isn't going to be done on a whim, as a
new language doesn't magically solve problems; it just adds more of them.

Of the three things listed at the start of this thread, the first two don't
exist. From huge-codebase experience with C++, it does not cause
significantly better (1) readability or (2) maintainability on its own
compared to C; both of those are entirely up to the software engineering
practices adopted within the project, no matter what the language is.  (3)
RAII is the number one thing I would enjoy having in the CPython
internals.  Manually managing Py_DECREFs is inhumane.  As a result we have
built up infrastructure to detect leaks in our test suite, so new refcount
leak problems *usually* don't happen, so long as test coverage is in place
(ex: 3.10.2...). And RAII isn't magic; even it can be used wrong. It's just
that the location and spelling of the wrongness change.

Rewrites might sound "easy" (emphasis on the air quotes! nobody should
think it's easy)... but only until you actually have to match API and ABI
and bug for bug compatibility with the entire corpus of existing real world
code. Just ask PyPy. ;)

-gps
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/LTYWGK7FS7P62BK5OB4XVUCXLN2YFOJS/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Request to revert unittest and configparser incompatible changes in Python 3.11

2022-01-18 Thread Gregory P. Smith
On Tue, Jan 18, 2022 at 6:24 AM Victor Stinner  wrote:

> Hi,
>
> My colleagues Tomáš Hrnčiar and Miro Hrončok made good progress on
> updating Python 3.10 to Python 3.11 in Fedora, but some specific
> Python 3.11 incompatible changes are causing more troubles than
> others:
> https://discuss.python.org/t/experience-with-python-3-11-in-fedora/12911
>
> We propose to revert the following 2 changes in Python 3.11 and
> postpone them in a later Python version, once most projects will be
> compatible with these changes:
>
> * Removal of unittest aliases (bpo-45162): it broke 61 Fedora packages
> * Removals from configparser module (bpo-45173) - broke 28 Fedora packages
>
> --
>
> We reported the issue to many affected projects, or when possible, we
> also wrote a pull request to make the code compatible (lot of those
> were made by others, e.g. Hugo van Kemenade, thank you!).
>

+1 to rolling both of these back for 3.11.  Deprecation removals are hard.
Surfacing these to the impacted upstream projects to provide time for those
to integrate the changes is the right way to make these changes stick in
3.12 or later.  Thanks for doing a significant chunk of that work!

As you've done the work to clean up a lot of other OSS projects, I suggest
we defer this until 3.12 with the intent that we won't defer it again. That
doesn't mean we can't hold off further if needed, just that we believe
pushing for this now, and proactively pushing for a bunch of cleanups, has
improved the state of the world such that the future is brighter.  That's a
much different strategy than our passive-aggressive DeprecationWarnings.

-gps


>
> The problem is that fixing a Fedora package requires multiple steps:
>
> * (1) Propose a pull request upstream
> * (2) Get the pull request merged upstream
> * (3) Wait for a new release upstream
> * (4) Update the Fedora package downstream, or backport the change in
> Fedora (only needed by Fedora)
>
> Identifying the Python incompatible changes causing most troubles took
> us a lot of time, but we did this work. Reverting the two Python 3.11
> incompatible changes (causing most troubles) will save us "bug triage"
> time, to get more time on updating projects to Python 3.11 for the
> other remaining incompatible changes.
>
> We are not saying that these incompatible changes are bad, it's just a
> matter of getting most projects ready before Python 3.11 final will be
> released.
>
> --
>
> By the way, before making a change known to be incompatible, it would
> be nice to run a code search on PyPI top 5000 projects to estimate how
> many projects will be broken, and try to update these projects
> *before* making the change.
>
> For example, only introduce an incompatible change into Python once
> less than 15 projects are affected. Getting a pull request merged is
> nice, but a release including the fix is way better for practical
> reasons: people use "pip install ".
>
> --
>
> Fedora work on Python 3.11 is public and tracked at:
> https://bugzilla.redhat.com/show_bug.cgi?id=PYTHON3.11
>
> Click on "depends on" to see current issues:
>
> https://bugzilla.redhat.com/buglist.cgi?bug_id=2016048_id_type=anddependson=tvp
>
> Example of bz #2025600: mom fails to build with Python 3.11:
> AttributeError: module 'configparser' has no attribute
> 'SafeConfigParser'.
>
>
> Victor Stinner -- in the name of the Python Red Hat / Fedora maintenance
> team
> --
> Night gathers, and now my watch begins. It shall not end until my death.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/3J3VKNTKGPWACFVDWPRCS7FNED2A34R4/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Request to revert unittest and configparser incompatible changes in Python 3.11

2022-01-18 Thread Gregory P. Smith
On Tue, Jan 18, 2022 at 10:58 PM Christopher Barker 
wrote:

> On Tue, Jan 18, 2022 at 10:30 AM Brett Cannon  wrote:
>
>>  I remember that "noisy by default" deprecation warnings were widely
>>> despised.
>>>
>>> One thought, what if they were off by default UNLESS you were doing unit
>>> tests?
>>>
>>
>> I believe pytest already does this.
>>
>
> Indeed it does, at least in recent versions (1-2 yrs ago?)
>
> And even that is pretty darn annoying. It's really helpful for my code,
> but they often get lost in the noise of all the ones I get from upstream
> packages.
>
> I suppose I need to take the time to figure out how to silence the ones I
> don't want.
>
> And it does prompt me to make sure that the upstream packages are working
> on it.
>
> Now we just need to get more people to use pytest :-)
>

Our stdlib unittest already enables warnings by default per
https://bugs.python.org/issue10535.
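
A minimal sketch of that default behavior (the module and test names here
are mine, purely illustrative):

```
import unittest
import warnings

class TestDeprecations(unittest.TestCase):
    def test_old_api(self):
        # unittest's default warnings filter surfaces this in the test
        # run's output without needing any -W flag (per bpo-10535).
        warnings.warn("old_api() is deprecated", DeprecationWarning)

if __name__ == "__main__":
    unittest.main()
```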

Getting the right people to pay attention to them is always the hard part.


>
> -CHB
>
> --
> Christopher Barker, PhD (Chris)
>
> Python Language Consulting
>   - Teaching
>   - Scientific Software Development
>   - Desktop GUI and Web Development
>   - wxPython, numpy, scipy, Cython
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/5YOEMAIVXMVWCORAY54LZQO62755HQGX/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Is anyone using 15-bit PyLong digits (PYLONG_BITS_IN_DIGIT=15)?

2022-01-16 Thread Gregory P. Smith
On Sun, Jan 16, 2022 at 1:51 PM Mark Dickinson  wrote:

> On Sun, Jan 16, 2022 at 9:28 PM Guido van Rossum  wrote:
>
>> Does the optimization for //10 actually help in the real world? [...]
>>
>
> Yep, I don't know. If 10 is *not* the most common small divisor in real
> world code, it must at least rank in the top five. I might hazard a guess
> that division by 2 would be more common, but I've no idea how one would go
> about establishing that.
>

All of the int constants relating to time and date calculations show up
frequently as well.  But I'd assume -fprofile-values isn't likely to pick
many to specialize on, to avoid adding branches, so maybe 10 ironically is
it.  --enable-optimizations with clang doesn't trigger value specialization
(I'm pretty sure they support the concept, but I've never looked at how).

>
> The reason that the divisor of 10 is turning up from the PGO isn't a
> particularly convincing one - it looks as though it's a result of our
> testing the builtin int-to-decimal-string conversion by comparing with an
> obviously-correct repeated-division-by-10 algorithm.
>
> Then again I'm not sure what's *lost* even if this optimization is
>> pointless -- surely it doesn't slow other divisions down enough to be
>> measurable.
>>
>
> Agreed. That at least is testable. I can run some timings (but not
> tonight).
>

BTW, I am able to convince clang 11 and higher to produce a 64:32 divide
instruction with a modified version of the code. Basically just taking your
assembly divl variant as an example and writing that explicitly as the
operations in C:

https://godbolt.org/z/63eWPczjx

Taking that code and turning it into an actual test within CPython itself,
it appears to deliver the desired speedup in gcc9 as well.
https://github.com/python/cpython/pull/30626 for
https://bugs.python.org/issue46406.

20% faster in microbenchmarks with x//1 or x//17 or other non-specialized
divide values; similar speedup even in --enable-optimizations builds, with
both gcc9 and clang13.

The compilers seem happier optimizing that code.

-gps
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/FJRKRXSXPF24C3NHGYZMVPB3ZZPCBI6A/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Is anyone using 15-bit PyLong digits (PYLONG_BITS_IN_DIGIT=15)?

2022-01-14 Thread Gregory P. Smith
On Fri, Jan 14, 2022 at 9:50 AM Mark Dickinson  wrote:

> On Sun, Jan 2, 2022 at 10:35 AM Mark Dickinson  wrote:
>
>> Division may still be problematic.
>>
>
> On that note: Python divisions are somewhat crippled even on x64. Assuming
> 30-bit digits, the basic building block that's needed for multi-precision
> division is a 64-bit-by-32-bit unsigned integer division, emitting a 32-bit
> quotient (and ideally also a 32-bit remainder). And there's an x86/x64
> instruction that does exactly that, namely DIVL. But without using inline
> assembly, current versions of GCC and Clang apparently can't be persuaded
> to emit that instruction from the longobject.c source - they'll use DIVQ (a
> 128-bit-by-64-bit division, albeit with the top 64 bits of the dividend set
> to zero) on x64, and the __udivti3 or __udivti4 intrinsic on x86.
>
> I was curious to find out what the potential impact of the failure to use
> DIVL was, so I ran some timings. A worst-case target is division of a large
> (multi-digit) integer by a single-digit integer (where "digit" means digit
> in the sense of PyLong digit, not decimal digit), since that involves
> multiple CPU division instructions in a fairly tight loop.
>
> Results: on my laptop (2.7 GHz Intel Core i7-8559U, macOS 10.14.6,
> non-optimised non-debug Python build), a single division of 10**1000 by 10
> takes ~1018ns on the current main branch and ~722ns when forced to use the
> DIVL instruction (by inserting inline assembly into the inplace_divrem1
> function). IOW, forcing use of DIVL instead of DIVQ, in combination
> with getting the remainder directly from the DIV instruction instead of
> computing it separately, gives a 41% speedup in this particular worst case.
> I'd expect the effect to be even more marked on x86, but haven't yet done
> those timings.
>
> For anyone who wants to play along, here's the implementation of the
> inplace_divrem1 (in longobject.c) that I was using:
>
> static digit
> inplace_divrem1(digit *pout, digit *pin, Py_ssize_t size, digit n)
> {
>     digit remainder = 0;
>
>     assert(n > 0 && n <= PyLong_MASK);
>     while (--size >= 0) {
>         twodigits dividend = ((twodigits)remainder << PyLong_SHIFT) | pin[size];
>         digit quotient, high, low;
>         high = (digit)(dividend >> 32);
>         low = (digit)dividend;
>         __asm__("divl %2\n"
>                 : "=a" (quotient), "=d" (remainder)
>                 : "r" (n), "a" (low), "d" (high)
>         );
>         pout[size] = quotient;
>     }
>     return remainder;
> }
>
>
> I don't know whether we *really* want to open the door to using inline
> assembly for performance reasons in longobject.c, but it's interesting to
> see the effect.
>

That only appears true in default boring -O2 builds.  Use `./configure
--enable-optimizations` and the C version is *much* faster than your asm
one...

250ns for C vs 370ns for your asm divl one using old gcc 9.3 on my zen3
when compiled using --enable-optimizations.

tested using ` -m timeit -n 150 -s 'x = 10**1000; r=x//10; assert r ==
10**999, r' 'x//10' `

Use of __asm__ appears to have prevented the compiler from being able to
fully optimize that code in PGO/FDO mode.

I trust the compiler toolchain to know what's close to best here.
Optimizing use of a div instruction isn't entirely straight forward as on
many microarchitectures the time required varies based on the inputs as
they'll internally implement looping when values exceed the bits with which
their hw operates.  there's probably interesting ways to optimize bignum
division using opencl and vector hardware as well - for the case when you
know you've got over a dozen digits; but that's what numpy et. al. are for.
Bignums in python are a convenience. Most values normal code deals with are
less than 2**100.

-Greg


>
> --
> Mark
>
>
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/ZWGPO3TMCI7WNLC3EMS26DIKI5D3ZWMK/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/WAZGSHPS7LBN4QNYVVSUG2RC26322L5D/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Minor inconvenience: f-string not recognized as docstring

2022-01-11 Thread Gregory P. Smith
On Tue, Jan 11, 2022 at 10:29 AM Guido van Rossum  wrote:

> I personally think F-strings should not be usable as docstrings. If you
> want a dynamically calculated docstring you should assign it dynamically,
> not smuggle it in using a string-like expression. We don't allow "blah {x}
> blah".format(x=1) as a docstring either, not "foo %s bar" % x.
>

Agreed.  If we wanted to remove the wart of constant f-strings happening to
work as an implementation detail in this context, that *could* be made into
a warning.  But that kind of check may be best left to a linter for *all*
of these dynamic situations that don't wind up populating __doc__.
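
For the archives, a quick sketch of the behavior Antoine demonstrated, plus
the explicit-assignment alternative Guido describes (class names are mine):

```
# A constant f-string is not treated as a docstring:
class C:
    f"foo"

print(C.__doc__)  # None

# Explicit assignment works, including with substitutions:
class D:
    __doc__ = f"foo {21 * 2}"

print(D.__doc__)  # 'foo 42'
```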

-gps


>
> On Tue, Jan 11, 2022 at 8:12 AM Antoine Pitrou  wrote:
>
>> On Tue, 11 Jan 2022 10:58:03 -0500
>> "Eric V. Smith"  wrote:
>> > Constant f-strings (those without substitutions) as doc strings used to
>> > work, since the compiler turns them into normal strings.
>> >
>> > I can't find exactly where it was removed, but there was definitely
>> > discussion about it. See https://bugs.python.org/issue28739 for at
>> least
>> > part of the discussion.
>>
>> Ah, sorry for the misunderstanding.  While the example I showed doesn't
>> have any substitutions, I'm interested in the non-trivial (non-constant)
>> case actually :-)
>>
>> Regards
>>
>> Antoine.
>>
>>
>> >
>> > Eric
>> >
>> > On 1/11/2022 8:41 AM, Antoine Pitrou wrote:
>> > > Hello,
>> > >
>> > > Currently, an f-string is not recognized as a docstring:
>> > >
>> > > >>> class C: f"foo"
>> > > >>> C.__doc__
>> > >
>> > > This means you need to use an (admittedly easy) workaround:
>> > >
>> > > >>> class C: __doc__ = f"foo"
>> > > >>> C.__doc__
>> > > 'foo'
>> > >
>> > > Shouldn't the former be allowed for convenience?
>> > >
>> > > Regards
>> > >
>> > > Antoine.
>> > >
>> > >
>
>
> --
> --Guido van Rossum (python.org/~guido)
> *Pronouns: he/him **(why is my pronoun here?)*
> 
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/QFGCXW25TZOMEN2DRVLDQ4XQQSYNNTI7/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Sanity check about ctypes

2022-01-05 Thread Gregory P. Smith
On Wed, Jan 5, 2022 at 3:17 PM Yonatan Zunger  wrote:

> Hey everyone.
>
> Quick sanity check: The ctypes docs
>  refer to
> _CData as a non-public class which is in the module, but _ctypes.c doesn't
> actually export it
> .
> (I discovered this because it turns out that typeshed *is* referencing
> _CData, e.g. in its type annotations for RawIOBase
> 
> )
>
> Is this intended behavior in CPython (in which case the docs are a bit off
> and typeshed has a bug), or is it unexpected to people on this list (in
> which case it's an issue in _ctypes.c)?
>

typeshed is presumably referring to itself. It defines an interface for
ctypes._CData in
https://github.com/python/typeshed/blob/master/stdlib/ctypes/__init__.pyi#L82

The CPython ctypes docs *seem* reasonable to me. There is such a class. It
is not public, so you cannot access ctypes._CData in any direct manner.
That it gets called a class may be somewhat historical - its purpose is to
provide a common interface. What code would ever actually care that it used
class mechanisms as an internal implementation detail to do that?
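
A quick sketch of what that looks like at the REPL (current CPython
behavior):

```
import ctypes

# The class exists internally -- it appears in the MRO of every ctypes
# type -- but it is not re-exported as a public module attribute:
print(ctypes.c_int.__mro__)       # (..., <class '_ctypes._CData'>, <class 'object'>)
print(hasattr(ctypes, "_CData"))  # False
```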

-gps


>
> Yonatan
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/ENKP334GO73ISSQ5XM5UWTOQYGTROK4E/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Is anyone using 15-bit PyLong digits (PYLONG_BITS_IN_DIGIT=15)?

2022-01-04 Thread Gregory P. Smith
On Tue, Jan 4, 2022 at 3:24 AM Tal Einat  wrote:

> I have a spare RPi zero that I could try to set up as a buildbot. Would
> that be useful?
>

No need. We've already got a 32-bit Raspbian bot; adding another wouldn't
add value. The rpi0/1/2 are too slow to compile on anyway.

-gps

On Tue, Jan 4, 2022 at 10:59 AM Antoine Pitrou  wrote:
>
>> On Mon, 3 Jan 2022 22:40:25 -0800
>> "Gregory P. Smith"  wrote:
>> >
>> > rerunning a mere few of those in --rigorous mode for more runs does not
>> > significantly improve the stddev so I'm not going to let that finish.
>>
>> The one benchmark that is bigint-heavy is pidigits AFAIK, so you might
>> want to re-run that one if you want a more rigorous confirmation that
>> there is no regression.
>>
>> > my recommendation: proceed with removing 15-bit bignum digit support.
>> > 30-bit only future with simpler better code here we come.
>>
>> Sounds reasonable to me as well.
>>
>> Regards
>>
>> Antoine.
>>
>>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/FZ7WAQGF6UVPVNG7SHIYTYWYBZXAPJSV/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Is anyone using 15-bit PyLong digits (PYLONG_BITS_IN_DIGIT=15)?

2022-01-03 Thread Gregory P. Smith
On Sun, Jan 2, 2022 at 2:37 AM Mark Dickinson  wrote:

> On Sat, Jan 1, 2022 at 9:05 PM Antoine Pitrou  wrote:
>
>> Note that ARM is merely an architecture with very diverse
>> implementations having quite differing performance characteristics.  [...]
>>
>
> Understood. I'd be happy to see timings on a Raspberry Pi 3, say. I'm not
> too worried about things like the RPi Pico - that seems like it would be
> more of a target for MicroPython than CPython.
>
> Wikipedia thinks, and the ARM architecture manuals seem to confirm, that
> most 32-bit ARM instruction sets _do_ support the UMULL
> 32-bit-by-32-bit-to-64-bit multiply instruction. (From
> https://en.wikipedia.org/wiki/ARM_architecture#Arithmetic_instructions:
> "ARM supports 32-bit × 32-bit multiplies with either a 32-bit result or
> 64-bit result, though Cortex-M0 / M0+ / M1 cores don't support 64-bit
> results.") Division may still be problematic.
>

It's rather irrelevant anyway: the pi zero/one is the lowest-spec arm that
matters at all. Nobody is ever going to ship something worse than that
capable of running CPython.

Anyway, I ran actual benchmarks on a pi3. On 32-bit raspbian I built
CPython 3.10 with no configure flags and with --enable-big-digits for
30-bit digits, and ran pyperformance 1.0.2 on both builds.
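
(If you want to confirm which digit size a given build ended up with,
`sys.int_info` reports it; a tiny sketch, with output depending on the build:)

```python
import sys

# bits_per_digit is 15 or 30; sizeof_digit is the C digit type's size in bytes.
print(sys.int_info.bits_per_digit, sys.int_info.sizeof_digit)
# The default 32-bit build used here prints "15 2";
# the --enable-big-digits build prints "30 4".
```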

Caveat: this is not a good system to run benchmarks on.  It has widely
variable performance (it has a tiny heatsink which never meaningfully got
hot), and the storage is a random microsd card. Each full pyperformance run
took 6 hours. :P

Results basically say: no notable difference.  Most benchmarks do not change,
and the variability (look at those stddevs and how they overlap on the few
things that produced a "significant" result at all) is quite high.  Things
wholly unrelated to integers, such as the various regex benchmarks showing up
as faster, demonstrate the unreliability of the numbers - and how pointless
it is to care about this fine a level of performance detail on this platform.

```
pi@pi3$ pyperf compare_to 15bit.json 30bit.json
2to3: Mean +- std dev: [15bit] 7.88 sec +- 0.39 sec -> [30bit] 8.02 sec +- 0.36 sec: 1.02x slower
crypto_pyaes: Mean +- std dev: [15bit] 3.22 sec +- 0.34 sec -> [30bit] 3.40 sec +- 0.22 sec: 1.06x slower
fannkuch: Mean +- std dev: [15bit] 13.4 sec +- 0.5 sec -> [30bit] 13.8 sec +- 0.5 sec: 1.03x slower
pickle_list: Mean +- std dev: [15bit] 74.7 us +- 22.1 us -> [30bit] 85.7 us +- 15.5 us: 1.15x slower
pyflate: Mean +- std dev: [15bit] 19.6 sec +- 0.6 sec -> [30bit] 19.9 sec +- 0.6 sec: 1.01x slower
regex_dna: Mean +- std dev: [15bit] 2.99 sec +- 0.24 sec -> [30bit] 2.81 sec +- 0.22 sec: 1.06x faster
regex_v8: Mean +- std dev: [15bit] 520 ms +- 71 ms -> [30bit] 442 ms +- 115 ms: 1.18x faster
scimark_monte_carlo: Mean +- std dev: [15bit] 3.31 sec +- 0.24 sec -> [30bit] 3.22 sec +- 0.24 sec: 1.03x faster
scimark_sor: Mean +- std dev: [15bit] 6.42 sec +- 0.34 sec -> [30bit] 6.27 sec +- 0.33 sec: 1.03x faster
spectral_norm: Mean +- std dev: [15bit] 4.85 sec +- 0.31 sec -> [30bit] 4.74 sec +- 0.20 sec: 1.02x faster
unpack_sequence: Mean +- std dev: [15bit] 1.42 us +- 0.42 us -> [30bit] 1.60 us +- 0.33 us: 1.13x slower

Benchmark hidden because not significant (47): chameleon, chaos, deltablue,
django_template, dulwich_log, float, go, hexiom, json_dumps, json_loads,
logging_format, logging_silent, logging_simple, mako, meteor_contest,
nbody, nqueens, pathlib, pickle, pickle_dict, pickle_pure_python, pidigits,
python_startup, python_startup_no_site, raytrace, regex_compile,
regex_effbot, richards, scimark_fft, scimark_lu, scimark_sparse_mat_mult,
sqlalchemy_declarative, sqlalchemy_imperative, sqlite_synth, sympy_expand,
sympy_integrate, sympy_sum, sympy_str, telco, tornado_http, unpickle,
unpickle_list, unpickle_pure_python, xml_etree_parse, xml_etree_iterparse,
xml_etree_generate, xml_etree_process
```

rerunning a mere few of those in --rigorous mode for more runs does not
significantly improve the stddev so I'm not going to let that finish.

my recommendation: proceed with removing 15-bit bignum digit support.
30-bit only future with simpler better code here we come.

-gps


>
> --
> Mark
>


[Python-Dev] Re: Is anyone using 15-bit PyLong digits (PYLONG_BITS_IN_DIGIT=15)?

2021-12-31 Thread Gregory P. Smith
Regarding ABI issues, I don't see anything obvious either. I was probably
misremembering the potential marshal issue, which was addressed.

struct _longobject (the implementation details behind the public
PyLongObject typedef name) and the digit definition are excluded from
Py_LIMITED_API.  So per https://docs.python.org/3.10/c-api/stable.html we
are free to change the struct layout.  yay.

Regardless, I have confirmed that sys.getsizeof(0) returns the same value
(12) with both 15-bit and 30-bit (--enable-big-digits) builds on 32-bit
architectures (I checked arm and x86).

So it'd only "break" something depending on non-limited, minor-version-specific
ob_digit definitions being used on the wrong Python version.  Not a big deal.
People wanting stability there need to use Py_LIMITED_API in their extension
code as per our existing policy.

The getsizeof increments go from 12, 14, 16, 18, 20 (15-bit) to 0 digits = 12,
1 digit = 16, 2 digits = 20 (30-bit), as expected when doubling the digit
size, but this isn't a problem.  Memory-allocator-wise, the same amount of ram
is going to be consumed by the same magnitude of int regardless of how it
gets built; nothing allocates and tracks at a 2-byte granularity.
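
(A sketch of how to observe those increments on any build; the exact byte
counts depend on the build's digit size and pointer width:)

```python
import sys

# Sizes of ints needing 0..5 digits on a 15-bit build (0..3 digits on a
# 30-bit build); the step between sizes shows the digit granularity.
for n in (0, 1, 2**15, 2**30, 2**45, 2**60):
    print(n.bit_length(), sys.getsizeof(n))
```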

>> Perhaps I missed it, but maybe an action item would be to add a
>> buildbot which configures for 15-bit PyLong digits.
>>
>
> Yep, good point. I was wrong to say that  "15-bit builds don't appear to
> be exercised by the buildbots": there's a 32-bit Gentoo buildbot that's
> (implicitly) using 15-bit digits, and the GitHub Actions Windows/x86 build
> also uses 15-bit digits. I don't think we have anything that's explicitly
> using the `--enable-big-digits` option, though.
>

My raspbian bot covers the 32-bit use case we primarily care about.  (I
should promote that one to stable.)

I suggest just going for it. Remove 15-bit digit support and clean up the
code. My guess is that there will not be a meaningful performance impact on
32-bit hosts. I'm happy to run some tests on a rpi once you've got a PR up
if you don't already have a dozen of those laying around. :)

-gps


>
> --
> Mark


[Python-Dev] Re: Is anyone using 15-bit PyLong digits (PYLONG_BITS_IN_DIGIT=15)?

2021-12-30 Thread Gregory P. Smith
On Thu, Dec 30, 2021 at 12:42 PM Gregory P. Smith  wrote:

>
> On Thu, Dec 30, 2021 at 4:47 AM Mark Dickinson  wrote:
>
>> tl;dr: I'd like to deprecate and eventually remove the option to use
>> 15-bit digits in the PyLong implementation. Before doing so, I'd like to
>> find out whether there's anyone still using 15-bit PyLong digits, and if
>> so, why they're doing so.
>>
>> History: the use of 30-bit digits in PyLong was introduced for Python 3.1
>> and Python 2.7, to improve performance of int (Python 3) / long (Python 2)
>> arithmetic. At that time, we retained the option to use 15-bit digits, for
>> two reasons:
>>
>> - (1) use of 30-bit digits required C99 features (uint64_t and friends)
>> at a time when we hadn't yet committed to requiring C99
>> - (2) it wasn't clear whether 30-bit digits would be a performance win on
>> 32-bit operating systems
>>
>> Twelve years later, reason (1) no longer applies, and I suspect that:
>>
>> - No-one is deliberately using the 15-bit digit option.
>> - There are few machines where using 15-bit digits is faster than using
>> 30-bit digits.
>>
>> But I don't have solid data on either of these suspicions, hence this
>> post.
>>
>> Removing the 15-bit digit option would simplify the code (there's
>> significant mental effort required to ensure we don't break things for
>> 15-bit builds when modifying Objects/longobject.c, and 15-bit builds don't
>> appear to be exercised by the buildbots), remove a hidden compatibility
>> trap (see b.p.o. issue 35037), widen the applicability of the various fast
>> paths for arithmetic operations, and allow for some minor fast-path
>> small-integer optimisations based on the fact that we'd be able to assume
>> that presence of *two* extra bits in the C integer type rather than just
>> one. As an example of the latter: if `a` and `b` are PyLongs that fit in a
>> single digit, then with 15-bit digits and a 16-bit `digit` and `sdigit`
>> type, `a + b` can't currently safely (i.e., without undefined behaviour
>> from overflow) be computed with the C type `sdigit`. With 30-bit digits and
>> a 32-bit `digit` and `sdigit` type, `a + b` is safe.
>>
>> Mark
>>
>
> tying the thread together: this is https://bugs.python.org/issue45569
>
> Check 32-bit builds.  When I pushed for the 30-bit digit implementation, I
> wanted it for all builds but if I recall correctly it *might* have
> changed the minimum structure size for PyLong which could've been an ABI
> issue?  double check that.  32-bit is still important. Raspbian. rpi, rpi
> zero, and first rev rpi2 are 32-bit arm architectures so even with 64-bit
> raspbian on the horizon, that won't be the norm.  and for those, memory
> matters so a 32-bit userspace on 64-bit capable hardware is still preferred
> for small pointer sizes on the majority which have <=4GiB ram.
>
> I believe performance was the other concern, 30-bit happens to perform
> great on 32-bit x86 as it has 32*32->64 multiply hardware.  Most 32-bit
> architectures do not AFAIK, making 30 bit digit multiplies less efficient.
> And 32-bit x86 was clearly on its way out by the time we adopted 30-bit so
> it was simpler to just not do it on that dying snowflake of a platform.
> (test it on raspbian - it's the one that matters)
>
> Regardless of possible issues to work out, I'd love us to have a simpler
> 30-bit only implementation.
>
> Granted, modern 64-bit hardware often has 64*64->128 bit multiply hardware
> so you can imagine going beyond 30 and winding up in complexity land
> again.  at least the extra bits would be >=2 at that point.  The reason for
> digits being a multiple of 5 bits should be revisited vs its original
> intent and current state of the art "bignum optimized for mostly small
> numbers" at some point as well.
>
> -gps
>
>
Historical context of adding the 30-bit support (also driven primarily by
Mark, no surprise!) in late 2008 early 2009:
https://bugs.python.org/issue4258 (and https://codereview.appspot.com/14105)

-gps


[Python-Dev] Re: Is anyone using 15-bit PyLong digits (PYLONG_BITS_IN_DIGIT=15)?

2021-12-30 Thread Gregory P. Smith
On Thu, Dec 30, 2021 at 4:47 AM Mark Dickinson  wrote:

> tl;dr: I'd like to deprecate and eventually remove the option to use
> 15-bit digits in the PyLong implementation. Before doing so, I'd like to
> find out whether there's anyone still using 15-bit PyLong digits, and if
> so, why they're doing so.
>
> History: the use of 30-bit digits in PyLong was introduced for Python 3.1
> and Python 2.7, to improve performance of int (Python 3) / long (Python 2)
> arithmetic. At that time, we retained the option to use 15-bit digits, for
> two reasons:
>
> - (1) use of 30-bit digits required C99 features (uint64_t and friends) at
> a time when we hadn't yet committed to requiring C99
> - (2) it wasn't clear whether 30-bit digits would be a performance win on
> 32-bit operating systems
>
> Twelve years later, reason (1) no longer applies, and I suspect that:
>
> - No-one is deliberately using the 15-bit digit option.
> - There are few machines where using 15-bit digits is faster than using
> 30-bit digits.
>
> But I don't have solid data on either of these suspicions, hence this post.
>
> Removing the 15-bit digit option would simplify the code (there's
> significant mental effort required to ensure we don't break things for
> 15-bit builds when modifying Objects/longobject.c, and 15-bit builds don't
> appear to be exercised by the buildbots), remove a hidden compatibility
> trap (see b.p.o. issue 35037), widen the applicability of the various fast
> paths for arithmetic operations, and allow for some minor fast-path
> small-integer optimisations based on the fact that we'd be able to assume
> that presence of *two* extra bits in the C integer type rather than just
> one. As an example of the latter: if `a` and `b` are PyLongs that fit in a
> single digit, then with 15-bit digits and a 16-bit `digit` and `sdigit`
> type, `a + b` can't currently safely (i.e., without undefined behaviour
> from overflow) be computed with the C type `sdigit`. With 30-bit digits and
> a 32-bit `digit` and `sdigit` type, `a + b` is safe.
>
> Mark
>
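
To make the headroom point above concrete, a plain-Python back-of-the-envelope
check (an illustration, not CPython code):

```python
# Adding two maximal single-digit values must fit in the signed C type
# that holds a digit (the "sdigit").
for digit_bits, sdigit_bits in ((15, 16), (30, 32)):
    worst_sum = 2 * (2**digit_bits - 1)
    sdigit_max = 2**(sdigit_bits - 1) - 1
    print(digit_bits, "safe" if worst_sum <= sdigit_max else "overflow")
# 15-bit digits: 65534 > 32767, so a 16-bit sdigit can overflow (UB in C).
# 30-bit digits: 2147483646 <= 2147483647, so a 32-bit sdigit is safe.
```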

tying the thread together: this is https://bugs.python.org/issue45569

Check 32-bit builds.  When I pushed for the 30-bit digit implementation, I
wanted it for all builds, but if I recall correctly it *might* have changed
the minimum structure size for PyLong, which could've been an ABI issue?
Double check that.  32-bit is still important: Raspbian.  The rpi, rpi zero,
and first-rev rpi2 are 32-bit arm architectures, so even with 64-bit raspbian
on the horizon, that won't be the norm.  And for those, memory matters, so a
32-bit userspace on 64-bit-capable hardware is still preferred for small
pointer sizes on the majority which have <=4GiB ram.

I believe performance was the other concern: 30-bit happens to perform
great on 32-bit x86 as it has 32*32->64 multiply hardware.  Most 32-bit
architectures do not, AFAIK, making 30-bit digit multiplies less efficient.
And 32-bit x86 was clearly on its way out by the time we adopted 30-bit, so
it was simpler to just not do it on that dying snowflake of a platform.
(Test it on raspbian - it's the one that matters.)

Regardless of possible issues to work out, I'd love us to have a simpler
30-bit only implementation.

Granted, modern 64-bit hardware often has 64*64->128 bit multiply hardware
so you can imagine going beyond 30 and winding up in complexity land
again.  at least the extra bits would be >=2 at that point.  The reason for
digits being a multiple of 5 bits should be revisited vs its original
intent and current state of the art "bignum optimized for mostly small
numbers" at some point as well.

-gps



>
>
> *References*
>
> Related b.p.o. issue: https://bugs.python.org/issue45569
> MinGW compatibility issue: https://bugs.python.org/issue35037
> Introduction of 30-bit digits: https://bugs.python.org/issue4258


[Python-Dev] Re: Python release announcement format

2021-12-14 Thread Gregory P. Smith
On Tue, Dec 14, 2021 at 9:06 AM Yann Droneaud  wrote:

> Hi,
>
> I'm not familiar with the Python release process, but looking at the latest 
> release
> https://www.python.org/downloads/release/python-3101/
>
> we can see MD5 is still used ... which doesn't sound right in 2021 ...
> especially since we proved it's possible to build different .tar.gz that have
> the same MD5
>
> https://twitter.com/ydroneaud/status/1448659749604446211
> https://twitter.com/angealbertini/status/1449736035110461443
>
> You would reply there's OpenPGP / GnuPG signature. But then I would like to 
> raise
> another issue regarding the release process:
>
> As the announcement on comp.lang.python.announce / 
> python-announce-l...@python.org
> doesn't record the release digest / release signature, the operators behind
> https://www.python.org/downloads/release/python-3101/ are free to change the
> release content at any time, provided there's a valid signature. And there
> will be no way for us to check the release wasn't modified after the
> announcement.
>
>
For source archives, one can diff the contents of the source download vs
those of the equivalent tag in the git repository. For binaries, well,
there's already a ton of trust involved in accepting a binary from anyone.
But agreed having the currently secure hashes in the announcement email
would be good.
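
For reference, producing the digest to include (or verify) takes a few lines
of stdlib; a sketch, with an illustrative file name:

```python
import hashlib

def sha256sum(path: str) -> str:
    # Hash the release artifact in chunks to keep memory use flat.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

print(sha256sum("Python-3.10.1.tar.xz"))  # compare against the announcement
```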


> It would be great if https://www.python.org/dev/peps/pep-0101/ would be 
> improved
> from the naive:
>
>  "Write the announcement for the mailing lists.  This is the fuzzy bit 
> because not
>   much can be automated.  You can use an earlier announcement as a template, 
> but
>   edit it for content!"
>
> to require the release announcement to record release archive digests as
> SHA-2 256 (added points if the announcement is signed), or the armored
> OpenPGP signatures (but that's a lot of base64 characters).
>
> Should I open a bug for this issue?
>
>
Makes sense; it is a pretty small change to make to the announcement
format. Filed: https://bugs.python.org/issue46077

-gps

> Regards.
>
> --
> Yann Droneaud
> OPTEYA
>
>


[Python-Dev] Re: Packing a long list of numbers into memory

2021-10-10 Thread Gregory P. Smith
On Sun, Oct 10, 2021 at 7:25 AM Facundo Batista 
wrote:

> Hello everyone!
>
> I need to pack a long list of numbers into shared memory, so I thought
> about using `struct.pack_into`.
>
> Its signature is
>
> struct.pack_into(format, buffer, offset, v1, v2, ...)
>
> I have a long list of nums (several millions), ended up doing the
> following:
>
> struct.pack_into(f'{len(nums)}Q', buf, 0, *nums)
>
> However, passing all nums as `*args` is very inefficient [0]. So I
> started wondering why we don't have something like:
>
> struct.pack_into(format, buffer, offset, values=values)
>
> which would receive the list of values directly.
>
> Is that because my particular case is very uncommon? Or maybe we *do*
> want this but we don't have it yet? Or do we already have a better way
> of doing this?
>
> Thanks!
>
> [0] https://linkode.org/#95ZZtVCIVtBbx72dURK7a4


My first reaction on seeing things like this is "Why not use a numpy.array?"

Does what you have really need to be a long list?  If so, that's already a
huge amount of Python object storage as it is. Is it possible for your
application to have kept that in a numpy array for the entirety of the data
lifetime?
https://numpy.org/doc/stable/reference/routines.array-creation.html

I'm not saying the stdlib shouldn't have a better way to do this by not
abusing *args as an API, just that other libraries solve the larger problem
of data-memory-inefficiency in their own way already.
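
A hedged sketch of the numpy route, assuming the data can live in an array
from the start (the names here are illustrative, not from Facundo's code):

```python
import numpy as np

# Keep the numbers as machine-width unsigned ints from the start instead
# of a multi-million element list of Python int objects.
nums = np.arange(1_000_000, dtype=np.uint64)  # stand-in for the real data

# Copying into a writable buffer (e.g. a shared memory block) is then a
# single slice assignment rather than a call with millions of *args.
buf = bytearray(nums.nbytes)
np.frombuffer(buf, dtype=np.uint64)[:] = nums
```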

*(neat tricks from others regarding stdlib array, shm, & memoryview even
if... not ideal)*

-gps


[Python-Dev] Re: PEP 654 except* formatting

2021-10-03 Thread Gregory P. Smith
On Sun, Oct 3, 2021 at 10:47 AM Łukasz Langa  wrote:

>
>  I know it's a bit late for bikeshedding this thing so if we want to be
> conservative and stick to the current syntactical options already defined
> in PEP 654, I'm voting Option 2 (given the awkwardness of the *(E1, E2)
> example).
>

+1 on the `except* E` Option 2 syntax. It better conveys its uniqueness and
non-relation to other meanings of *.
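
For readers skimming the thread, the Option 2 spelling looks like this
(example written against the semantics PEP 654 ended up with, Python 3.11+):

```python
# except* matches against the contents of a raised ExceptionGroup;
# more than one block can run for the same group.
try:
    raise ExceptionGroup("eg", [ValueError(1), TypeError("t")])
except* ValueError as e:
    print("handled", e.exceptions)
except* TypeError as e:
    print("handled", e.exceptions)
```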

Someone mentioned allowing both and letting people decide.  Whatever is
chosen, please not that.  There should be only one way to write this.  That
avoids style arguments when no auto-formatter is involved.

-gps


>
> - Ł


[Python-Dev] Re: PEP 667: Consistent views of namespaces

2021-08-23 Thread Gregory P. Smith
Just adding a datapoint, searching our internal codebase at work, I only
found two things that I'd consider to be uses of PyEval_GetLocals() outside
of CPython itself:

https://github.com/earwig/mwparserfromhell/blob/develop/src/mwparserfromhell/parser/ctokenizer/tokenizer.c#L220
https://github.com/numba/numba/blob/master/numba/_dispatcher.cpp#L664

The bulk of uses are naturally with CPython's own ceval.c and similar.

-gps

On Mon, Aug 23, 2021 at 9:02 AM Guido van Rossum  wrote:

> On Mon, Aug 23, 2021 at 8:46 AM Mark Shannon  wrote:
>
>> Hi Guido,
>>
>> On 23/08/2021 3:53 pm, Guido van Rossum wrote:
>> > On Mon, Aug 23, 2021 at 4:38 AM Mark Shannon > > > wrote:
>> >
>> > Hi Nick,
>> >
>> > On 22/08/2021 4:51 am, Nick Coghlan wrote:
>> >
>> >  > If Mark's claim that PyEval_GetLocals() could not be fixed was
>> > true then
>> >  > I would be more sympathetic to his proposal, but I know it isn't
>> > true,
>> >  > because it still works fine in the PEP 558 implementation (it
>> even
>> >  > immediately sees changes made via proxies, and proxies see
>> > changes to
>> >  > extra variables). The only truly unfixable public API is
>> >  > PyFrame_LocalsToFast().
>> >
>> > You are making claims that seem inconsistent with each other.
>> > Namely, you are claiming that:
>> >
>> > 1. That the result of locals() is ephemeral.
>> > 2. That PyEval_GetLocals() returns a borrowed reference.
>> >
>> > This seems impossible, as you can't return a borrowed reference to
>> > an emphemeral object. That's just a pointer to freed memory.
>> >
>> > Do `locals()` and `PyEval_GetLocals()` behave differently?
>> >
>> >
>> > That is my understanding, yes. in PEP 558 locals() returns a snapshot
>> > dict, the Python-level f_locals property returns a fresh proxy that has
>> > no state except a pointer to the frame, and PyEval_GetLocals() returns
>> a
>> > borrowed reference to the dict that's stored on the frame's C-level
>> > f_locals attribute
>>
>> Can we avoid describing the C structs in any of these PEPs?
>>
>> It confuses readers having Python attributes and "C-level attributes"
>> (C struct fields?).
>> It also restricts the implementation unnecessarily.
>>
>> (E.g. the PyFrameObject doesn't have a `f_locals` field in 3.11:
>>
>> https://github.com/python/cpython/blob/main/Include/cpython/frameobject.h#L7
>> )
>>
>
> I'd be happy to. Nick's PEP still references it (and indeed it is very
> confusing) and I took it from him. And honestly it would be nice to have a
> specific short name for it, rather than circumscribing it with "an internal
> dynamic snapshot stored on the frame object " :-)
>
>
>> >
>> > (In my "crazy" proposal all that is the same.)
>>
>> >
>> > Is the result of `PyEval_GetLocals()` cached, but `locals()` not?
>> >
>> >
>> > I wouldn't call it a cache -- deleting it would affect the semantics,
>> > not just the performance. But yes, it returns a reference to an object
>> > that is owned by the frame, just as it does in 3.10 and before.
>> >
>> > If that were the case, then it is a bit confusing, but could work.
>> >
>> >
>> > Yes, see my "crazy" proposal.
>> >
>> > Would PyEval_GetLocals() be defined as something like this?
>> >
>> > (add _locals_cache attribute to the frame which is initialized to
>> NULL).
>> >
>> > def PyEval_GetLocals():
>> >   frame._locals_cache = locals()
>> >   return borrow(frame._locals_cache)
>> >
>> >
>> > Nah, the dict returned by PyEval_GetLocals() is stored in the frame's
>> > C-level f_locals attribute, which is consulted by the Python-level
>> > f_locals proxy -- primarily to store "extra" variables, but IIUC in
>> > Nick's latest version it is also still used to cache by that proxy.
>> > Nick's locals() just returns dict(sys._getframe().f_locals).
>>
>> The "extra" variables must be distinct from the result of locals() as
>> that includes both extras and "proper" variables.
>> If we want to cache the locals(), it needs to be distinct from the extra
>> variables.
>>
>
> I don't care that much about caching locals(), but it seems we're bound to
> cache any non-NULL result from PyEval_GetLocals(), since it returns a
> borrowed reference. So they may be different things, with different
> semantics, if we don't cache locals().
>
>
>> A debugger setting extra variables in a function that that is also
>> accessed by a C call to PyEval_GetLocals() is going to be incredibly
>> rare. Let's not worry about efficiency here.
>>
>
> Agreed.
>
> >
>> > None of this is clear (at least not to me) from PEP 558.
>> >
>> >
>> > One problem with PEP 558 is that it's got too many words, and it's
>> > lacking a section that crisply describes the semantics of the proposed
>> > implementation. I've suggested to Nick that he add a section with
>> > pseudo-code for the implementation, like you did in yours.
>> >
>> > (PS, did you read my PS 

[Python-Dev] Re: PEP 467 feedback from the Steering Council

2021-08-22 Thread Gregory P. Smith
On Tue, Aug 10, 2021 at 3:48 PM Christopher Barker 
wrote:

> On Tue, Aug 10, 2021 at 3:00 PM  wrote:
>
>> The history of bytes/bytearray is a dual-purpose view.  It can be used in
>> a string-like way to emulate Python 2 string handling (hence all the usual
>> string methods and a repr that displays in a string-like fashion).  It can
>> also be used as an array of numbers, 0 to 255 (hence the list methods and
>> having an iterator of ints).  ISTM that the authors of this PEP reject or
>> want to discourage the latter use cases.
>>
>
> I didn't read it that way, but if so, please no, I'd rather see the former
> use cases discouraged. ISTM that the Py2 string handling is still needed
> for working with mixed binary / text data -- but that should be a pretty
> specialized use case. spelling the way to create a byte, byte() sure makes
> more sense in any other context.
>
>
>> ... anything where a C programmer would use an array of unsigned chars).
>>
>
> or any programmer would use an array of unsigned 8bit integers :-) numpy
> spells it: `np.uint8`, and the type in the C99 stdint.h is `uint8_t`.
> My point is that for anyone not an "old time" C programmer, or even a
> Python2 programmer, the  "character is an unsigned 8 bit int" concept is
> alien and confusing, not a helpful mnemonic.
>
>
>> For example, creating a single byte with bytes([0x1f]) isn't pleasant,
>> obvious, or fast.
>>
>
> no, though bytes([31]) isn't horrible ;-)   (despite coding for over four
> decades, I'm still not comfortable with hex notation)
>
> I say it's not horrible, because bytes is a Sequence of bytes (or integer
> values between 0 and 255), initializing it with an iterable seems pretty
> reasonable, that's how we initialize most (all?) other sequences after all.
> And compatible with array.array and numpy arrays.
>

I consider the bytes([31]) notation to be horrible API design because a
simple, easy-to-make typo (omitting the [] or using () and forgetting the
tupleizing comma) turns it into a different valid call with an entirely
different meaning: bytes([31]) vs bytes((31)) vs bytes(31).
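
The trap is easy to demonstrate in any Python:

```python
assert bytes([31]) == b'\x1f'       # one byte with value 31
assert bytes((31,)) == b'\x1f'      # same, with a proper one-tuple
assert bytes((31)) == b'\x00' * 31  # (31) is just 31: thirty-one zero bytes!
assert bytes(31) == b'\x00' * 31    # the same zero-filled buffer
```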

It's also ugly to anyone who thinks about what bytecode is generated and
executed in order to do it.  An entire new list object with a single
element referring to a tiny int is created and destroyed just to create a
b'\037' object?  An optimizer pass to fix that up at the bytecode level
isn't easy, as it can only be done when it can prove that `bytes` has not
been reassigned to something other than the builtin - near impossible in a
lot of code.  bytes.fromint(31) isn't much better in the bytecode regard,
but at least a temporary list is not being created.

As much as I think that bytes(size: int) was a bad idea to have as an API -
bytearray(size: int) is fine and useful as it is mutable - that ship has
sailed and getting rid of it would break some odd code.  It doesn't have much
use, so adding fromsize(size: int) methods doesn't sound very compelling as
it just adds yet another way to do the same thing.  We should just live with
that specific wart.

`bchr` as a builtin... I'm with the others on saying no to any new builtin
that isn't expected to see frequent use.  bchr won't see frequent use.

`bytes.fromint` seems fine.  others are proposing `bytes.byte` for that.  I
don't *like* to argue over names (the last stage of anything) but I do need
to point out how that sounds to read.  It falls victim to API stuttering.
"bytes dot byte" or "bytes byte" doesn't convey much to a reader in English
as the difference is a subtle "s".  "bytes dot from int" or "bytes from
int" is quite clear.  (avoiding stuttering in API design was popularized by
golang - it's a good thing to strive for in any language)  It's times like
this that I wish Python had chosen consistent camelCase, CapWords, or
snake_case in all API names as conjoinedwords aren't great. But they are
sadly consistent with our past sins.

One thing never mentioned in the PEP: if you expect a primary use of
fromint (aka the bchr builtin that isn't going to happen) to be calls on
constant values, why are we adding name lookups and function calls for
this?  Why not address the elephant in the room and allow for decimal
values to be written as an escape sequence within bytes literals?

b'\d31' for example, to say "decimal byte 31".  Proposal: only values 0-255
with no leading zero should be accepted when parsing such an escape.  (Do
not bother adding the same feature for codepoints in unicode strs; leave
that to later if someone shows actual demand.)  This can't address the
bytearray need, but that's been true of bytearray for ages; a common way to
create them is via a copy from transient bytes objects.  bytearray(b'\d31')
isn't much different than bytearray.fromint(31) - one less name lookup.
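
To spell out the compatibility hazard raised below: today an unrecognized
escape in a bytes literal is kept literally (with a warning), so b'\d31'
already has a meaning:

```python
import warnings

with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # silence the invalid-escape warning
    val = eval(r"b'\d31'")
# Currently four bytes (backslash, 'd', '3', '1'), not decimal 31:
assert val == b'\\d31'
```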

Why not add a \d escape? Introducing a new escape is fraught with peril, as
existing \d's within b'' literals in code could change meaning.  A backwards
compatibility fail - but one that is easy to check for 

[Python-Dev] Re: Making code object APIs unstable

2021-08-17 Thread Gregory P. Smith
+cc: cython-devel

background reading for those new to the thread:
https://mail.python.org/archives/list/python-dev@python.org/thread/ZWTBR5ESYR26BUIVMXOKPFRLGGYDJSFC/

On Tue, Aug 17, 2021 at 9:47 AM Victor Stinner  wrote:

> Since Cython is a common consumer of this C API, can somone please dig
> into Cython to see exactly what it needs in terms of API? How does
> Cython create all arguments of the __Pyx_PyCode_New() macro? Does it
> copy an existing function to only override some fields, something like
> CodeType.replace(field=new_value)?
>
> If possible, I would prefer that Cython only uses the *public* C API.
> Otherwise, it will be very likely that Cython will break at every
> single Python release. Cython has a small team to maintain the code
> base, whereas CPython evolves much faster with a larger team.
>
> Victor
>

I don't claim knowledge of Cython internals, but the two places it appears
to call its __Pyx_PyCode_New macro are:

https://github.com/cython/cython/blob/master/Cython/Utility/Exceptions.c#L769
in __Pyx_CreateCodeObjectForTraceback() - this one already has a `#if
CYTHON_COMPILING_IN_LIMITED_API` code path option in it.

 and

https://github.com/cython/cython/blob/master/Cython/Compiler/ExprNodes.py#L9722
in CodeObjectNode.generate_result_code(), which creates PyCodeObjects for
CyFunction instances per its comment.  Slightly described in this comment:
http://google3/third_party/py/cython/files/Cython/Compiler/ExprNodes.py?l=397.
I don't see anything obvious mentioning the limited API in that code
generator.
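
On the pure-Python side, the clone-with-replacement shape Victor asks about
already exists as CodeType.replace() (3.8+); a minimal sketch:

```python
def f():
    return 1

# Copy f's code object, overriding a single field; all other fields are
# carried over unchanged.
f.__code__ = f.__code__.replace(co_name="g")
print(f.__code__.co_name)  # -> g
```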

it'd be best to loop in Cython maintainers for more of an idea of Cython's
intents and needs with PyCode_New APIs.  I've cc'd cython-de...@python.org.

-Greg


> On Tue, Aug 17, 2021 at 8:51 AM Gregory P. Smith  wrote:
> >
> > Doing a search of a huge codebase (work), the predominant user of
> PyCode_New* APIs appears to be checked in Cython generated code (in all
> sorts of third_party OSS projects). It's in the boilerplate that Cython
> extensions make use of via it's __Pyx_PyCode_New macro.
> https://github.com/cython/cython/blob/master/Cython/Utility/ModuleSetupCode.c#L470
> >
> > I saw very few non-Cython uses.  There are some, but at a very quick
> first glance they appear simple - easy enough to reach out to the projects
> with a PR to update their code.
> >
> > The Cython use will require people to upgrade Cython and regenerate
> their code before they can use the Python version that changes these. That
> is not an uncommon thing for Cython. It's unfortunate that many projects on
> ship generated sources rather than use Cython at build time, but that isn't
> _our_ problem to solve. The more often we change internal APIs that things
> depend on, the more people will move their projects towards doing the right
> thing with regards to either not using said APIs or rerunning an up to date
> code generator as part of their build instead of checking in generated
> unstable API using sources.
> >
> > -gps
> >
> >
> > On Mon, Aug 16, 2021 at 8:04 PM Guido van Rossum 
> wrote:
> >>
> >> On Mon, Aug 16, 2021 at 4:44 PM Nick Coghlan 
> wrote:
> >>>
> >>> [...]
> >>> A cloning-with-replacement API that accepted the base code object and
> the "safe to modify" fields could be a good complement to the API
> deprecation proposal.
> >>
> >>
> >> Yes (I forgot to mention that).
> >>
> >>>
> >>> Moving actual "from scratch" code object creation behind the
> Py_BUILD_CORE guard with an underscore prefix on the name would also make
> sense, since it defines a key piece of the compiler/interpreter boundary.
> >>
> >>
> >> Yeah, we have _PyCode_New() for that.
> >>
> >>>
> >>> Cheers,
> >>> Nick.
> >>>
> >>> P.S. Noting an idea that won't work, in case anyone else reading the
> thread was thinking the same thing: a "PyType_FromSpec" style API won't
> help here, as the issue is that the compiler is now doing more work up
> front and recording that extra info in the code object for the interpreter
> to use. There is no way to synthesise that info if it isn't passed to the
> constructor, as it isn't intrinsically recorded in the opcode sequence.
> >>
> >>
> >> That's the API style that _PyCode_New() uses (thanks to Eric who IIRC
> pushed for this and implemented it). You gave me an idea now: the C
> equivalent to .replace() could use the same input structure; one can leave
> fields NULL that should be copied from the original unmodified.
> >>
> >> --
> >> --Guido van Rossum (python

[Python-Dev] Re: Making code object APIs unstable

2021-08-17 Thread Gregory P. Smith
Doing a search of a huge codebase (work), the predominant user of
PyCode_New* APIs appears to be checked-in Cython-generated code (in all
sorts of third-party OSS projects). It's in the boilerplate that Cython
extensions make use of via its __Pyx_PyCode_New macro.
https://github.com/cython/cython/blob/master/Cython/Utility/ModuleSetupCode.c#L470

I saw very few non-Cython uses.  There are some, but at a very quick first
glance they appear simple - easy enough to reach out to the projects with a
PR to update their code.

The Cython use will require people to upgrade Cython and regenerate their
code before they can use the Python version that changes these. That is not
an uncommon thing for Cython. It's unfortunate that many projects ship
generated sources rather than use Cython at build time, but that isn't
_our_ problem to solve. The more often we change internal APIs that things
depend on, the more people will move their projects towards doing the right
thing: either not using said APIs, or rerunning an up-to-date code generator
as part of their build instead of checking in sources that use unstable APIs.

-gps


On Mon, Aug 16, 2021 at 8:04 PM Guido van Rossum  wrote:

> On Mon, Aug 16, 2021 at 4:44 PM Nick Coghlan  wrote:
>
>> [...]
>> A cloning-with-replacement API that accepted the base code object and the
>> "safe to modify" fields could be a good complement to the API deprecation
>> proposal.
>>
>
> Yes (I forgot to mention that).
>
>
>> Moving actual "from scratch" code object creation behind the
>> Py_BUILD_CORE guard with an underscore prefix on the name would also make
>> sense, since it defines a key piece of the compiler/interpreter boundary.
>>
>
> Yeah, we have _PyCode_New() for that.
>
>
>> Cheers,
>> Nick.
>>
>> P.S. Noting an idea that won't work, in case anyone else reading the
>> thread was thinking the same thing: a "PyType_FromSpec" style API won't
>> help here, as the issue is that the compiler is now doing more work up
>> front and recording that extra info in the code object for the interpreter
>> to use. There is no way to synthesise that info if it isn't passed to the
>> constructor, as it isn't intrinsically recorded in the opcode sequence.
>>
>
> That's the API style that _PyCode_New() uses (thanks to Eric who IIRC
> pushed for this and implemented it). You gave me an idea now: the C
> equivalent to .replace() could use the same input structure; one can leave
> fields NULL that should be copied from the original unmodified.
>
> --
> --Guido van Rossum (python.org/~guido)
> *Pronouns: he/him **(why is my pronoun here?)*
> 


[Python-Dev] Re: Does anyone use threading debug PYTHONTHREADDEBUG=1 env var? Can I remove it?

2021-07-07 Thread Gregory P. Smith
On Wed, Jul 7, 2021 at 2:28 AM Victor Stinner  wrote:

> Hi,
>
> Does anyone use threading debug PYTHONTHREADDEBUG=1 env var on a
> Python debug build? If not, can I just remove it?
>
> --
>
> To fix a race condition at thread exit on Linux using the glibc, I
> removed calls to pthread_exit() (PyThread_exit_thread()) in the
> _thread module:
>
>https://bugs.python.org/issue44434
>
> A side effect of this change is the removal of the
> "PyThread_exit_thread called" threading debug log when using
> PYTHONTHREADDEBUG=1 environment variable.
>
> I never used PYTHONTHREADDEBUG. I just tried it and it produces tons
> of output in stdout about locks. It looks basically useless because it
> produces way too many logs, and it pollutes stdout (ex: most Python
> tests fail when it's enabled).
>
> This debug mode requires to build Python in debug mode (./configure
> --with-pydebug):
>
>https://docs.python.org/dev/using/configure.html#python-debug-build
>
> IMO there are enough external debugging tools to debug threading
> issues. Python no longer has to come with its built-in logs.
>
> I propose to deprecate the feature in Python 3.11 and remove it in 2
> releases (Python 3.13).
>

I agree with its removal.


> Victor
> --
> Night gathers, and now my watch begins. It shall not end until my death.


[Python-Dev] Re: Proposal: declare "unstable APIs"

2021-06-03 Thread Gregory P. Smith
Overall agreement. Your list of ast and code objects and bytecode
instructions covers things that I'd _hope_ people already consider unstable,
so declaring them as such just makes sense if we're not doing that already.
The ideal way to declare an API as unstable is to constantly change it in a
breaking manner.  With every release and potentially even within some patch
releases when the point really needs to be made.  Even when you didn't have
a reason to change anything.  If you don't do that, people are going to
find it convenient, discover stability, assume it exists, depend on it, and
complain about breakage no matter what was stated.
https://www.hyrumslaw.com/

One obvious-in-hindsight question: why are any of these APIs even public?
They all deserve underscore-prefixed names to highlight their private-ness
and potential instability.

-gps

On Thu, Jun 3, 2021 at 10:46 AM Guido van Rossum  wrote:

> In practice, provisional APIs have been quite stable. The term
> "provisional" was introduced for PEPs that introduce new modules, where we
> wanted to allow some wiggle room for changes based on experience with using
> the new module during the first release cycle where it's made available.
> You can think of it as a sort of extended beta period for that module only.
> Generally provisional status only lasts for one release cycle.
>
> "Unstable" has a different meaning -- it's for APIs (including modules)
> that are likely to change in every release (or most releases, anyway).
> Users are not discouraged from using these, but they *must* be mindful of
> their code breaking with every new release.
>
> I could imagine some unstability to allow incompatible changes in bugfix
> releases, though for my main use case it would be sufficient to only allow
> those in minor releases.
>
> On Thu, Jun 3, 2021 at 10:32 AM Senthil Kumaran 
> wrote:
>
>> On Thu, Jun 03, 2021 at 10:10:53AM -0700, Guido van Rossum wrote:
>> > This is not a complete thought yet, but it occurred to me that while we
>> have
>> > deprecated APIs (which will eventually go away), and provisional APIs
>> (which
>> > must mature a little before they're declared stable), and stable APIs
>> (which
>> > everyone can rely on), it might be good to also have something like
>> *unstable*
>> > APIs, which will continually change without ever going away or
>> stabilizing.
>>
>> The first grey area will between Provisional API vs Unstable API.
>>
>> Do developers consider provisional APIs as stable and start relying upon
>> heavily? I am not sure.
>>
>> I also lack the experience for the use-cases that you are thinking
>> about.
>>
>> --
>> Senthil
>>
>
>
> --
> --Guido van Rossum (python.org/~guido)
> *Pronouns: he/him **(why is my pronoun here?)*
> 


[Python-Dev] Re: GDB not breaking at the right place

2021-05-25 Thread Gregory P. Smith
On Tue, May 25, 2021 at 7:49 PM Inada Naoki  wrote:

> On Tue, May 25, 2021 at 5:38 AM Guido van Rossum  wrote:
> >
> > To the contrary, I think if you want the CI jobs to be faster you should
> add the CFLAGS to the configure call used to run the CI jobs.
> >
>
> -Og makes it faster not only CI jobs, but also everyday "edit code and
> run `make test` with all assertions" cycles.
>
> I don't have opinion which should be default. (+0 for -O0).
> I use -Og by default and use -O0 only when I need anyway.
>

Agreed, what we do today is already fine.  -Og or -O1 are decent options
for fast unoptimized builds that lead to increased productivity in the
common case.
Actually firing up a debugger on CPython's C code is not the common thing
for a developer to do.
When someone wants to do that, they should build with the relevant compiler
flags for that purpose.  ie: Skip should do this.  If there is confusion
about the meaning of --with-pydebug, that's just a documentation/help-text
update to be made.

-gps

FWIW, we can disable optimization per-file basis during debugging.
>
>   // Put this line on files you want to debug.
>   #pragma GCC optimize ("O0")
>
> Regards,
>
> --
> Inada Naoki  


[Python-Dev] Re: Future PEP: Include Fine Grained Error Locations in Tracebacks

2021-05-11 Thread Gregory P. Smith
On Tue, May 11, 2021 at 3:33 PM Mike Miller  wrote:

>
> On 5/11/21 1:57 AM, Baptiste Carvello wrote:
> > Le 11/05/2021 à 09:35, Steven D'Aprano a écrit :
> >> On Mon, May 10, 2021 at 09:44:05PM -0400, Terry Reedy wrote:
> >>
> >>> The vanilla interpreter could be updated to recognize when it is
> running
> >>> on a similated 35-year-old terminal that implements ansi-vt100 color
> >>> codes rather than a similated 40+-year-old black-and-white
> teletype-like
> >>> terminal.
> >>
> >> This is what is called "scope creep", although in this case
> >> perhaps "scope gallop" is more appropriate *wink*
> >> [...]
> >
> > Also: people paste tracebacks into issue reports, so all information has
> > to survive copy-pasting.
> >
>
> The first ANSI standard supported underlined text, didn't it?  The VT100
> did.
> That would make it part of the 40+ year old subset from the late 70's.
>
> While color might stand out more, underline suits the problem well, also
> without
> increasing the line count.
>
> There are a number of terminal emulators that support rich text copies,
> but not
> all of them.  This is added information however, so it not being
> copy-pastable
> everywhere shouldn't be a blocking requirement imho.
>

Fancier REPL frontends have supported things like highlighting and such in
their tracebacks; I expect they'll adopt column information and render it
as such.

There's a difference between tracebacks dumped as plain text (utf-8) by
traceback.print_exc() appearing on stderr or directed into log files and
what can be displayed within a terminal.  It is highly unusual to emit
terminal control characters into log files.

-G


>
> -Mike


[Python-Dev] Re: Future PEP: Include Fine Grained Error Locations in Tracebacks

2021-05-09 Thread Gregory P. Smith
On Sun, May 9, 2021 at 9:13 AM Antoine Pitrou  wrote:

> On Sun, 09 May 2021 02:16:02 -
> "Jim J. Jewett"  wrote:
> > Antoine Pitrou wrote:
> > > On Sat, 8 May 2021 02:58:40 +
> > > Neil Schemenauer nas-pyt...@arctrix.com wrote:
> >
> > > > It would be cool if we could mmap the pyc files and have the VM run
> > > > code without an unmarshal step.
> > > > What happens if another process mutates or truncates the file while
> the
> > > CPython VM is executing code from the mapped file?  Crash?
> >
> > Why would this be any different than whatever happens now?
>
> What happens now is that the pyc file is transferred at once to memory
> using regular IO.  So the chance is really slim that you read invalid
> data due to concurrent mutation.
>

Concurrent mutation isn't even what I was talking about.  We don't protect
against that today as that isn't a concern.  But on the bulk of POSIX
systems where this would ever matter, software updates are done by moving
new files into place, because that is an atomic inode change.  So the
existing open file already in the process of being read is not changed.
But as soon as you do a new open call on the pathname, you get a different
file than the last time that path was opened.
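
In Python terms, the update pattern being described is the classic
write-then-rename; a minimal sketch:

```python
import os
import tempfile

def atomic_update(path: str, data: bytes) -> None:
    # Write the new content beside the target, then rename over it.
    # Readers holding the old file open keep the old inode; only new
    # open() calls observe the new content.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp, path)  # atomic rename on POSIX
    except BaseException:
        os.unlink(tmp)
        raise
```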

This is not theoretical.  I've seen production problems as a result
(zipimport - https://bugs.python.org/issue19081) making the incorrect
assumption that they can reopen a file that they've read once at a later
point in time.  So if we do open files later, we must code defensively and
assume they might not contain what we thought.

We already have this problem with source code lines displayed in tracebacks
today, as those are read on demand.  But as that is debugging information
only, the wrong source lines being shown next to the filename + line number
in a traceback is something people just learn to ignore in these
situations.  We have the data to prevent this; we just never have.
https://bugs.python.org/issue44091 filed to track that.
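
(The on-demand reads go through the linecache module; a sketch of the skew,
using a hypothetical path:)

```python
import linecache

# traceback rendering fetches source text lazily, so this re-reads the
# file as it exists *now*, not as it was when the code was compiled:
print(linecache.getline("/srv/app/module.py", 42))  # hypothetical path
# If module.py was replaced since import, the line shown no longer
# matches the bytecode that actually raised.
```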

Given this context, M.-A. Lemburg's alternative idea could have some merit
as it would synchronize our source skew behavior with our additional
debugging information behavior.  My initial reaction is that it's falling
into the trap of bundling too much into one place, though.

quoting M.-A. Lemburg:
> Create a new file format which supports enhanced debugging. This
> would include the source code in a indexed format, the AST and
> mappings between byte code, AST node, lines and columns.
>
> Python would then only use and load this file when it needs
> to print a traceback - much like it does today with the source
> code.
>
> The advantage is that you can add even more useful information
> for debugging while not making the default code distribution
> format take more memory (both disk and RAM).

Realistically: this is going to take more disk space in the common case
because, in addition to the py, pyc, pyc.opt-1, and pyc.opt-2 that some
distros apparently include all of today, there'd be a new pyc.debuginfo to
go alongside them. The only benefit is that it isn't resident in ram. And
someone *could* choose to filter these out of their distro or container or
whatever-the-heck-their-package-format-is. But I really doubt that'll be
the default.

Not having debugging information when a problem you're trying to hunt down
and reproduce happens only once in a blue moon is extraordinarily
frustrating.  That is why people who value engineering time deploy with
debugging info.

There are environments where people intentionally do not deploy source
code, but do want to get debugging data from tracebacks that they can then
correlate to their sources later for analysis (they're tracking exactly
which versions of pycs from which versions of sources were deployed).  It'd
be a shame to exclude column information for this scenario.
-gps


[Python-Dev] Re: Future PEP: Include Fine Grained Error Locations in Tracebacks

2021-05-08 Thread Gregory P. Smith
On Sat, May 8, 2021 at 2:40 PM Jonathan Goble  wrote:

> On Sat, May 8, 2021 at 5:08 PM Pablo Galindo Salgado 
> wrote:
>
>> > Why not put in it -O instead?  Then -O means lose asserts and lose
>> fine-grained tracebacks, while -OO continues to also
>> strip out doc strings.
>>
>> What if someone wants to keep asserts but do not want the extra data?
>>
>
> What if I want to keep asserts and docstrings but don't want the extra
> data?
>
> Or actually, consider this. I *need* to keep asserts (because rightly or
> wrongly, I have a dependency, or my own code, that relies on them), but I
> *don't* want docstrings (because they're huge and I don't want the overhead
> in production), and I *don't* want the extra data in production either.
>
> Now what?
>
> I think what this illustrates is that the entire concept of optimizations
> in Python needs a complete rethink. It's already fundamentally broken for
> someone who wants to keep asserts but remove docstrings. Adding a third
> layer to this is a perfect opportunity to reconsider the whole paradigm.
>

Reconsidering "the whole paradigm" is always possible, but is a much larger
effort. It should not be something that blocks this enhancement from
happening.

We have discussed the -O mess before, on list and at summits and sprints.
-OO and the __pycache__ and longer .pyc names and versioned names were
among the results of that.  But we opted not to try to make life even more
complicated by expanding the test matrix of possible generated bytecode
even further.

> I'm getting off-topic here, and this should probably be a thread of its
> own, but perhaps what we should introduce is a compiler directive, similar
> to (but distinct from) future statements, that one can place at the top of a
> source file to tell the compiler "this file depends on asserts, don't
> optimize them out". Same for each thing that can be optimized that has a
> runtime behavior effect, including docstrings.
>

This idea has merit.  Worth keeping in mind for the future.  But agreed,
this goes beyond this thread's topic so I'll leave it at that.


[Python-Dev] Re: Future PEP: Include Fine Grained Error Locations in Tracebacks

2021-05-08 Thread Gregory P. Smith
On Sat, May 8, 2021 at 2:09 PM Pablo Galindo Salgado 
wrote:

> > Why not put it in -O instead?  Then -O means lose asserts and lose
> fine-grained tracebacks, while -OO continues to also
> strip out doc strings.
>
> What if someone wants to keep asserts but doesn't want the extra data?
>

Exactly my theme: our existing -O and -OO already don't serve all user
needs.  (I've witnessed people who need asserts but don't want docstrings
wasting RAM jump through hacky hoops to do that.)  Complicating these
options further by combining additional actions on them doesn't help.

The reason we have -O and -OO generate their own special opt-1 and opt-2
pyc files is because both of those change the generated bytecode and
overall flow of the program by omitting instructions and data.  code using
those will run slightly faster as there are fewer instructions.

The change we're talking about here doesn't do that.  It just adds
additional metadata to whatever instructions are generated.  So it doesn't
feel -O related.
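
For anyone who wants to see the difference being described, here's a quick
sketch using the dis module (run it normally and again under `python -O`;
the assert's bytecode disappears under -O, whereas column metadata would
not change the instruction stream at all):

```
import dis

def f(x):
    # Under the default mode this assert compiles to real instructions;
    # under -O it produces no bytecode at all.
    assert x > 0
    return x

dis.dis(f)
```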

While some people aren't going to like the overhead, I'm happy not offering
the choice.

> Greg, what do you think if instead of not writing it to the pyc file with
> -OO or adding a header entry to decide to read/write, we place None in the
> field? That way we can
> leverage the option that we intend to add to deactivate displaying the
> new traceback information to reduce the data in the pyc files. The only
> problem is that there will still be a tiny bit of overhead: an extra
> object per code object (None), but that's much much better than something
> that scales with the number of instructions :)
>
> What's your opinion on this?

I don't understand the pyc structure enough to comment on how that works,
but that sounds fine as a way to store less data, if these are stored as a
side table rather than intermingled with each instruction itself.  *If
anyone even cares about storing less data.*  I would not activate
generation of that in py_compile and compileall based on the -X flag to
disable display of tracebacks though.  A flag changing a setting of the
current runtime regarding traceback printing detail level should not change
the metadata in the pyc files it emits.  I realize -O and -OO behave this
way, but I don't view those as a great example.  Since we're not writing
new uniquely named pyc files, I suggest making this an explicit option for
py_compile and compileall if we're going to support generation of pyc files
without column data at all.

I'm unclear on what the specific goals are with all of these option
possibilities.

Who non-hypothetically cares about a 22% pyc file size increase?  I don't
think we should be concerned.  I'm in favor of always writing them and the
20% size increase that results.  If pyc size is an issue, that should be
its own separate enhancement PEP.  When it comes to pyc files there is more
data we may want to store in the future for performance reasons - I don't
see them shrinking without an independent effort.

Caring about additional data retained in memory at runtime makes more sense
to me, as RAM cost is much greater than storage cost and is paid repeatedly
per process.  Storing an additional reference to None on code objects where
a column information table would otherwise be is perfectly fine.  That can
be a -X style interpreter startup option.  It isn't something that needs to
be impacted by the pyc files.  Pass that option to the interpreter, and it
just discards column info tables on code objects after loading them or
compiling them.  If people want to optimize for a shared pyc situation with
memory mapping techniques, that is also something that should be a separate
enhancement PEP and not involved here.  People writing code to use the
column information should always check it for None first; that'd be
something we document with the new feature.
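
A rough sketch of the consumer-side pattern I mean (the attribute name
`co_col_table` is hypothetical, purely for illustration; the real name and
layout are up to the PEP):

```
def format_caret_line(code, instr_index, line_text):
    # Hypothetical per-instruction (start, end) column table; None when
    # the runtime was started with the opt-out flag or the pyc omitted it.
    table = getattr(code, "co_col_table", None)
    if table is None:
        return line_text
    start, end = table[instr_index]
    return line_text + "\n" + " " * start + "^" * (end - start)
```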

-gps


>
> On Sat, 8 May 2021 at 22:05, Ethan Furman  wrote:
>
>> On 5/8/21 1:31 PM, Pablo Galindo Salgado wrote:
>>  >> We can't piggy back on -OO as the only way to disable this, it needs
>> to
>>  >> have an option of its own.  -OO is unusable as code that relies on
>> "doc"
>>  >> strings as application data such as
>> http://www.dabeaz.com/ply/ply.html
>>  >> exists.
>>  >
>>  > -OO is the only sensible way to disable the data. There are two things
>> to disable:
>>  >
>>  > * The data in pyc files
>>  > * Printing the exception highlighting
>>
>> Why not put it in -O instead?  Then -O means lose asserts and lose
>> fine-grained tracebacks, while -OO continues to also
>> strip out doc strings.
>>
>> --
>> ~Ethan~

[Python-Dev] Re: Future PEP: Include Fine Grained Error Locations in Tracebacks

2021-05-08 Thread Gregory P. Smith
On Sat, May 8, 2021 at 1:32 PM Pablo Galindo Salgado 
wrote:

> > We can't piggy back on -OO as the only way to disable this, it needs to
> have an option of its own.  -OO is unusable as code that relies on
> "doc"strings as application data such as
> http://www.dabeaz.com/ply/ply.html exists.
>
> -OO is the only sensible way to disable the data. There are two things to
> disable:
>

nit: I wouldn't choose the word "sensible" given that -OO is already
fundamentally unusable without knowing if any code in your entire
transitive dependencies might depend on the presence of docstrings...


>
> * The data in pyc files
> * Printing the exception highlighting
>
> Printing the exception highlighting can be disabled via combo of
> environment variable / -X option but collecting the data can only be
> disabled by -OO. The reason is that this will end up in pyc files
> so when the data is not there, a different kind of pyc file needs to be
> produced and I really don't want to have another set of pyc file extensions
> just to deactivate this. Notice that also a configure
> time variable won't work because it will cause crashes when reading pyc
> files produced by the interpreter compiled without the flag.
>

I don't think the optional existence of column number information needs a
different kind of pyc file.  Just a flag in a pyc file's header at most.
It isn't a new type of file.


> On Sat, 8 May 2021 at 21:13, Gregory P. Smith  wrote:
>
>>
>>
>> On Sat, May 8, 2021 at 11:58 AM Pablo Galindo Salgado <
>> pablog...@gmail.com> wrote:
>>
>>> Hi Brett,
>>>
>>> Just to be clear, .pyo files have not existed for a while:
>>>> https://www.python.org/dev/peps/pep-0488/.
>>>
>>>
>>> Whoops, my bad, I wanted to refer to the pyc files that are generated
>>> with -OO, which have the "opt-2" prefix.
>>>
>>> This only kicks in at the -OO level.
>>>
>>>
>>> I will correct the PEP so it reflects this more exactly.
>>>
>>> I personally prefer the idea of dropping the data with -OO since if
>>>> you're stripping out docstrings you're already hurting introspection
>>>> capabilities in the name of memory. Or one could go as far as to introduce
>>>> -Os to do -OO plus dropping this extra data.
>>>
>>>
>>> This is indeed the plan, sorry for the confusion. The opt-out mechanism
>>> is using -OO, precisely as we are already dropping other data.
>>>
>>
>> We can't piggy back on -OO as the only way to disable this, it needs to
>> have an option of its own.  -OO is unusable as code that relies on
>> "doc"strings as application data such as
>> http://www.dabeaz.com/ply/ply.html exists.
>>
>> -gps
>>
>>
>>>
>>> Thanks for the clarifications!
>>>
>>>
>>>
>>> On Sat, 8 May 2021 at 19:41, Brett Cannon  wrote:
>>>
>>>>
>>>>
>>>> On Fri, May 7, 2021 at 7:31 PM Pablo Galindo Salgado <
>>>> pablog...@gmail.com> wrote:
>>>>
>>>>> Although we were originally not sympathetic with it, we may need to
>>>>> offer an opt-out mechanism for those users that care about the impact of
>>>>> the overhead of the new data in pyc files
>>>>> and in in-memory code objects, as was suggested by some folks (Thomas,
>>>>> Yury, and others). For this, we could propose that the functionality will
>>>>> be deactivated along with the extra
>>>>> information when Python is executed in optimized mode (``python -O``)
>>>>> and therefore pyo files will not have the overhead associated with the
>>>>> extra required data.
>>>>>
>>>>
>>>> Just to be clear, .pyo files have not existed for a while:
>>>> https://www.python.org/dev/peps/pep-0488/.
>>>>
>>>>
>>>>> Notice that Python
>>>>> already strips docstrings in this mode so it would be "aligned" with
>>>>> the current mechanism of optimized mode.
>>>>>
>>>>
>>>> This only kicks in at the -OO level.
>>>>
>>>>
>>>>>
>>>>> Although this complicates the implementation, it certainly is still
>>>>> much easier than dealing with compression (and more useful for those that
>>>>> don't want the feature). Notice that we also
>>>>> expect pessimistic results from compression as offsets would be quite
>

[Python-Dev] Re: Future PEP: Include Fine Grained Error Locations in Tracebacks

2021-05-08 Thread Gregory P. Smith
On Sat, May 8, 2021 at 11:58 AM Pablo Galindo Salgado 
wrote:

> Hi Brett,
>
> Just to be clear, .pyo files have not existed for a while:
>> https://www.python.org/dev/peps/pep-0488/.
>
>
> Whoops, my bad, I wanted to refer to the pyc files that are generated
> with -OO, which have the "opt-2" prefix.
>
> This only kicks in at the -OO level.
>
>
> I will correct the PEP so it reflects this more exactly.
>
> I personally prefer the idea of dropping the data with -OO since if you're
>> stripping out docstrings you're already hurting introspection capabilities
>> in the name of memory. Or one could go as far as to introduce -Os to do -OO
>> plus dropping this extra data.
>
>
> This is indeed the plan, sorry for the confusion. The opt-out mechanism is
> using -OO, precisely as we are already dropping other data.
>

We can't piggy back on -OO as the only way to disable this, it needs to
have an option of its own.  -OO is unusable as code that relies on
"doc"strings as application data such as http://www.dabeaz.com/ply/ply.html
exists.

-gps


>
> Thanks for the clarifications!
>
>
>
> On Sat, 8 May 2021 at 19:41, Brett Cannon  wrote:
>
>>
>>
>> On Fri, May 7, 2021 at 7:31 PM Pablo Galindo Salgado 
>> wrote:
>>
>>> Although we were originally not sympathetic with it, we may need to
>>> offer an opt-out mechanism for those users that care about the impact of
>>> the overhead of the new data in pyc files
>>> and in in-memory code objects, as was suggested by some folks (Thomas,
>>> Yury, and others). For this, we could propose that the functionality will
>>> be deactivated along with the extra
>>> information when Python is executed in optimized mode (``python -O``)
>>> and therefore pyo files will not have the overhead associated with the
>>> extra required data.
>>>
>>
>> Just to be clear, .pyo files have not existed for a while:
>> https://www.python.org/dev/peps/pep-0488/.
>>
>>
>>> Notice that Python
>>> already strips docstrings in this mode so it would be "aligned" with
>>> the current mechanism of optimized mode.
>>>
>>
>> This only kicks in at the -OO level.
>>
>>
>>>
>>> Although this complicates the implementation, it certainly is still much
>>> easier than dealing with compression (and more useful for those that don't
>>> want the feature). Notice that we also
>>> expect pessimistic results from compression as offsets would be quite
>>> random (although predominantly in the range 10 - 120).
>>>
>>
>> I personally prefer the idea of dropping the data with -OO since if
>> you're stripping out docstrings you're already hurting introspection
>> capabilities in the name of memory. Or one could go as far as to introduce
>> -Os to do -OO plus dropping this extra data.
>>
>> As for .pyc file size, I personally wouldn't worry about it. If someone
>> is that space-constrained they either aren't using .pyc files or are only
>> shipping a single set of .pyc files under -OO and skipping source code. And
>> .pyc files are an implementation detail of CPython so there  shouldn't be
>> too much of a concern for other interpreters.
>>
>> -Brett
>>
>>
>>>
>>> On Sat, 8 May 2021 at 01:56, Pablo Galindo Salgado 
>>> wrote:
>>>
 One last note for clarity: that's the increase of size in the stdlib,
 the increase of size
 for pyc files goes from 28.471296MB to 34.750464MB, which is an
 increase of 22%.

 On Sat, 8 May 2021 at 01:43, Pablo Galindo Salgado 
 wrote:

> Some update on the numbers. We have made some draft implementation to
> corroborate the
> numbers with some more realistic tests and it seems that our original
> calculations were wrong.
> The actual increase in size is quite a bit bigger than previously advertised:
>
> Using bytes object to encode the final object and marshalling that to
> disk (so using uint8_t) as the underlying
> type:
>
> BEFORE:
>
> ❯ ./python -m compileall -r 1000 Lib > /dev/null
> ❯ du -h Lib -c --max-depth=0
> 70M Lib
> 70M total
>
> AFTER:
> ❯ ./python -m compileall -r 1000 Lib > /dev/null
> ❯ du -h Lib -c --max-depth=0
> 76M Lib
> 76M total
>
> So that's an increase of 8.56% over the original value. This is
> storing the start offset and end offset with no compression
> whatsoever.
>
> On Fri, 7 May 2021 at 22:45, Pablo Galindo Salgado <
> pablog...@gmail.com> wrote:
>
>> Hi there,
>>
>> We are preparing a PEP and we would like to start some early
>> discussion about one of the main aspects of the PEP.
>>
>> The work we are preparing is to allow the interpreter to produce more
>> fine-grained error messages, pointing to
>> the source associated to the instructions that are failing. For
>> example:
>>
>> Traceback (most recent call last):
>>
>>   File "test.py", line 14, in 
>>
>> lel3(x)
>>
>> ^^^
>>
>>   File "test.py", line 12, in lel3
>>

[Python-Dev] Re: a name for the ExceptHandler.type when it is a literal tuple of types

2021-05-08 Thread Gregory P. Smith
On Sat, May 8, 2021 at 8:54 AM Thomas Grainger  wrote:

> That's this bit:
>
> ```
> except (A, B):
>        ^^^^^^
> ```
>
> bpo-43149 currently calls it an "exception group", but that conflicts with
> PEP 654 -- Exception Groups and except*
>
> ```
>
> >>> try:
> ...   pass
> ... except A, B:
> ...   pass
> Traceback (most recent call last):
> SyntaxError: exception group must be parenthesized
> ```
>
> some alternatives:
>
> exception classinfo must be parenthesized (classinfo so named from the
> parameter to issubclass)
> exception sequence must be parenthesized
>
> see also:
>
> - https://github.com/python/cpython/pull/24467#discussion_r628756347
> - https://www.python.org/dev/peps/pep-0654/


Given it requires ()s it is probably better to call it an "exception
sequence" or even go fully to "exception tuple" in order to avoid confusion
and tie in with the other meanings of the required syntax.

-gps


[Python-Dev] Re: Speeding up CPython

2021-05-07 Thread Gregory P. Smith
On Fri, May 7, 2021 at 6:51 PM Steven D'Aprano  wrote:

> On Tue, Oct 20, 2020 at 01:53:34PM +0100, Mark Shannon wrote:
> > Hi everyone,
> >
> > CPython is slow. We all know that, yet little is done to fix it.
> >
> > I'd like to change that.
> > I have a plan to speed up CPython by a factor of five over the next few
> > years. But it needs funding.
>
> I've noticed a lot of optimization-related b.p.o. issues created by
> Mark, which is great. What happened with Mark's proposal here? Did the
> funding issue get sorted?
>

I believe Guido has Mark contracting on Python performance through
Microsoft?

-Greg


[Python-Dev] Re: Future PEP: Include Fine Grained Error Locations in Tracebacks

2021-05-07 Thread Gregory P. Smith
On Fri, May 7, 2021 at 3:24 PM Pablo Galindo Salgado 
wrote:

> Thanks a lot Gregory for the comments!
>
> An additional cost to this is things that parse text tracebacks not
>> knowing how to handle it and things that log tracebacks
>> generating additional output.
>
> We should provide a way for people to disable the feature on a process as
>> part of this while they address tooling and logging issues.  (via the usual
>> set of command line flag + python env var + runtime API)
>
>
> Absolutely! We were thinking about that and that's easy enough as that is
> a single conditional on the display function + the extra init configuration.
>
> Neither of those is large. While I'd lean towards uint8_t instead of
>> uint16_t because not even humans can understand a 255 character line so why
>> bother being pretty about such a thing... Just document the caveat and move
>> on with the lower value. A future pyc format could change it if a
>> compelling argument were ever found.
>
>
> I very much agree with you here but it is worth noting that I have heard the
> counter-argument that the longer the line is, the more important it may be to
> distinguish what part of the line is wrong.
>

haha, true... Does our parser even have a maximum line length? (I'm not
suggesting being unlimited or matching that if huge, 64k is already
ridiculous)


>
> A compromise if you want to handle longer lines: A single uint16_t.
>> Represent the start column in 9 bits and the width in the other 7 bits (or
>> any variations thereof); it's all a matter of what tradeoff you want to
>> make for space reasons.  Encoding as start + width instead of start + end
>> is likely better anyway if you care about compression, as the width byte
>> will usually be small and thus be friendlier to compression.  I'd
>> personally ignore compression entirely.
>
>
> I would personally prefer not to implement very tricky compression
> algorithms because tools may need to parse this and I don't want to
> complicate the logic a lot. Handling lnotab is already a bit painful and
> when bugs occur it makes debugging very tricky. Having the possibility to
> index something based on the index of the instruction is quite a good API
> in my opinion.
>
> Overall doing this is going to be a big win for developer productivity!
>
>
> Thanks! We think that this has a lot of potential indeed! :)
>
> Pablo
>
>
>


[Python-Dev] Re: Future PEP: Include Fine Grained Error Locations in Tracebacks

2021-05-07 Thread Gregory P. Smith
On Fri, May 7, 2021 at 2:50 PM Pablo Galindo Salgado 
wrote:

> Hi there,
>
> We are preparing a PEP and we would like to start some early discussion
> about one of the main aspects of the PEP.
>
> The work we are preparing is to allow the interpreter to produce more
> fine-grained error messages, pointing to
> the source associated to the instructions that are failing. For example:
>
> Traceback (most recent call last):
>
>   File "test.py", line 14, in 
>
> lel3(x)
>
> ^^^
>
>   File "test.py", line 12, in lel3
>
> return lel2(x) / 23
>
>^^^
>
>   File "test.py", line 9, in lel2
>
> return 25 + lel(x) + lel(x)
>
> ^^
>
>   File "test.py", line 6, in lel
>
> return 1 + foo(a,b,c=x['z']['x']['y']['z']['y'], d=e)
>
>  ^
>
> TypeError: 'NoneType' object is not subscriptable
>
>
An additional cost to this is things that parse text tracebacks not knowing
how to handle it and things that log tracebacks generating additional
output.  We should provide a way for people to disable the feature on a
process as part of this while they address tooling and logging issues.
(via the usual set of command line flag + python env var + runtime API)

The cost of this is having the start column number and end column number
> information for every bytecode instruction
> and this is what we want to discuss (there is also some stack cost to
> re-raise exceptions but that's not a big problem in
> any case). Given that column numbers are not very big compared with line
> numbers, we plan to store these as unsigned chars
> or unsigned shorts. We ran some experiments over the standard library and
> we found that the overhead of all pyc files is:
>
> * If we use shorts, the total overhead is ~3% (total size 28MB and the
> extra size is 0.88 MB).
> * If we use chars. the total overhead is ~1.5% (total size 28 MB and the
> extra size is 0.44MB).
>
> One of the disadvantages of using chars is that we can only report columns
> from 1 to 255 so if an error happens in a column
> bigger than that then we would have to exclude it (and not show the
> highlighting) for that frame. Unsigned short will allow
> the values to go from 0 to 65535.
>

Neither of those is large. While I'd lean towards uint8_t instead of
uint16_t because not even humans can understand a 255 character line so why
bother being pretty about such a thing... Just document the caveat and move
on with the lower value. A future pyc format could change it if a
compelling argument were ever found.


> Unfortunately these numbers are not easily compressible, as every
> instruction would have very different offsets.
>
> There is also the possibility of not doing this based on some build flag
> on when using -O to allow users to opt out, but given the fact
> that these numbers can be quite useful to other tools like coverage
> measuring tools, tracers, profilers and the such adding conditional
> logic to many places would complicate the implementation considerably and
> will potentially reduce the usability of those tools so we prefer
> not to have the conditional logic. We believe this is extra cost is very
> much worth the better error reporting but we understand and respect
> other points of view.
>
> Does anyone see a better way to encode this information **without
> complicating a lot the implementation**? What are people thoughts on the
> feature?
>

A compromise if you want to handle longer lines: A single uint16_t.
Represent the start column in 9 bits and the width in the other 7 bits (or
any variations thereof); it's all a matter of what tradeoff you want to
make for space reasons.  Encoding as start + width instead of start + end
is likely better anyway if you care about compression, as the width byte
will usually be small and thus be friendlier to compression.  I'd
personally ignore compression entirely.
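
A minimal sketch of that packing scheme, just to make the arithmetic
concrete (the helper names are made up for illustration):

```
def pack_location(start_col, width):
    # 9 bits allow start columns 0..511, 7 bits allow widths 0..127.
    assert 0 <= start_col < 512 and 0 <= width < 128
    return (start_col << 7) | width

def unpack_location(packed):
    return packed >> 7, packed & 0x7F

assert unpack_location(pack_location(300, 42)) == (300, 42)
```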

Overall doing this is going to be a big win for developer productivity!

-Greg


[Python-Dev] Re: Future PEP: Include Fine Grained Error Locations in Tracebacks

2021-05-07 Thread Gregory P. Smith
On Fri, May 7, 2021 at 3:01 PM Larry Hastings  wrote:

> On 5/7/21 2:45 PM, Pablo Galindo Salgado wrote:
>
> Given that column numbers are not very big compared with line numbers, we
> plan to store these as unsigned chars
> or unsigned shorts. We ran some experiments over the standard library and
> we found that the overhead of all pyc files is:
>
> * If we use shorts, the total overhead is ~3% (total size 28MB and the
> extra size is 0.88 MB).
> * If we use chars. the total overhead is ~1.5% (total size 28 MB and the
> extra size is 0.44MB).
>
> One of the disadvantages of using chars is that we can only report columns
> from 1 to 255 so if an error happens in a column
> bigger than that then we would have to exclude it (and not show the
> highlighting) for that frame. Unsigned short will allow
> the values to go from 0 to 65535.
>
> Are lnotab entries required to be a fixed size?  If not:
>
> if column < 255:
>     lnotab.write_one_byte(column)
> else:
>     lnotab.write_one_byte(255)
>     lnotab.write_two_bytes(column)
>
If non-fixed size is acceptable, use UTF-8 to encode the column number as
a single codepoint into bytes and you don't even need to write your
own encode/decode logic for a varint.
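
Something along these lines (a sketch of the idea, not a worked-out
format; it holds for 0 <= col < 0x110000, excluding the surrogate range):

```
def encode_column(col):
    # UTF-8 acts as a ready-made varint: 1 byte below 0x80,
    # up to 4 bytes for larger values.
    return chr(col).encode("utf-8")

def decode_column(data):
    return ord(data.decode("utf-8"))

assert decode_column(encode_column(300)) == 300
```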

-gps


[Python-Dev] Re: Keeping Python a Duck Typed Language.

2021-04-23 Thread Gregory P. Smith
When reading this, note that I wrote most of it early and left a draft to
bake, then deleted a ton of it after other people replied. I'm conscious
that my terminology might be all over the map.  Keep that in mind before
hitting reply.  It'll take me a while to digest and pedantically use
Luciano's terms; they appear to be a great start. :)

On Tue, Apr 20, 2021 at 10:09 AM Mark Shannon  wrote:

> Hi everyone,
>
> Once upon a time Python was a purely duck typed language.
>
> Then came along abstract based classes, and some nominal typing starting
> to creep into the language.
>
> If you guarded your code with `isinstance(foo, Sequence)` then I could
> not use it with my `Foo` even if my `Foo` quacked like a sequence. I was
> forced to use nominal typing; inheriting from Sequence, or explicitly
> registering as a Sequence.
>

True.  Though in practice I haven't run into this often *myself*.  Do you
have practical examples of where this has bitten users such that code they
would've written pre-abc is no longer possible?  This audience can come up
with plenty of theoretical examples, those aren't so interesting to me.
I'm more interested in observations of actual real-world fallout due to
something "important" (as defined however each user wants) using isinstance
checks when it ideally wouldn't.

Practically speaking, one issue I have is how easy it is to write
isinstance or issubclass checks. It has historically been much more
difficult to write and maintain a check that something looks like a duck.

 `if hasattr(foo, 'close') and hasattr(foo, 'seek') and hasattr(foo, 'read'):`

Just does not roll off the figurative tongue and that is a relatively
simple example of what is required for a duck check.

To prevent isinstance use when a duck check would be better, we're missing
an easy builtin elevated to the isinstance() availability level behaving
as lookslikeaduck() that does matches against a (set of) declared
typing.Protocol shape(s). An implementation of this exists -
https://www.python.org/dev/peps/pep-0544/#runtime-checkable-decorator-and-narrowing-types-by-isinstance
- but it requires the protocols to declare runtime checkability and has
them work with isinstance similar to ABCs...  technically accurate *BUT via
isinstance*? Doh!  It promotes the use of isinstance when it really isn't
about class hierarchy at all...

Edit: Maybe that's okay, isinstance can be read leniently to mean "is an
instance of something that one of these things over here says it matches"
rather than meaning "a parent class type is..."?  From a past experience
user perspective I don't read "isinstance" as "looks like a duck" when I
read code.  I assume I'm not alone.
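
For concreteness, here's the kind of check I'm describing, using the real
typing.runtime_checkable API (note that isinstance() here only verifies
that the method exists; it does not check its signature):

```
from typing import Protocol, runtime_checkable

@runtime_checkable
class SupportsRead(Protocol):
    def read(self, size: int = -1) -> bytes: ...

class FakeFile:
    def read(self, size: int = -1) -> bytes:
        return b""

# A structural match with no inheritance relationship in sight,
# yet it is spelled with isinstance().
assert isinstance(FakeFile(), SupportsRead)
```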

I'd prefer something not involving metaclasses and __instancecheck__ type
class methods.  Something direct so that the author and reader both
explicitly see that they're seeing a duck check rather than a type
hierarchy check.  I don't think this ship has sailed, it could be built up
on top of what already exists if we want it.  Was this already covered in
earlier 544 discussions perhaps?

As Nathaniel indicated, how deep do we want to go down this rabbit hole of
checking?  Just names?  Signatures and types on those?  What about
exceptions (something our type system has no way to declare at all)?  And
infinite side effects?  At the end of the day we're required to trust the
result of whatever check we use and any implementation may not conform to
our desires no matter how much checking we do. Unless we solve the halting
problem. :P

PEP 544 supports structural typing, but to declare a structural type you
> must inherit from Protocol.
> That smells a lot like nominal typing to me.
>

Not quite.  A Protocol is merely a way to describe a structural type.  You
do not *need* to have your *implementations* of anything inherit from
typing.Protocol.  I'd *personally* advise people *do not inherit* from
Protocol in their implementation. Leave that for a structural type
declaration for type description and annotation purposes only, even though
Protocol appears to support direct inheritance. I understand why some don't
like this separate shape declaration concept.

Luciano notes that it is preferred to define your protocols as narrow and
define them in places *where they're used*, to follow a golang interface
practice.  My thinking aligns with that.

That inheritance is used in the *declaration* of the protocol is an
implementation detail because our language has never had a syntax for
declaring an interface.  544 fit within our existing language syntax.

Then came PEP 563 and said that if you wanted to access the annotations
> of an object, you needed to call typing.get_type_hints() to get
> annotations in a meaningful form.
> This smells a bit like enforced static typing to me.
>

I think useful conversations are ongoing here.  Enforced is the wrong
word.  *[rest of comment deleted in light of Larry's work in progress
response]*

Nominal typing in a dynamically typed language makes 

[Python-Dev] Re: NOTE: Python 3.9.3 contains an unintentional ABI incompatibility leading to crashes on 32-bit systems

2021-04-03 Thread Gregory P. Smith
On Sat, Apr 3, 2021 at 7:49 PM Terry Reedy  wrote:

> On 4/3/2021 7:15 PM, Miro Hrončok wrote:
> > On 03. 04. 21 21:44, Łukasz Langa wrote:
> >> The memory layout of PyThreadState was unintentionally changed in the
> >> recent 3.9.3 bugfix release. This leads to crashes on 32-bit systems
> >> when importing binary extensions compiled for Python 3.9.0 - 3.9.2.
> >> This is a regression.
> >>
> >> We will be releasing a hotfix 3.9.4 around 24 hours from now to
> >> address this issue and restore ABI compatibility with C extensions
> >> built for Python 3.9.0 - 3.9.2.
> >
> > Thanks for the hotifx.
> >
> > However, I need to ask: Would this also happen if there was a rc version
> > of 3.9.3?
>
> Unless the mistake was just introduced, the mistake would have happened.
>   One this severe would likely have been caught within the week or two
> before a final.  But as Łukasz noted when announcing the change, .rcs
> are generally ignored.  (I suspect that most everyone assumes that
> someone else will test them.  And begging people to not do that does not
> work well enough to justify the release.) 3.8.5 (2020 July 20) was a hotfix
> for 3.8.4 (2020 July 14), which did have a candidate, which did not get
> tested the way that 3.8.4 itself was.
>
> --
> Terry Jan Reedy
>

For 3.9.4 I suggest a strict revert of the offending change. I created such
a PR and attached it to the bpo-43710 issue. It is a holiday weekend for a
large swath of the world. The recursion based crasher issue the original
change was fixing can be saved for a future release and not made under time
pressure.

I filed https://bugs.python.org/issue43725 to track one suggested way to
help automate prevention of these from landing in a release branch and
slipping through the cracks in a release. (discuss that on the issue, not
here)

-Greg


[Python-Dev] Re: New public C API functions must not steal references or return borrowed references

2021-03-26 Thread Gregory P. Smith
On Thu, Mar 25, 2021 at 11:58 AM Mark Shannon  wrote:

> Hi Victor,
>
> I'm with you 100% on not returning borrowed references, doing so is just
> plain dangerous.
>
> However, is a blanket ban on stealing references the right thing?
>
> Maybe the problem is the term "stealing".
> The caller is transferring the reference to the callee.
> In some circumstances it can make a lot of sense to do so, since the
> caller has probably finished with the reference and the callee needs a
> new one.
>
> Cheers,
> Mark.
>

When was the last time a non-internal API that transfers references was added?

I suggest keeping the restriction on new APIs in place until we actually
find a situation where we think we "need" one outside of Include/internal/
to help force the discussion as to why that needs to be public.

-gps

On 25/03/2021 4:27 pm, Victor Stinner wrote:
> > Hi,
> >
> > A new Include/README.rst file was just added to document the 3 C API
> > provided by CPython:
> >
> > * Include/: Limited C API
> > * Include/cpython/: CPython implementation details
> > * Include/internal/: The internal API
> >
> > I would like to note that *new* public C API functions must no longer
> > steal references or return borrowed references.
> >
> > Don't worry, there is no plan to deprecate or remove existing
> > functions which do that, like PyModule_AddObject() (steals a
> > reference) or PyDict_GetItem() (return a borrowed reference). The
> > policy is only to *add* new functions.
> >
> > IMO for the *internal* C API, it's fine to continue doing that for
> > best performances.
> >
> > Moreover, the limited C API must not expose "implementation details".
> > For example, structure members must not be accessed directly, because
> > most structures are excluded from the limited C API. A function call
> > hiding implementation details is usually better.
> >
> > Here is a copy of the current Include/README.rst file:
> >
> > The Python C API
> > ================
> >
> > The C API is divided into three sections:
> >
> > 1. ``Include/``
> > 2. ``Include/cpython/``
> > 3. ``Include/internal/``
> >
> >
> > Include: Limited API
> > ====================
> >
> > ``Include/``, excluding the ``cpython`` and ``internal`` subdirectories,
> > contains the public Limited API (Application Programming Interface).
> > The Limited API is a subset of the C API, designed to guarantee ABI
> > stability across Python 3 versions, and is defined in :pep:`384`.
> >
> > Guidelines for expanding the Limited API:
> >
> > - Functions *must not* steal references
> > - Functions *must not* return borrowed references
> > - Functions returning references *must* return a strong reference
> > - Macros should not expose implementation details
> > - Please start a public discussion before expanding the API
> > - Functions or macros with a ``_Py`` prefix do not belong in
> ``Include/``.
> >
> > It is possible to add a function or macro to the Limited API from a
> > given Python version.  For example, to add a function to the Limited API
> > from Python 3.10 and onwards, wrap it with
> > ``#if !defined(Py_LIMITED_API) || Py_LIMITED_API+0 >= 0x030A``.
> >
> >
> > Include/cpython: CPython implementation details
> > ===============================================
> >
> > ``Include/cpython/`` contains the public API that is excluded from the
> > Limited API and the Stable ABI.
> >
> > Guidelines for expanding the public API:
> >
> > - Functions *must not* steal references
> > - Functions *must not* return borrowed references
> > - Functions returning references *must* return a strong reference
> >
> >
> > Include/internal: The internal API
> > ==================================
> >
> >
> > With PyAPI_FUNC or PyAPI_DATA
> > -----------------------------
> >
> > Functions or structures in ``Include/internal/`` defined with
> > ``PyAPI_FUNC`` or ``PyAPI_DATA`` are internal functions which are
> > exposed only for specific use cases like debuggers and profilers.
> >
> >
> > With the extern keyword
> > -----------------------
> >
> > Functions in ``Include/internal/`` defined with the ``extern`` keyword
> > *must not and can not* be used outside the CPython code base.  Only
> > built-in stdlib extensions (built with the ``Py_BUILD_CORE_BUILTIN``
> > macro defined) can use such functions.
> >
> > When in doubt, new internal C functions should be defined in
> > ``Include/internal`` using the ``extern`` keyword.
> >
> > Victor
> >

[Python-Dev] Re: Move support of legacy platforms/architectures outside Python

2021-02-21 Thread Gregory P. Smith
On Sun, Feb 21, 2021 at 12:03 PM Michał Górny  wrote:

> On Sun, 2021-02-21 at 13:35 +0100, Christian Heimes wrote:
> > On 21/02/2021 13.13, Victor Stinner wrote:
> > > Hi,
> > >
> > > I propose to actively remove support for *legacy* platforms and
> > > architectures which are not supported by Python according to PEP 11
> > > rules: hardware no longer sold and end-of-life operating systems. The
> > > removal should be discussed on a case by case basis, but I would like
> > > to get an agreement on the overall idea first. Hobbyists wanting to
> > > support these platforms/archs can continue to support them with
> > > patches maintained outside Python. For example, I consider that the
> > > 16-bit m68k architecture is legacy, whereas the OpenBSD platform is
> > > still actively maintained.
> > [...]
> > > Python has different kinds of platform and architecture supports. In
> > > practice, I would say that we have:
> > >
> > > * (1) Fully supported. Platform/architecture used by core developers
> > > and have at least one working buildbot worker: fully supported. Since
> > > core developers want to use Python on their machine, they fix issues
> > > as soon as they notice them. Examples: x86-64 on Linux, Windows and
> > > macOS.
> > >
> > > * (2) Best effort. Platform/architecture which has a buildbot worker
> > > usually not used by core developers. Regressions (buildbot failures)
> > > are reported to bugs.python.org, if someone investigates and provides
> > > a fix, the fix is merged. But there is usually no "proactive" work to
> > > ensure that Python works "perfectly" on these platforms. Example:
> > > FreeBSD/x86-64.
> > >
> > > * (3) Not (officially) supported. We enter the blurry grey area. There
> > > is no buildbot worker, no core dev use it, but Python contains code
> > > specific to these platforms/architectures. Example: 16-bit m68k and
> > > 31-bit s390 architectures, OpenBSD.
> > >
> > > The Rust programming language has 3 categories of Platform Support,
> > > the last one is :
> >
> > Thanks Victor!
> >
> > (short reply, I'm heading out)
> >
> > I'm +1 in general for your proposal. I also like the idea to adopt
> > Rust's platform support definition.
> >
> > For 3.10 I propose to add a configure option to guard builds on
> > unsupported / unstable platforms. My draft PR
> >
> https://github.com/python/cpython/pull/24610/commits/f8d2d56757a9cec7ae4dc721047336eaba097125
> > implements a checker for unsupported platforms and adds a
> > --enable-unstable-platforms flag. Configuration on unsupported platforms
> > fails unless users explicitly opt-in.
> >
> > The checker serves two purposes:
> >
> > 1) It gives users an opportunity to provide full PEP 11 support
> > (buildbot, engineering time) for a platform.
>
> Does that mean that if someone offers to run the build bot for a minor
> platform and do the necessary maintenance to keep it working, they will
> be able to stay?  How much maintenance is actually expected, i.e. is it
> sufficient to maintain CPython in a 'good enough' working state to
> resolve major bugs blocking real usage on these platforms?
>
>
Definitely start with this.  This level of effort to maintain minor
platform support in-tree may be able to keep an otherwise neglected minor
platform in second-tier, non-release-blocker, best-effort status.  Having a
buildbot at least provides visibility, and sometimes PR authors will be
sympathetic to easy edits that keep something working that way (no
guarantee).

The main thing from a project maintenance perspective is for platforms not
to become a burden to other code maintainers.  PRs need to be reviewed.
Every #if/#endif in code is a cognitive burden.  Being a minor platform can
come with unexpected breakages that need fixing due to other changes made
in the codebase that did not pay attention to the platform, as we cannot
expect everyone working on code to care about anything beyond the tier-1
fully supported platforms, buildbot or not.

Example: I consider many of the BSDs and the Solaris derivatives to be in
this state. (Non-specific here; I don't even know which ones we claim are
supported or not without going and reading whatever policy docs we might or
might not have today - Victor alludes to this state of the world.)  We tend
to accept patches when someone offers them.  Occasionally we have a core
dev who actually runs one of them.  But most of us don't go out of the way
ourselves to try and keep changes we make working there.  We expect
interested parties to jump in when something isn't working right.  And will
generally do related PR reviews/merges if they're not burdensome.

An example of the above happening recently is VxWorks support via
https://bugs.python.org/issue31904.

-gps


> --
> Best regards,
> Michał Górny
>

[Python-Dev] Re: Move support of legacy platforms/architectures outside Python

2021-02-21 Thread Gregory P. Smith
On Sun, Feb 21, 2021 at 10:15 AM Christian Heimes 
wrote:

> On 21/02/2021 13.47, glaub...@debian.org wrote:
> > Rust doesn't keep any user from building Rust for Tier 2 or Tier 3
> platforms. There is no separate configure guard. All platforms that Rust
> can build for, are always enabled by default. No one in Rust keeps anyone
> from cross-compiling code for sparc64 or powerpcspe, for example.
> >
> > So if you want to copy Rust's mechanism, you should just leave it as is
> and not claim that users are being confused because "m68k" shows up in
> configure.ac.
>
> A --enable-unstable-platforms configure flag is my peace offer to meet
> you halfway. You get a simple way to enable builds on untested
> platforms and we can clearly communicate that some OS and hardware
> platforms are not supported.
>

I personally wouldn't want to maintain such a check in autoconf, but it'll
be an isolated thing on its own that, if you or someone else creates it,
will do its job and not bother the rest of us.

I think just publishing our list of (1) supported, (2) best-effort
non-release-blocker quasi-supported, and (3) explicitly unsupported in a
policy doc is sufficient.  But it's not like any of us are going to stop
someone from codifying that in configure.ac to require a flag.

-gps





[Python-Dev] Re: PEP 647 (type guards) -- final call for comments

2021-02-14 Thread Gregory P. Smith
(there's a small TL;DR towards the end of my reply if you want to read in
reverse to follow my thought process from possible conclusions to how I got
there - please don't reply without reading the whole thing first)

*TL;DR of my TL;DR* - Not conveying bool-ness directly in the return
annotation is my only complaint.  A BoolTypeGuard spelling would alleviate
that.  I'm +0.3 now.  Otherwise I elaborate on other guarding options and
note a few additional Rejected/Postponed/Deferred Ideas sections that the
PEP should mention as currently out of scope: Unconditional guards,
multiple guarded parameters, and type mutators.  Along the way I work my
way towards suggestions for those, but I think they don't belong in _this_
PEP and could serve as input for future ones if/when desired.

On Sun, Feb 14, 2021 at 8:53 AM Paul Bryan  wrote:

> I'm a +1 on using Annotated in this manner. Guido mentioned that it was
> intended for only third-parties though. I'd like to know more about why
> this isn't a good pattern for use by Python libraries.
>
> On Sun, 2021-02-14 at 16:29 +0100, Adrian Freund wrote:
>
> Here's another suggestion:
>
> PEP 593 introduced the `Annotated` type annotation. This could be used to
> annotate a TypeGuard like this:
>
> `def is_str_list(val: List[object]) -> Annotated[bool,
> TypeGuard(List[str])]`
>
>
I like Annotated better than not having it for the sake of not losing the
return type.  BUT I still feel like this limits things too much and
disconnects the information about what parameter(s) are transformed. It
also doesn't solve the problem of why the guard _must_ be tied to the
return value. Clearly sometimes it is desirable to do that. But in many
other scenarios the act of not raising an exception is the narrowing
action: i.e. it should be declared as always happening.  Nothing in the
above annotation reads explicitly to me as saying that the return value
determines the type outcome.


>
> Note that I used ( ) instead of [ ] for the TypeGuard, as it is no longer
> a type.
>
> This should fulfill all four requirements, but is a lot more verbose and
> therefore also longer.
> It would also be extensible for other annotations.
>
> For the most extensible approach both `-> TypeGuard(...)` and `->
> Annotated[bool, TypeGuard(...)]` could be allowed, which would open the
> path for future non-type-annotations, which could be used regardless of
> whether the code is type-annotated.
>
>
> --
> Adrian
>
> On February 14, 2021 2:20:14 PM GMT+01:00, Steven D'Aprano <
> st...@pearwood.info> wrote:
>
> On Sat, Feb 13, 2021 at 07:48:10PM -, Eric Traut wrote:
>
> I think it's a reasonable criticism that it's not obvious that a
> function annotated with a return type of `TypeGuard[x]` should return
> a bool.
>
>
> [...]
>
> As Guido said, it's something that a developer can easily
> look up if they are confused about what it means.
>
>
>
> Yes, developers can use Bing and Google :-)
>
> But it's not the fact that people have to look it up. It's the fact that
> they need to know that this return annotation is not what it seems, but
> a special magic value that needs to be looked up.
>
> That's my objection: we're overloading the return annotation to be
> something other than the return annotation, but only for this one
> special value. (So far.) If you don't already know that it is special,
> you won't know that you need to look it up to learn that its special.
>
>
>  I'm open to alternative formulations that meet the following requirements:
>
>  1. It must be possible to express the type guard within the function
>  signature. In other words, the implementation should not need to be
>  present. This is important for compatibility with type stubs and to
>  guarantee consistent behaviors between type checkers.
>
>
>
> When you say "implementation", do you mean the body of the function?
>
> Why is this a hard requirement? Stub files can contain function
> bodies, usually `...` by convention, but alternatives are often useful,
> such as docstrings, `raise NotImplementedError()` etc.
>
> https://mypy.readthedocs.io/en/stable/stubs.html
>
> I don't think that the need to support stub files implies that the type
> guard must be in the function signature. Have I missed something?
>
>
> 2. It must be possible to annotate the input parameter types _and_ the
> resulting (narrowed) type. It's not sufficient to annotate just one or
> the other.
>
>
>
> Naturally :-)
>
> That's the whole point of a type guard, I agree that this is a truly
> hard requirement.
>
>
> 3. It must be possible for a type checker to determine when narrowing
> can be applied and when it cannot. This implies the need for a bool
> response.
>
>
>
> Do you mean a bool return type? Sorry Eric, sometimes the terminology
> you use is not familiar to me and I have to guess what you mean.
>
>
> 4. It should not require changes to the grammar because that would
> prevent this from being adopted in most code bases for many years.
>
>
>
> Fair 

[Python-Dev] Re: PEP 647 (type guards) -- final call for comments

2021-02-12 Thread Gregory P. Smith
My primary reaction seems similar to Mark Shannon's.

When I see this code:

def is_str_list(val: List[object]) -> TypeGuard[List[str]]:
...

I cannot tell what it returns.  There is no readable indication that
this returns a boolean, so the reader cannot immediately see how to use the
function.  In fact my first reaction is to assume it returns some form of
List[str].  Which it most definitely does not do.
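
For reference, the full idiom as the PEP proposes it (a sketch; it needs a
type checker implementing the draft for the narrowing to mean anything, and
`greet_all` is just an invented caller):

```
from typing import List, TypeGuard  # TypeGuard per the draft PEP 647

def is_str_list(val: List[object]) -> TypeGuard[List[str]]:
    # Returns a plain bool at runtime; the annotation tells the type
    # checker to narrow val in the caller's True branch.
    return all(isinstance(x, str) for x in val)

def greet_all(val: List[object]) -> None:
    if is_str_list(val):
        print(", ".join(val))  # val narrowed to List[str] here
```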

Additionally, it seems like restricting this concept of a type guard to
only a function that returns a bool is wrong.
It is also a natural idiom to encounter functions that raise an exception
if their type conditions aren't met.
Those should also narrow types, after they return, in the codepath where no
exception was raised.  Keep it simple and assume that catching any exception
from their call ends their narrowing?

A narrowing is meta-information about a typing side effect, it isn't a type
itself.  It isn't a value.  So replacing a return type with it seems like
too much.  But if we do wind up going that route, at least make the name
indicate that the return value is a bool.  i.e.: def parse_num(thing: Any)
-> NarrowingBool[float]

The term TypeGuard is too theoretical for anyone reading code.  I don't
care if TypeScript uses it in their language... looking that up at a quick
glance -
https://www.typescriptlang.org/docs/handbook/advanced-types.html#user-defined-type-guards
- it doesn't look like they ever use the term TypeGuard in the language
syntax itself? good.

Perhaps this would be better expressed as an annotation on the argument(s)
that the function narrows?

def is_str_list(val: Constrains[List[object]:List[str]]) -> bool:
...

I'm using Constrains as an alternative to Narrows here... still open to
suggestions.  But I do not think we should get hung up on TypeScript and
assume that TypeGuard is somehow a good name.  This particular proposed
example uses : slice notation rather than a comma as a proposal that may be
more readable.  I'd also consider using the 'as' keyword instead of , or :
but that wouldn't even parse today.  *Really* what might be nice for this
is our -> arrow.

def assert_str_list(var:  Narrows[List[object] -> List[str]]) -> None:
...

as the arrow carries the implication of something effective "after" the
call. I suspect adding new parsing syntax isn't what anyone had in mind for
now though.  : seems like a viable stopgap, whether syntax happens or not.
(I'm intentionally avoiding the comma as an exploration in readability)

The key point I'm trying to represent is that declaring it per argument
allows for more expressive functions and doesn't interfere with their
return value (if they even return anything). Clearly there's a use case
that _does_ want to make the narrowing conditional on the return value
boolness.  I still suggest expressing that on the arguments themselves.
More specific flavors of the guarding parameter such as ConstrainsWhenTrue
or NarrowsWhenFalse perhaps.


def is_set_of(val: Constrains[Set[Any]:Set[_T]], type: Type[_T]) -> bool:
   ...

which might also be written more concisely as:

def is_set_of(val: Set[Constrains[Any:_T]], type: Type[_T]) -> bool:

If we allow the full concept.

hopefully food for thought,  i'm -1 right now.
-gps


On Fri, Feb 12, 2021 at 2:34 AM Mark Shannon  wrote:

> Hi,
>
> First of all, sorry for not commenting on this earlier.
> I only became aware of this PEP yesterday.
>
>
> I like the general idea of adding a marker to show that a boolean
> function narrows the type of one (or more?) of its arguments.
> However, the suggested approach seems clunky and impairs readability.
>
> It impairs readability, because it muddles the return type.
> The function in the example returns a bool.
> The annotation is also misleading as the annotation is on the return
> type, not on the parameter that is narrowed.
>
> At a glance, most programmers should be able to work out what
>
> def is_str_list(val: List[object]) -> bool:
>
> returns.
>
> But,
>
> def is_str_list(val: List[object]) -> TypeGuard[List[str]]:
>
> is likely to confuse and require careful reading.
> Type hints are for humans as well as type checkers.
>
>
>
> Technical review.
> -
>
> For an annotation of this kind to be useful to a checker, that checker
> must perform both flow-sensitive and call-graph analysis. Therefore it
> is theoretically possible to remove the annotation altogether, using the
> following approach:
>
> 1. Scan the code looking for functions that return boolean and
> potentially narrow the type of their arguments.
> 2. Inline those functions in the analysis
> 3. Rely on pre-existing flow-senstive analysis to determine the correct
> types.
>
> However, explicit is better than implicit. So some sort of annotation
> seems sensible.
>
> I would contend that the minimal:
>
> @narrows
> def is_str_list(val: List[object]) -> bool:
>
> is sufficient for a checker, as the checker can inline anything marked
> @narrows.
> Plus, it does not mislead 

[Python-Dev] Re: Security releases of CPython

2021-02-12 Thread Gregory P. Smith
On Thu, Feb 11, 2021 at 8:29 PM Terry Reedy  wrote:

> On 2/11/2021 3:23 PM, Michał Górny wrote:
> > Hello,
> >
> > I'm the primary maintainer of CPython packages in Gentoo. I would like
> > to discuss possible improvement to the release process in order to
> > accelerate releasing security fixes to users.
> >
> > I feel that vulnerability fixes do not make it to end users fast enough.
>
> When we introduce a bad regression in a release, including a
> severe-enough security vulnerability, and discover it soon enough, we
> have sometimes made immediately releases.
>

Indeed. What Terry said. There is risk in doing that. It means the solution
is more rushed and hasn't been vetted nearly as well. So without a
compelling severe reason we're unlikely to.

I would expect distros shipping their own python runtime packages to track
security issues and their fixes and backport those (generally easy) to
their own packages before an official release is ready anyway. The big
distros definitely do this. How to deal with this is up to the individual
policies of any given OS distro's package owners.

-gps

Beyond that, your proposal to add several releases per Python version,
> perhaps double what we do now, runs up against the limits of volunteer
> time and effort.  As it is now, becoming a release manager is a 7 year
> commitment, with at least one release about every other month for the
> first 4.  There are also the 2 people who produce the Windows and macOS
> installers.  I believe each has done it for at least 7 or 8 years
> already.  Releases are not just a push of a button.  Make the release
> job too onerous, and there might not be any more volunteers.
>
> > For example, according to the current release schedules for 3.9 and 3.8,
> > the bugfix releases are planned two months apart. While the release is
> > expected to happen in the next few days, both versions are known to be
> > vulnerable for almost a month!
> >
> > Ironically, things look even worse for security-supported versions.
> > Please correct me if I'm wrong but both 3.7 and 3.6 seem to be behind
> > schedule (planned for Jan 15th), and they are known to be vulnerable
> > since mid-October.
> >
> > In my opinion, this causes three issues:
> >
> > 1. Users using official releases are exposed to security vulnerabilities
> > for prolonged periods of time.
> >
> > 2. When releases happen, security fixes are often combined with many
> > other changes. This causes problems for distribution maintainers who, on
> > one hand, would like to deploy the security fixes to production versions
> > ASAP, and on the other, would prefer that the new version remained in
> > testing for some time due to the other changes.
> >
> > 3. Effectively, it means that distribution developers need to track
> > and backport security fixes themselves. In the end, this means a lot of
> > duplicate work.
>
> Perhaps people doing duplicate work could get together to eliminate some of
> the duplication.  It should be possible to filter security commits from the
> python-checkins list by looking at the news entries and if need be, the bpo
> issues.
>
> > I think that security fixes are important enough to justify not sticking
> > to a strict release schedule. Therefore, I would like to propose that if
> > vulnerability fixes are committed, new releases are made
> > as frequently as necessary and as soon as possible (i.e. possibly giving
> > some time for testing) rather than according to a strict schedule.
> >
> > Furthermore, I think that at least for branches that are in higher level
> > of maintenance than security, it could make sense to actually make
> > security releases (e.g. 3.9.1.x) that would include only security fixes
> > without other changes.
>
> --
> Terry Jan Reedy
>
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/4S5LOTVJZUA5PQ5TRGQFCWHTGA4BOXBO/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/Y6KYIFEMECIMWM7BYYSCJ7AMW2FVNUWK/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Why aren't we allowing the use of C11?

2021-01-28 Thread Gregory P. Smith
On Thu, Jan 28, 2021 at 10:52 AM Charalampos Stratakis 
wrote:

>
>
> - Original Message -
> > From: "Mark Shannon" 
> > To: "Python Dev" 
> > Sent: Thursday, January 28, 2021 5:26:37 PM
> > Subject: [Python-Dev] Why aren't we allowing the use of C11?
> >
> > Hi everyone,
> >
> > PEP 7 says that C code should conform to C89 with a subset of C99
> allowed.
> > It's 2021 and all the major compilers support C11 (ignoring the optional
> > parts).
> >
> > C11 has support for thread locals, static asserts, and anonymous structs
> > and unions. All useful features.
> >
> > Is there a good reason not to start using C11 now?
> >
> > Cheers,
> > Mark.
> >
> >
> > ___
> > Python-Dev mailing list -- python-dev@python.org
> > To unsubscribe send an email to python-dev-le...@python.org
> > https://mail.python.org/mailman3/lists/python-dev.python.org/
> > Message archived at
> >
> https://mail.python.org/archives/list/python-dev@python.org/message/PLXETSQE7PRFXBXN2QY6VNPKUTM6I7OD/
> > Code of Conduct: http://python.org/psf/codeofconduct/
> >
> >
>
> Depends what platforms the python core developers are willing to support.
>
> Currently downstream on e.g. RHEL7 we compile versions of CPython under
> gcc 4.8.2 which does not support C11.
>
> In addition the manylinux2014 base image is also based on CentOS 7, which
> wouldn't support C11 as well.
>

I *suspect* this is the primary remaining technical reason not to adopt C11.

But aren't things like manylinux2014 defined by the contents of a centrally
maintained docker container?
If so (I'm not one who knows how wrong my guess likely is...), can we get
those updated to include a more modern compiler so we can move on sooner
than the deprecation of manylinux2014?

-gps


>
> --
> Regards,
>
> Charalampos Stratakis
> Software Engineer
> Python Maintenance Team, Red Hat
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/PCOZN5NHNZ7HIANOKQQ7GQQMV3A3JF72/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/MNLR45HD4X6EKYO2ARXLOF7OGTKODOJG/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Bumping minimum Sphinx version to 3.2 for cpython 3.10?

2021-01-13 Thread Gregory P. Smith
My take on this is to keep it simple for CPython: Require the newer version
of Sphinx on all branches we backport docs changes to.

We, the CPython project, determine the exact version(s) of Sphinx a
documentation build for a given version of CPython requires.  If Sphinx has
unfortunately made a breaking change (as appears to be the case?), we
should update the docs on every version that receives doc backports to use
the most modern Sphinx (along with the docs themselves).

We should explicitly not care at all about people using external sphinx
installs that are not fetched at the correct version via our Doc tree's
Doc/requirements.txt.
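
The pin in question is just a requirements line; a minimal sketch, using the
3.2.1 version mentioned later in this thread (the real file pins whatever
the branch actually needs):

```
# Doc/requirements.txt (illustrative)
sphinx==3.2.1
```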

That some distros want to use their own incompatible-version packaged
version of sphinx to build our docs is on them to handle on their own.  Not
on the CPython project.  That was never a guaranteed thing that would
work.  Until now, they merely got lucky.  This is a docs build-time
dependency with no impact on anything at runtime so we should be free to
change versions as we see fit.

-gps


On Wed, Jan 13, 2021 at 8:25 AM Carol Willing  wrote:

> Hi Julien,
>
> I think that there are two items to consider:
> - `needs_sphinx` in `conf.py`
> - setting for Sphinx in `cpython/Doc/requirements.txt`
>
> I believe that the `needs_sphinx` field identifies the minimal version of
> Sphinx that can be used to build the docs. The actual version that is used
> to build the docs is in the `requirements.txt` file.
>
>
> On Wed, Jan 13, 2021 at 7:29 AM Serhiy Storchaka 
> wrote:
>
>> 12.01.21 22:38, Julien Palard via Python-Dev wrote:
>> > During the development of cpython 3.10, Sphinx was bumped to 3.2.1.
>> >
>> > The problem is Sphinx 3 has some incompatibilities with Sphinx 2, some
>> > that we could work around; some are a bit harder, so we may need to bump
>> > `needs_sphinx = '3.2'` (currently it is 1.8).
>>
>> The Sphinx version in the current Ubuntu LTS (20.04) is 1.8.5. Wouldn't it
>> cause problems with building the documentation on Ubuntu?
>> ___
>> Python-Dev mailing list -- python-dev@python.org
>> To unsubscribe send an email to python-dev-le...@python.org
>> https://mail.python.org/mailman3/lists/python-dev.python.org/
>> Message archived at
>> https://mail.python.org/archives/list/python-dev@python.org/message/EJWTFPHZUX522RNCTIGAAOHWD23VD7NQ/
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/S53K4P272EE73EF2NLZWS5DVNR6VJG3R/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/Y2ZCBJAF7NL5CP7GDGYVAICBYBIVDTC2/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: unittest of sequence equality

2020-12-22 Thread Gregory P. Smith
Numpy chose to violate the principle of equality by having __eq__ not
return a bool. So a numpy type can't be used reliably outside of the numpy
DSL.
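
A minimal demonstration of the mismatch, using standard NumPy behavior:

```
import numpy as np

a = np.array([1, 2, 3])

print(a == [1, 2, 3])  # [ True  True  True] -- an array, not a bool
try:
    bool(a == [1, 2, 3])  # so boolean contexts can't use the result
except ValueError as err:
    print(err)  # "The truth value of an array ... is ambiguous."
```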

-gps

On Tue, Dec 22, 2020, 11:51 AM Alan G. Isaac  wrote:

> Here, `seq1 == seq2` produces a boolean array (i.e., an array of boolean
> values).
> hth, Alan Isaac
>
>
> On 12/22/2020 2:28 PM, Ivan Pozdeev via Python-Dev wrote:
> >
> > You sure about that? For me, bool(np.array) raises an exception:
> >
> > In [12]: np.__version__
> > Out[12]: '1.19.4'
> >
> > In [11]: if [False, False] == np.array([False, False]): print("foo")
> > <...>
> > ValueError: The truth value of an array with more than one element is
> > ambiguous. Use a.any() or a.all()
> >
> >
>
>
>
> > On 22.12.2020 21:52, Alan G. Isaac wrote:
> >> The following test fails because because `seq1 == seq2` returns a
> (boolean) NumPy array whenever either seq is a NumPy array.
> >>
> >> import unittest import numpy as np
> unittest.TestCase().assertSequenceEqual([1.,2.,3.], np.array([1.,2.,3.]))
> >>
> >> I expected `unittest` to rely only on features of a
> `collections.abc.Sequence`, which based on
> https://docs.python.org/3/glossary.html#term-sequence, I
> >> believe are satisfied by a NumPy array. Specifically, I see no
> requirement that a sequence implement __eq__ at all much less in any
> particular way.
> >>
> >> In short: a test named `assertSequenceEqual` should, I would think,
> work for any sequence and therefore (based on the available documentation)
> should not
> >> depend on the class-specific implementation of __eq__.
> >>
> >> Is that wrong?
> >>
> >> Thank you, Alan Isaac
> >
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/74CUML37OCQXXKMVUFZORMSCOYP2W5GH/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/FNMNQVMKCN3F3XHLESZLV2NT6XGV2DIM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: macOS issues with 3.8.7rc1

2020-12-09 Thread Gregory P. Smith
As a meta question: Is there a good reason to support binaries running on
macOS earlier than ~ $latest_version-1?

Aren't systems running those old releases, rather than upgrading,
unsupported by Apple, never to be patched, and thus not wise to even have on
a network?

Yes, that means some very old hardware becomes useless as Apple drops
support. But that is what people signed up for when they bought it. Why
should that be our problem?

(It sounds like y'all will make it work, that's great! I'm really just
wondering where the motivation comes from)

-gps

On Wed, Dec 9, 2020, 9:25 AM Gregory Szorc  wrote:

> On Wed, Dec 9, 2020 at 4:13 AM Ronald Oussoren 
> wrote:
>
>>
>>
>> On 8 Dec 2020, at 19:59, Gregory Szorc  wrote:
>>
>> Regarding the 3.8.7rc1 release, I wanted to raise some issues regarding
>> macOS.
>>
>> Without the changes from https://github.com/python/cpython/pull/22855
>> backported, attempting to build a portable binary on macOS 11 (e.g. by
>> setting `MACOSX_DEPLOYMENT_TARGET=10.9`) results in a myriad of `warning:
>> 'XXX' is only available on macOS 10.13 or newer
>> [-Wunguarded-availability-new]` warnings during the build. This warning
>> could be innocuous if there is run-time probing in place (the symbols in
>> question are weakly linked, which is good). But if I'm reading the code
>> correctly, run-time probing was introduced by commits like eee543722 and
>> isn't present in 3.8.7rc1.
>>
>> I don't have a machine with older macOS sitting around to test, but I'm
>> fairly certain the lack of these patches means binaries built on macOS 11
>> will blow up at run-time when run on older macOS versions.
>>
>> These same patches also taught CPython to build and run properly on Apple
>> ARM hardware. I suspect some people will care about these being backported
>> to 3.8.
>>
>> We know. Backporting the relevant changes to 3.8 is taking more time than
>> I had hoped. It doesn’t help that I’ve been busy at work and don’t have as
>> much energy during the weekend as I’d like.
>>
>> The backport to 3.9 was fairly easy because there were few changes
>> between master and the 3.9 branch at the time. Sadly there have been
>> conflicting changes since 3.8 was forked (in particular in posixmodule.c).
>>
>> The current best practice for building binaries that work on macOS 10.9
>> is to build on that release (or rather, with that SDK).  That doesn’t help
>> if you want to build Universal 2 binaries though.
>>
>
> Thank you for your hard work devising the patches and working to backport
> them.
>
> I personally care a lot about these patches and I have the technical
> competency to perform the backport. If you need help, I could potentially
> find time to hack on it. Just email me privately (or ping @indygreg on
> GitHub) and let me know. Even if they don't get into 3.8.7, I'll likely
> cherry pick the patches for
> https://github.com/indygreg/python-build-standalone. And I'm sure other
> downstream packagers will want them as well. So having them in an
> unreleased 3.8 branch is better than not having them at all.
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/5AWFX2POTPNVW72VUPCPTJIOA6AOSVWY/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/Q45VNQYOYXTRRTA26Q4M2WNXPFKL4S2O/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: __future__ annotations loses closure scope

2020-12-08 Thread Gregory P. Smith
What is the utility of a type annotation when the thing it refers to cannot
exist?

Deferred annotation lookups are intended to be something static analysis
can make sense of, but they may have no useful meaning at runtime.

No nesting required:

```
from __future__ import annotations
class X:
  ...

def foo(hi: X):
  ...

del X
```

Now try analyzing foo at runtime...  I assume "Boom" with that NameError
again?  (On a phone, can't try it now.)
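
Spelled out as a runnable whole (this does raise, per PEP 563's
string-annotation semantics):

```
from __future__ import annotations
import typing

class X:
    ...

def foo(hi: X):
    ...

del X

try:
    typing.get_type_hints(foo)  # evaluates the stored string "X"
except NameError as err:
    print(err)  # name 'X' is not defined
```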

I believe this isn't a problem get_type_hints() can ever solve.

Code that does this isn't what I'd call "reasonably" structured for use
with type hints.

If anything, type checkers should try to warn about it?

-gps

On Tue, Dec 8, 2020, 7:03 PM Paul Bryan  wrote:

> Let's try an example that static type checkers should have no problem with:
>
> Python 3.9.0 (default, Oct  7 2020, 23:09:01)
> [GCC 10.2.0] on linux
> Type "help", "copyright", "credits" or "license" for more information.
> >>> from __future__ import annotations
> >>>
> >>> def make_a_class():
> ...     class A:
> ...         def get_b(self) -> B:
> ...             return B()
> ...     class B:
> ...         def get_a(self) -> A:
> ...             return A()
> ...     return A
> ...
> >>> A = make_a_class()
> >>> a = A()
> >>>
> >>> import typing
> >>> typing.get_type_hints(a.get_b)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/usr/lib/python3.9/typing.py", line 1386, in get_type_hints
>     value = _eval_type(value, globalns, localns)
>   File "/usr/lib/python3.9/typing.py", line 254, in _eval_type
>     return t._evaluate(globalns, localns, recursive_guard)
>   File "/usr/lib/python3.9/typing.py", line 493, in _evaluate
>     eval(self.__forward_code__, globalns, localns),
>   File "<string>", line 1, in <module>
> NameError: name 'B' is not defined
> >>>
>
> On Tue, 2020-12-08 at 18:48 -0800, Guido van Rossum wrote:
>
> Yeah, static type checkers won't like it regardless.
>
> On Tue, Dec 8, 2020 at 6:39 PM Paul Bryan  wrote:
>
> It appears that when `from __future__ import annotations` is in effect, a
> type hint annotation derived from a closure loses scope.
>
> Simplistic example:
>
> Python 3.9.0 (default, Oct  7 2020, 23:09:01)
> [GCC 10.2.0] on linux
> Type "help", "copyright", "credits" or "license" for more information.
> >>> def make_a_class(data_type):
> ...     class Foo:
> ...         def put_data(self, data: data_type):
> ...             self.data = data
> ...     return Foo
> ...
> >>> import typing
> >>> foo = make_a_class(str)()
> >>> typing.get_type_hints(foo.put_data)
> {'data': <class 'str'>}
> >>>
>
> If I add a single import to the top, it breaks:
>
> Python 3.9.0 (default, Oct  7 2020, 23:09:01)
> [GCC 10.2.0] on linux
> Type "help", "copyright", "credits" or "license" for more information.
> >>> from __future__ import annotations  # added this line
> >>> def make_a_class(data_type):
> ...     class Foo:
> ...         def put_data(self, data: data_type):
> ...             self.data = data
> ...     return Foo
> ...
> >>> import typing
> >>> foo = make_a_class(str)()
> >>> typing.get_type_hints(foo.put_data)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/usr/lib/python3.9/typing.py", line 1386, in get_type_hints
>     value = _eval_type(value, globalns, localns)
>   File "/usr/lib/python3.9/typing.py", line 254, in _eval_type
>     return t._evaluate(globalns, localns, recursive_guard)
>   File "/usr/lib/python3.9/typing.py", line 493, in _evaluate
>     eval(self.__forward_code__, globalns, localns),
>   File "<string>", line 1, in <module>
> NameError: name 'data_type' is not defined
> >>>
>
>
> I don't see how I can supply the closure scope as localns to
> get_type_hints. Any suggestions? Is constructing a
> (dynamically-type-annotated) class in a function like this an anti-pattern?
>
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/5RK6VXF263F5I4CU7FUMOGOYN2UQG73Q/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
>
>
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/NRH4HBD36WDIP4WR2L4TLTOYMQL2NUFV/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
>
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived 

[Python-Dev] Re: nanosecond stat fields, but not os.path methods ?

2020-12-07 Thread Gregory P. Smith
If the precision is available via OS APIs, this is mostly an issue+PR away
from being implemented by someone who cares.

FAT32 has a two billion nanosecond resolution IIRC. :P
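
The wrapper being discussed would be trivial; a sketch, where getmtime_ns is
a hypothetical name and not an existing os.path function:

```
import os

def getmtime_ns(path):
    """Hypothetical ns-precision analogue of os.path.getmtime()."""
    return os.stat(path).st_mtime_ns  # integer nanoseconds since the epoch

print(getmtime_ns("."))
```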

-gps

On Mon, Dec 7, 2020 at 8:22 AM David Mertz  wrote:

> Are there any filesystems that can actually record a meaningful ns
> modification time?  I find discussions claiming this:
>
>
>- XFS and EXT3: second precision
>- EXT4: millisecond precision
>- NTFS: 100ns precision
>- APFS: 1 ns precision
>
>
> But also notes that the precision is likely to exceed the accuracy by many
> times on real systems.
>
> On Mon, Dec 7, 2020 at 2:34 PM Mats Wichmann  wrote:
>
>>
>> there are stat fields now for ns precision, e.g. st_mtime now has an
>> analogue st_mtime_ns.  But os.path didn't grow corresponding methods -
>> there's an os.path.getmtime but not _ms. Was that intentional?  The
>> wrappers in genericpath.py are trivial and arguably aren't particularly
>> needed, but still curious...
>> ___
>> Python-Dev mailing list -- python-dev@python.org
>> To unsubscribe send an email to python-dev-le...@python.org
>> https://mail.python.org/mailman3/lists/python-dev.python.org/
>> Message archived at
>> https://mail.python.org/archives/list/python-dev@python.org/message/CK3S2DYI3PKZ7VGRFEO4PKVLZCPUTR6N/
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
>
>
> --
> The dead increasingly dominate and strangle both the living and the
> not-yet born.  Vampiric capital and undead corporate persons abuse
> the lives and control the thoughts of homo faber. Ideas, once born,
> become abortifacients against new conceptions.
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/ZAPQSEQHL3KO7AALG7NQTIA2BPG753AN/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/WJU7EMOTVCYA36CFBBEPR3DOIX4KV77R/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Drop Solaris, OpenSolaris, Illumos and OpenIndiana support in Python

2020-10-30 Thread Gregory P. Smith
On Fri, Oct 30, 2020 at 2:30 PM Garrett D'Amore via Python-Dev <
python-dev@python.org> wrote:

> I’m not on this list.  But I have offered to help - if there are tasks
> that need to be done to help this I can help put the weight of a commercial
> entity behind it whether that involves assigning our developers to work on
> this, helping pay for external developers to do so, or assisting with
> access to machine resources.
>
> For the record there are multiple  illumos distributions and most are both
> free and run reasonably well in virtual machines. Claiming that developers
> don’t have access as a reason to discontinue the port is a bit
> disingenuous. Anyone can get access if they want and if they can figure out
> how to login and use Linux then this should be pretty close to trivial for
> them.
>

Thanks!  It usually isn't just about access.  This email thread and related
tweets appear to have served their purpose: To drum up volunteers+resources
from the otherwise potentially impacted communities.  The valuable thing is
developer time.

Access: I took it upon myself to spin up some Solaris-derivative VMs for
Python dev things in the (now distant) past.  It wasn't a positive
experience, I won't do it again.  Bonus on top of that: Oracle, the owner
of Solaris, is *still* actively attempting to destroy the entire software
industry .
Working on anything attempting to directly benefit them is a major ethical
violation for me.  Others make their own choices.

I won't even spin up major BSD VMs anymore for a similar reason.  It isn't
a good positive use of my time, even despite having an enjoyable past with
those OSes and knowing several past and present Free/Net/OpenBSD core devs.

I look forward to new *Solaris buildbot(s) joining the fleet,
-gps
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/7Y7BXPMRPYJE6CUCMIYBY3EXOKQLMIVW/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Drop Solaris, OpenSolaris, Illumos and OpenIndiana support in Python

2020-10-30 Thread Gregory P. Smith
On Fri, Oct 30, 2020 at 1:14 PM Raymond Hettinger <
raymond.hettin...@gmail.com> wrote:

> FWIW, when the tracker issue landed with a PR, I became concerned that it
> would be applied without further discussion and without consulting users.


An issue and a PR don't simply mean "it is happening".  It is an
effective way to communicate and demonstrate a possible change.  It appears
to have served its purpose.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/BSN5BIHQWBXZ2G5EXVWKEIEBC4X6KJYO/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Drop Solaris, OpenSolaris, Illumos and OpenIndiana support in Python

2020-10-30 Thread Gregory P. Smith
On Fri, Oct 30, 2020 at 11:03 AM Brett Cannon  wrote:

>
> On Fri, Oct 30, 2020 at 6:37 AM Pablo Galindo Salgado 
> wrote:
>
>> >Two volunteer core developers and at least one buildbot would help a
>> > lot to ensure that Python is working on Solaris for real, and reduce
>> > the number of open Solaris issues. If it happens, I'm perfectly fine
>> > with keeping Solaris support.
>> > I also hope that more people will contribute to maintain the code, not
>> > only Ronald and Jesús. Many people wrote to me that Python is a key
>> > component of Illumos (the package manager is written in Python). So
>> > maybe Python on Illumos deserves some love?
>>
>> If we decide to take the proposal seriously, I think it may be beneficial
>> to establish a deadline for having the buildbot green and the issues
>> fixed,
>> so the promise of having "real" support at some point in the future does
>> actually manifest itself or it does not block following Victor's proposal
>> if it
>> actually does not happen.
>>
>
> I agree, and so maybe it's time to more formally establish what platforms
> we do support, such that any platform not listed is not considered
> supported. We could reorient PEP 11 towards that and for each platform we
> list:
>
> 1. Which buildbot must be green for that platform to be considered
> supported
> 2. Who is in charge of submitting patches to keep that buildbot green (and
> what that minimum number of people is)
> 3. We have an agreed upon timeline that if a buildbot stays red for longer
> than the specified time then the platform is considered abandoned
> 4. We all agree to prioritize patches from the people listed for a
> platform to fix their buildbots if they are not core devs
> 5. We have a clear definition of "platform" (e.g. is "Linux" a platform,
> or does it break down to individual distros?)
>
> Would there be anything else we would want for a platform for us to be
> willing to consider it supported?
>

If we're going to explicitly list Platforms, that gets messy to maintain.
It becomes important to draw a distinction between types of support.
Support is not a boolean.  We're autoconf based on the posix side (like it
or not), which leads to a lot of things that for the most part just
work, regardless of support.  That is working as intended.

Generally, recent enough versions of each of: Linux distros (all
architectures), macOS, Windows, and AIX.  But that alone does not constitute
a platform.

But to define things explicitly you need a definition of what a Platform is:

is it an OS kernel? what version(s)? compiler toolchain? what versions? Is
it also the C library? what version(s) of which libc variants (linux has at
least three)? other system libraries? what versions? specific CPU
architectures? What versions?  The matrix gets huge.

It can be defined.

But then you need to clarify the different levels of not being in that
matrix. "on the list, all clear" vs "likely works, no guarantees" vs
"expect lots of test failures" vs "expect extension module build failures"
vs "expect the core interpreter build to fail".

We've not done this in the past.  Would it even add value?

It is much easier to look at the list of buildbots tagged "stable" and
consider those "on the list, all clear", with everything else in an
unspecified one of the other four-plus categories, without arguments over
which sub-support list any given thing is in.

-gps


> -Brett
>
>
>>
>> On Fri, 30 Oct 2020 at 12:54, Victor Stinner  wrote:
>>
>>> Hi Ronald,
>>>
>>> Le ven. 30 oct. 2020 à 12:59, Ronald Oussoren 
>>> a écrit :
>>> > I agree. That’s what I tried to write, its not just providing a
>>> buildbot but also making sure that it keeps working and stays green.
>>>
>>> This is really great!
>>>
>>> Jesús Cea Avión is also a volunteer to maintain the Solaris (see the
>>> bpo).
>>>
>>> Moreover, it seems like some people would like to provide servers to
>>> run a Solaris buildbot. Example:
>>> https://bugs.python.org/issue42173#msg379895
>>>
>>> Two volunteer core developers and at least one buildbot would help a
>>> lot to ensure that Python is working on Solaris for real, and reduce
>>> the number of open Solaris issues. If it happens, I'm perfectly fine
>>> with keeping Solaris support.
>>>
>>> I also hope that more people will contribute to maintain the code, not
>>> only Ronald and Jesús. Many people wrote to me that Python is a key
>>> component of Illumos (the package manager is written in Python). So
>>> maybe Python on Illumos deserves some love?
>>>
>>> There are many ways to contribute to the Solaris support of Python:
>>>
>>> * Comment Solaris issues (bugs.python.org, search for "Solaris" in the
>>> title)
>>> * Propose PRs to fix issues or implement Solaris specific features
>>> * Review Solaris PRs
>>> * Provide Solaris servers accessible to Python core developers (SSH
>>> access)
>>> * Donate to the CPython project:
>>>
>>>   * https://www.python.org/psf/donations/python-dev/
>>>   * 

[Python-Dev] Re: Drop Solaris, OpenSolaris, Illumos and OpenIndiana support in Python

2020-10-30 Thread Gregory P. Smith
On Fri, Oct 30, 2020 at 8:26 AM Sebastian Wiedenroth 
wrote:

> Hi,
>
> I've already commented on the issue, but want to make a few more points
> here as well.
>
> Removing Solaris support would not only impact Oracle Solaris but the open
> source illumos community as well.
> Both systems share the "SunOS" uname for historical reasons. Removing
> support would be a disaster for us.
>
> We are a small community compared to Linux, but there are illumos
> distributions (OpenIndiana, OmniOS, SmartOS, Tribblix, ...) that have many
> python users.
> It's also an essential part of our tooling and package management.
>
> I've offered to host an illumos buildbot before but it was not accepted
> because not all tests passed at that time.
> There is active work going on to get this debugged and fixed.
> If it is acceptable to skip some tests we can have the buildbot online
> tomorrow.
>

FWIW making a PR that adds platform specific test skips or expected failure
decorators is a good way to start bringing up new buildbots.  It serves as
effective documentation of what does and doesn't work that lives directly
in the codebase, in a way that can be evolved over time as more is made to
work.
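
For illustration, a platform-specific skip of this kind looks something like
the following (the test itself is made up; sys.platform starts with "sunos"
on Solaris/illumos):

```
import sys
import unittest

@unittest.skipIf(sys.platform.startswith("sunos"),
                 "not yet passing on Solaris/illumos")
class SomePlatformSensitiveTest(unittest.TestCase):
    def test_something(self):
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main()
```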

-gps

> On the ticket many users and developers have offered to step up, myself
> included.
> In our IRC channel we also had some discussions yesterday and we're hoping
> to bring more patches upstream soon.
>
> If there is interest in ssh access to illumos systems that is also
> something I can offer.
>
> Please let us know if there is more we need to do to keep python on
> illumos supported.
>
> Best regards,
> Sebastian
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/UL4MQAKQZOIEEA2DHNJNB4BB4J3QR7FY/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/S3MDFH6PHZR2TN4NYLM4PBNQPHQZZMWW/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Drop Solaris, OpenSolaris, Illumos and OpenIndiana support in Python

2020-10-29 Thread Gregory P. Smith
I agree, remove Solaris support. Nobody willing to contribute seems
interested.

-gps

--
blame half the typos on my phone.

On Thu, Oct 29, 2020, 2:50 PM Victor Stinner  wrote:

> Hi,
>
> I propose to drop the Solaris support in Python to reduce the Python
> maintenance burden:
>
>https://bugs.python.org/issue42173
>
> I wrote a draft PR to show how much code could be removed (around 700
> lines in 65 files):
>
>https://github.com/python/cpython/pull/23002/files
>
> In 2016, I asked if we still wanted to maintain the Solaris support in
> Python, because Solaris buildbots were failing for longer than 6
> months and nobody was able to fix them. It was requested to find a
> core developer volunteer to fix Solaris issues and to set up a Solaris
> buildbot.
>
>
> https://mail.python.org/archives/list/python-dev@python.org/thread/NOT2RORSNX72ZLUHK2UUGBD4GTPNKBUS/#NOT2RORSNX72ZLUHK2UUGBD4GTPNKBUS
>
> Four years later, nothing has happened. Moreover, in 2018, Oracle laid
> off the Solaris development engineering staff. There are around 25
> open Python bugs specific to Solaris.
>
> I see 3 options:
>
> * Current best effort support (no change): changes only happen if a
> core dev volunteers to review and merge a change written by a
> contributor.
>
> * Schedule the removal in 2 Python releases (Python 3.12) and start to
> announce that Solaris support is going to be removed
>
> * Remove the Solaris code right now (my proposition): Solaris code
> will have to be maintained outside the official Python code base, as
> "downstream patches"
>
>
> Solaris has a few specific features visible at the Python level:
> select.devpoll, os.stat().st_fstype and stat.S_ISDOOR().
>
> While it's unclear to me if Oracle still actively maintains Solaris
> (latest release in 2018, no major update since 2018), Illumos and
> OpenSolaris (variants or "forks") still seem to be active.
>
> In 2019, a Solaris blog post explains that Solaris 11.4 still uses
> Python 2.7 but plans to migrate to Python 3, and Python 3.4 is also
> available. These two Python versions are no longer supported.
>
> https://blogs.oracle.com/solaris/future-of-python-on-solaris
>
> The question is if the Python project has to maintain the Solaris
> specific code or if this code should now be maintained outside Python.
>
> What do you think? Should we wait 5 more years? Should we expect a
> company will offer to maintain the Solaris support? Is there a
> motivated core developer to fix Solaris issue? As I wrote, nothing has
> happened in the last 4 years...
>
> Victor
> --
> Night gathers, and now my watch begins. It shall not end until my death.
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/VDD7NMEDFXMOP4S74GEYJUHJRJPK2UR3/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/HHP6DVPMDNUHEZZPKJDLZTMOI7O7LWPO/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: os.scandir bug in Windows?

2020-10-26 Thread Gregory P. Smith
On Mon, Oct 26, 2020, 4:06 PM Chris Angelico  wrote:

> On Tue, Oct 27, 2020 at 10:00 AM Greg Ewing 
> wrote:
> >
> > On 27/10/20 8:24 am, Victor Stinner wrote:
> > > I would
> > > rather want to kill the whole concept of "access" time in operating
> > > systems (or just configure the OS to not update it anymore). I guess
> > > that it's really hard to make it efficient and accurate at the same
> > > time...
> >
> > Also it's kind of weird that just looking at data on the
> > disk can change something about it. Sometimes it's an
> > advantage to *not* have quantum computing!
> >
>
> And yet, it's of incredible value to be able to ask "now, where was
> that file... the one that I was looking at last week, called something
> about calendars, and it had a cat picture in it". Being able to answer
> that kinda depends on recording accesses one way or another, so the
> weirdnesses are bound to happen.
>

scandir is never going to answer that. Neither is a simple blind "access"
time stored in filesystem metadata.

ChrisA
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/ZMNVRGZ7ZEC5EAKLUOX64R4WKHOLPF4O/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/YW5NMIE2SC3RQWDMJX2DVIS3FRHNPEQM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Speeding up CPython

2020-10-21 Thread Gregory P. Smith
On Wed, Oct 21, 2020 at 4:04 PM Greg Ewing 
wrote:

> A concern I have about this is what effect it will have on the
> complexity of CPython's implementation.
>
> CPython is currently very simple and straightforward. Some parts
> are not quite as simple as they used to be, but on the whole it's
> fairly easy to understand, and I consider this to be one of its
> strengths.
>
> I worry that adding four layers of clever speedup tricks will
> completely destroy this simplicity, leaving us with something
> that can no longer be maintained or contributed to by
> ordinary mortals.
>

I have never considered ceval.c to be simple and straightforward.  Nor our
parser(s).  Or our optimizers.  Or the regular expression implementation.
Or the subprocess internals.  (We may differ on lists of what isn't simple
and straightforward, but I guarantee you we've all got something in mind.)

This makes it not a major concern for me.

We'd decide if there is something we find to be dark magic that seems
challenging to maintain during the review phases and decide what if
anything needs to be done about that to ameliorate such issues.

-gps


> --
> Greg
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/6XBJ2OA46KNMJ6FFI3B6RFYRTTD4HYOI/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/QMIKJ3EE73U3J6TM4RGHMTHGB6N4WZGV/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Speeding up CPython

2020-10-20 Thread Gregory P. Smith
On Tue, Oct 20, 2020 at 5:59 AM Mark Shannon  wrote:

> Hi everyone,
>
> CPython is slow. We all know that, yet little is done to fix it.
>
> I'd like to change that.
> I have a plan to speed up CPython by a factor of five over the next few
> years. But it needs funding.
>
> I am aware that there have been several promised speed ups in the past
> that have failed. You might wonder why this is different.
>
> Here are three reasons:
> 1. I already have working code for the first stage.
> 2. I'm not promising a silver bullet. I recognize that this is a
> substantial amount of work and needs funding.
> 3. I have extensive experience in VM implementation, not to mention a
> PhD in the subject.
>
> My ideas for possible funding, as well as the actual plan of
> development, can be found here:
>
> https://github.com/markshannon/faster-cpython
>
> I'd love to hear your thoughts on this.
>
> Cheers,
> Mark.
>

+1

Overall I think you are making quite a reasonable proposal.  It sounds
like effectively bringing your hotpy2 concepts into the CPython interpreter
with an intent to help maintain them over the long term.

People worried that you are doing this out of self interest may not know
who you are.  Sure, you want to be paid to do work that you appear to love
and have been mulling over for a decade+.  There is nothing wrong with
that.  Payment is proposed as on delivery per phase.  I like the sound of
that, nice!

Challenges I expect we'll face, that seem tractable to me, are mostly
around what potential roadblocks we, us existing python-committers and our
ultimate decider steering council might introduce intentionally or not that
prevents landing such work.  Payment on delivery helps that a lot, if we
opt out of some work, it is both our losses.  One potential outcome is that
you'd burn out and go away if we didn't accept something, meaning payment
wasn't going to happen.  That already happens amongst core devs today,
so I don't have a problem with this even though it isn't what we'd
rightfully want to happen.  Middle grounds are quite reasonable
renegotiations.  The deciders on this would be the PSF (because money) and
the PSF would presumably involve the Steering Council in decisions of terms
and judgements.

Some background materials for those who don't already know Mark's past work
along these lines that is where this proposal comes from:
  https://sites.google.com/site/makingcpythonfast/ (hotpy)
  and the associated presentation in 2012:
https://www.youtube.com/watch?v=c6PYnZUMF7o

The amount of money seems entirely reasonable to me. Were it to be taken
on, part of the PSF's job is to drum up the money. This would be a contract
with outcomes that could effectively be sold to funders in order to do so.
There are many companies who use CPython a lot that we could solicit
funding from, many of whom have employees among our core devs already. Will
they bite? It doesn't even have to be from a single source or be all
proposed phases up front, that is what the PSF exists to decide and
coordinate on.

We've been discussing on and off in the past many years how to pay people
for focused work on CPython and the ecosystem and balance that with being
an open source community project.  We've got some people employed along
these lines already, this would become more of that and in many ways just
makes sense to me.

Summarizing some responses to points others have brought up:

Performance estimates:
 * Don't fret about claimed speedups of each phase.  We're all going to
doubt different things or expect others to be better.  The proof is
ultimately in the future pudding.

JIT topics:
 * JITs rarely stand alone. The phase 1+2 improved interpreter will always
exist. It is normal to start with an interpreter for fast startup and
initial tracing before performing JIT compilations, and as a fallback
mechanism when the JIT isn't appropriate or available. (my background:
Transmeta. We had an Interpreter and at least two levels of Translators
behind our x86 compatible CPUs, all were necessary)
 * Sometimes you simply want to turn tracing and jit stuff off to save
memory. That knob always winds up existing. If nothing else it is normal to
run our test suite with such a knob in multiple positions for proper
coverage.
 * It is safe to assume any later phase actual JIT would target at least
one important platform (ex: amd64 or aarch64) and if successful should
easily elicit contributions supporting others either as work or as funding
to create it.

"*Why this, why not fund XyZ?*" whataboutism:
 * This conversation is separate from other projects. The way attracting
funding for a project works can involve spelling out what it is for. It
isn't my decision, but I'd be amazed if anything beyond maybe phase 1 came
solely out of a PSF general no obligation fund. CPython is the most used
Python VM in the world. A small amount of funding is not going to get
maintainers and users to switch to PyPy.  There is unlikely to be a major
this or 

[Python-Dev] Re: os.scandir bug in Windows?

2020-10-19 Thread Gregory P. Smith
On Mon, Oct 19, 2020 at 6:28 AM Ivan Pozdeev via Python-Dev <
python-dev@python.org> wrote:

>
> On 19.10.2020 14:47, Steve Dower wrote:
> > On 19Oct2020 1242, Steve Dower wrote:
> >> On 15Oct2020 2239, Rob Cliffe via Python-Dev wrote:
> >>> TLDR: In os.scandir directory entries, atime is always a copy of mtime
> rather than the actual access time.
> >>
> >> Correction - os.stat() updates the access time to _now_, while
> os.scandir() returns the last access time without updating it.
> >
> > Let me correct myself first :)
> >
> > *Windows* has decided not to update file access time metadata *in
> directory entries* on reads. os.stat() always[1] looks at the file entry
> > metadata, while os.scandir() always looks at the directory entry
> metadata.
>
> Is this behavior documented somewhere?
>
> Such weirdness certaintly something that needs to be documented but I
> really don't like describing such quirks that are out of our control
> and may be subject to change in Python documentation. So we should only
> consider doing so if there are no other options.
>

I'm sure this is covered in MSDN.  If it has a concise explanation, linking
to it from a note in our docs would make sense.

If I'm understanding Steve correctly this is due to Windows/NTFS storing
the access time potentially redundantly in two different places. One within
the directory entry itself and one with the file's own metadata.  Those of
us with a traditional posix filesystem background may raise eyebrows at
this duplication, seeing a directory as a place that merely maps names to
inodes with the inode structure (equiv: file entry metadata) being the sole
source of truth.  Which ones get updated when and by what actions is up to
the OS.

So yes, just document the "quirk" as an intended OS behavior.  This is one
reason scandir() can return additional information on windows vs what it
can return on posix.  The entire point of scandir() is to return as much as
possible from the directory without triggering reads of the
inodes/file-entry-metadata. :)
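
A small illustration of the two lookup paths being described (on Windows the
directory-entry copy can lag; elsewhere the two should normally agree):

```
import os

for entry in os.scandir("."):
    st_dir = entry.stat()          # on Windows: served from the directory entry
    st_file = os.stat(entry.path)  # consults the file's own metadata
    if st_dir.st_atime != st_file.st_atime:
        print(f"{entry.name}: dir-entry atime {st_dir.st_atime} "
              f"!= file atime {st_file.st_atime}")
```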

-gps


>
> >
> > My suggested approach still applies, other than the bit where we might
> fix os.stat(). The best we can do is regress os.scandir() to have
> > similarly poor performance, but the best *you* can do is use os.stat()
> for accurate timings when files might be being modified while your
> > program is running, and don't do it when you just need names/kinds (and
> I'm okay adding that note to the docs).
> >
> > Cheers,
> > Steve
> >
> > [1]: With some fallback to directory entries in exceptional cases that
> don't apply here.
> > ___
> > Python-Dev mailing list -- python-dev@python.org
> > To unsubscribe send an email to python-dev-le...@python.org
> > https://mail.python.org/mailman3/lists/python-dev.python.org/
> > Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/QHHJFYEDBANW7EC3JOUFE7BQRT5ILL4O/
> > Code of Conduct: http://python.org/psf/codeofconduct/
> > --
> > Regards,
> > Ivan
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/VFXDBURSZ4QKA6EQBZLU6K4FKMGZPSF5/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/IZ6KSRTJLORCB33OMVUPFYQYLMBM26EJ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: os.scandir bug in Windows?

2020-10-17 Thread Gregory P. Smith
Could you please file this as an issue on bugs.python.org?

Thanks!
-Greg


On Sat, Oct 17, 2020 at 7:25 PM Rob Cliffe via Python-Dev <
python-dev@python.org> wrote:

>
> TLDR: In os.scandir directory entries, atime is always a copy of mtime
> rather than the actual access time.
>
> Demo program: Windows 10, Python 3.8.3:
>
> # osscandirtest.py
> import time, os
> with open('Test', 'w') as f: f.write('Anything\n') # Write to a file
> time.sleep(10)
> with open('Test', 'r') as f: f.readline() # Read the file
> print(os.stat('Test'))
> for DirEntry in os.scandir('.'):
>     if DirEntry.name == 'Test':
>         stat = DirEntry.stat()
>         print(f'scandir DirEntry {stat.st_ctime=} {stat.st_mtime=} {stat.st_atime=}')
>
> Sample output:
>
> os.stat_result(st_mode=33206, st_ino=8162774324687317,
> st_dev=2230120362, st_nlink=1, st_uid=0,
> st_gid=0, st_size=10, st_atime=1600631381, st_mtime=1600631371,
> st_ctime=1600631262)
> scandir DirEntry stat.st_ctime=1600631262.951019
> stat.st_mtime=1600631371.7062848 stat.st_atime=1600631371.7062848
>
> For os.stat, atime is 10 seconds more than mtime, as would be expected.
> But for os.scandir, atime is a copy of mtime.
> ISTM that this is a bug, and in fact recently it stopped me from using
> os.scandir in a program where I needed the access timestamp. No big
> deal, but ...
> If it is a feature for some reason, presumably it should be documented.
>
> Best wishes
> Rob Cliffe
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/RIKQAXZVUAQBLECFMNN2PUOH322B2BYD/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/INJBNXRKOBYFGFJ7CLHNJKVQQKU6X6NM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Hygenic macros PEP.

2020-09-15 Thread Gregory P. Smith
On Tue, Sep 15, 2020 at 1:27 PM  wrote:

> September 15, 2020 4:02 PM, "Daniel Butler" wrote:
>
> > Users would be encouraged to try it but NOT to publish code using it.
>
> Thinking out loud, macros could be enabled with a command line flag.
> Advanced users would know how to use it but others would not. If the macro
> flag is not enabled it raises a syntax error. Thoughts?
> --
> Thank you!
>
> Daniel Butler
>
>
> A command line flag would be slightly better. All the documentation
> warnings in the world will not be enough to prevent all the cargo culters
> from creating some of the hardest to read code you ever saw.
>

If you're talking about a command line flag, I suggest you read the
pre-PEP.  The proposal requires explicit import-like syntax to bring the
macro in for parsing of code for the duration of the scope of that import.
Which is actually what you'd want for this kind of thing: explicitly
declaring which additional / alternate syntax features the code uses. It is
similar to the existing `from __future__ import behavior` system.
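
For comparison, the existing mechanism referred to is a per-module,
explicitly scoped opt-in (a minimal illustration; the pre-PEP's own macro
import spelling is not reproduced here):

```
# Changes how the rest of *this module only* is handled: annotations
# below are stored as strings rather than evaluated at definition time.
from __future__ import annotations

def f(x: SomeNameNotDefinedYet) -> None:  # fine; never evaluated here
    ...
```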

-gps
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/6Z7NFMSP32NYW23J3EEH2R7QZF2EHZ5O/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 626: Precise line numbers for debugging and other tools.

2020-07-29 Thread Gregory P. Smith
On Tue, Jul 28, 2020 at 2:12 PM Jim J. Jewett  wrote:

> ah... we may have been talking past each other.
>
> Steve Dower wrote:
> > On 25Jul2020 2014, Jim J. Jewett wrote:
> > > But it sounds as though you are saying the benefit
>
> [of storing the line numbers in an external table, I thought,
> but perhaps Pablo Galindo Salgado and yourself were
> talking only of the switch from an lnotab string to an opaque
> co_linetable?]
>
> > > is irrelevant; it is just inherently too expensive to ask programs
> that are already dealing
> > > with internals and trying to optimize performance to make a mechanical
> change from:
> > >  code.magic_attrname
> > > to:
> > >  magicdict[code]
> > > What have I missed?
>
> > You've missed that debugging and profiling tools that operate purely on
> > native memory can't execute Python code, so the "magic" has to be easily
> > representable in C such that it can be copied into whichever language is
> > being used (whether it's C, C++, C#, Rust, or something else).
>
> Unless you really were talking only of the switch to co_linetable, I'm
> still
> missing the problem.  To me, it still looks like a call to:
>
> PyAPI_FUNC(PyObject *) PyObject_GetAttrString(PyObject *, const char
> *);
>
> with the code object being stepped through and "co_lnotab"
> would be replaced by:
>
> PyAPI_FUNC(PyObject *) PyDict_GetItem(PyObject *mp, PyObject *key);
>
> using that same code object as the key, but getting the dict from
> some well-known (yet-to-be-defined) location, such as sys.code_to_lnotab.
>
> Mark Shannon and Carl Shapiro had seemed to object to the PEP because
> the new structure would make the code object longer, and making it smaller
> by a string does seem likely to be good.  But if your real objections are
> just to replacing the lnotab format with something that needs to be
> executed, then I apologize for misunderstanding.
>

Introspection of the running CPython process is happening from outside of
the CPython interpreter itself.  Either from a signal handler or C/C++
managed thread within the process, or (as Pablo suggested) from outside the
process entirely.  Calling CPython APIs is a non-option in all of those
situations.

That is why I suggested that the "undocumented" new co_linetable will be
used instead of the disappeared co_lnotab regardless of documentation or
claimed stability guarantees.  It sounds like an equivalent read only data
source for this purpose.  It doesn't matter to anyone with such a profiler
if it is claimed to be unspecified.

The data is needed, and the format shouldn't change within a stable python
major.minor release (we'd be unlikely to change it anyway, even without that
guarantee).  Given this, I suggest at least specifying valuable properties
of it such as "read only, never mutated" even if the exact format is
intentionally left implementation defined, subject to change between
minor releases.

-gps


>
> -jJ
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/WUEFHFTPVTOPA3EFHACDECT3ZPLGGTFJ/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/MQHSIXLAUL2ZSRORRDJEF3I3T73XP772/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 626: Precise line numbers for debugging and other tools.

2020-07-22 Thread Gregory P. Smith
On Wed, Jul 22, 2020 at 5:19 AM Mark Shannon  wrote:

>
>
> On 21/07/2020 9:46 pm, Gregory P. Smith wrote:
> >
> >
> > On Fri, Jul 17, 2020 at 8:41 AM Ned Batchelder  wrote:
> >
> > https://www.python.org/dev/peps/pep-0626/ :)
> >
> > --Ned.
> >
> > On 7/17/20 10:48 AM, Mark Shannon wrote:
> >  > Hi all,
> >  >
> >  > I'd like to announce a new PEP.
> >  >
> >  > It is mainly codifying that Python should do what you probably
> > already
> >  > thought it did :)
> >  >
> >  > Should be uncontroversial, but all comments are welcome.
> >  >
> >  > Cheers,
> >  > Mark.
> >
> >
> > """When a frame object is created, the f_lineno will be set to the line
> > at which the function or class is defined. For modules it will be set to
> > zero."""
> >
> > Within this PEP it'd be good for us to be very pedantic.  f_lineno is a
> > single number.  So which number is it given many class and function
> > definition statements can span multiple lines.
> >
> > Is it the line containing the class or def keyword?  Or is it the line
> > containing the trailing :?
>
> The line of the `def`/`class`.  It wouldn't change from the current
> behavior.  I'll add that to the PEP.
>
> >
> > Q: Why can't we have the information about the entire span of lines
> > rather than consider a definition to be a "line"?
>
> Pretty much every profiler, coverage tool, and debugger ever expects
> lines to be natural numbers, not ranges of numbers.
> A lot of tooling would need to be changed.
>
> >
> > I think that question applies to later sections as well.  Anywhere we
> > refer to a "line", it could actually mean a span of lines. (especially
> > when you consider \ continuation in situations you might not otherwise
> > think could span lines)
>
> Let's take an example:
> ```
> x = (
>  a,
>  b,
> )
> ```
>
> You would want the BUILD_TUPLE instruction to have a span of lines 1 to
> 4 (inclusive), rather than just line 1?
> If you wanted to break on the BUILD_TUPLE, where would you tell pdb to
> break?
>
> I don't see that it would add much value, but it would add a lot of
> complexity.
>

We should have the data about the range at bytecode compilation time,
correct?  So why not keep it?  Sure, most existing tooling would just use
the start of the range as the line number as it always has, but some
tooling could find the range useful (ex: semantic code indexing for use in
display, search, editors, and IDEs; rendering lint errors more accurately
instead of just claiming a single line or resorting to parsing hacks to
come up with a range; etc.).  The downside is that we'd be storing a
second number in bytecode, making it slightly larger.  Though it could be
stored efficiently as a prefixed delta, so it'd likely average out to less
than 2 bytes per line number stored.  (I don't have a feel for our current
format to know if that is significant or not - if it is, maybe this idea
just gets nixed.)
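
To make the "prefixed delta" idea concrete, here is a minimal sketch of a
purely hypothetical encoding (not CPython's actual lnotab/linetable
format): store each instruction's end line as a base-128 varint delta from
its start line, so the common single-line case costs one byte:

```python
def encode_delta(delta):
    """Encode a non-negative line delta as a little-endian base-128 varint."""
    out = bytearray()
    while True:
        byte = delta & 0x7F
        delta >>= 7
        if delta:
            out.append(byte | 0x80)  # continuation bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_delta(data, pos=0):
    """Decode one varint starting at pos; return (delta, next_pos)."""
    shift = delta = 0
    while True:
        byte = data[pos]
        pos += 1
        delta |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return delta, pos
        shift += 7

# A span covering lines 1-4 stores end-minus-start = 3 in one byte; a
# single-line instruction stores 0, also one byte.
assert encode_delta(3) == b"\x03"
assert encode_delta(0) == b"\x00"
assert decode_delta(encode_delta(300)) == (300, 2)
```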

The reason the range concept was on my mind is due to something not quite
related but involving a changed idea of a line number in our current system
that we recently ran into with pytype during a Python upgrade.

"""in 3.7, if a function body is a plain docstring, the line number of the
RETURN_VALUE opcode corresponds to the docstring, whereas in 3.6 it
corresponds to the function definition.""" (Thanks, Martin & Rebecca!)

```python
def no_op():
  """docstring instead of pass."""
```
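
One way to see this for yourself - a quick check using the `dis` module
(the line reported for the implicit return differs across 3.6/3.7 as
described above):

```python
import dis

def no_op():
    """docstring instead of pass."""

# Print which source line each instruction is attributed to.  On 3.7 the
# implicit LOAD_CONST/RETURN_VALUE pair reports the docstring's line; on
# 3.6 it reported the `def` line.
for ins in dis.get_instructions(no_op):
    print(ins.offset, ins.opname, ins.starts_line)
```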

So the location of what *was* originally an end-of-line `# pytype:
disable=bad-return-type` comment (to work around an issue not relevant
here) turned awkward and version dependent.  pytype is bytecode based,
thus that is where its line numbers come from.  Metadata comments in
source can only be tied to bytecode via line numbers, making end-of-line
directives occasionally hard to match up.

When there is no return statement, this opcode still exists.  What line
number does it belong to?  3.6's answer made sense to me.  3.7's seems
wrong - a docstring isn't responsible for a return opcode.  I didn't check
what 3.8 and 3.9 do.  An alternate answer after this PEP is that it
wouldn't have a line number when there is no return statement (pedantically
correct, I approve! #win).

-gps


>
> Cheers,
> Mark.
>
> >
> > -gps
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/H3YBK275SUSCR5EHWHYBTJBF655UK7JG/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 626: Precise line numbers for debugging and other tools.

2020-07-21 Thread Gregory P. Smith
On Tue, Jul 21, 2020 at 1:39 PM Gregory P. Smith  wrote:

>
> On Fri, Jul 17, 2020 at 10:41 AM Pablo Galindo Salgado <
> pablog...@gmail.com> wrote:
>
>> I like the proposal in general but I am against removing lnotab. The
>> reason is that many tools rely on reading this attribute to figure out the
>> Python call stack information. For instance, many sampler profilers read
>> this memory by using ptrace or process_vm_readv and they cannot execute any
>> code on the process under tracing as that would be a security issue. If we
>> remove a 'static' view of that information, it will impact negatively the
>> current set of remote process analysis tools. The proposed new way of
>> retrieving the line number will rely (if we deprecate and remove lnotab) on
>> executing code, making it much more difficult for the ecosystem of
>> profilers and remote process analysis tools to do their job.
>>
>
> +1 agreed.
>
> """Some care must be taken not to break existing tooling. To minimize
> breakage, the co_lnotab attribute will be retained, but lazily generated on
> demand.""" - https://www.python.org/dev/peps/pep-0626/#id4
>
> This breaks existing tooling.
>

"The co_linetable attribute will hold the line number information. The
format is opaque, unspecified and may be changed without notice."
...
"Tools that parse the co_lnotab table should move to using the new
co_lines() method as soon as is practical."

Given it is impossible for tools doing passive inspection of Python VM
instances to execute code, co_linetable's exact format will be depended on
just as co_lnotab's was.  co_lnotab was only quasi-"officially" documented
in the Python docs; its spec lives in
https://github.com/python/cpython/blob/master/Objects/lnotab_notes.txt
(pointed to by a couple of modules' docs).  The lnotab format "changed"
once, in 3.6, when an unsigned delta was changed to signed (but I don't
believe anything beyond some experiments ever actually used negatives?).
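
For reference, mapping a bytecode offset to a line number via co_lnotab is
only a few lines of code - which is exactly why passive tools read it
directly.  A sketch following the algorithm documented in
lnotab_notes.txt (including the 3.6+ signed line deltas), for interpreters
that still expose co_lnotab:

```python
def offset_to_line(code, target_offset):
    """Return the source line for a bytecode offset in a code object."""
    line = code.co_firstlineno
    offset = 0
    lnotab = code.co_lnotab  # alternating (offset_delta, line_delta) bytes
    for offset_delta, line_delta in zip(lnotab[0::2], lnotab[1::2]):
        offset += offset_delta
        if offset > target_offset:
            break
        if line_delta >= 0x80:  # line deltas are signed bytes as of 3.6
            line_delta -= 0x100
        line += line_delta
    return line
```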

How about creating something defined and always present for once, given
that the need has been demonstrated?  Even if we don't, it will be used,
and we will be unable to change it within a release.

-gps


> -gps
>
>
>> --
>>
>> Pablo
>>
>> On Fri, 17 Jul 2020, 15:55 Mark Shannon,  wrote:
>>
>>> Hi all,
>>>
>>> I'd like to announce a new PEP.
>>>
>>> It is mainly codifying that Python should do what you probably already
>>> thought it did :)
>>>
>>> Should be uncontroversial, but all comments are welcome.
>>>
>>> Cheers,
>>> Mark.
>>> ___
>>> Python-Dev mailing list -- python-dev@python.org
>>> To unsubscribe send an email to python-dev-le...@python.org
>>> https://mail.python.org/mailman3/lists/python-dev.python.org/
>>> Message archived at
>>> https://mail.python.org/archives/list/python-dev@python.org/message/BMX32UARJFY3PZZYKRANS6RCMR2XBVVM/
>>> Code of Conduct: http://python.org/psf/codeofconduct/
>>>
>> ___
>> Python-Dev mailing list -- python-dev@python.org
>> To unsubscribe send an email to python-dev-le...@python.org
>> https://mail.python.org/mailman3/lists/python-dev.python.org/
>> Message archived at
>> https://mail.python.org/archives/list/python-dev@python.org/message/57OXMUBV5FAEFXULRBCRAHEF7Q5GP6QT/
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/HFQKDM4TVJPNHHHIJN3BGU2N3CRRXNQY/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 626: Precise line numbers for debugging and other tools.

2020-07-21 Thread Gregory P. Smith
On Fri, Jul 17, 2020 at 8:41 AM Ned Batchelder 
wrote:

> https://www.python.org/dev/peps/pep-0626/ :)
>
> --Ned.
>
> On 7/17/20 10:48 AM, Mark Shannon wrote:
> > Hi all,
> >
> > I'd like to announce a new PEP.
> >
> > It is mainly codifying that Python should do what you probably already
> > thought it did :)
> >
> > Should be uncontroversial, but all comments are welcome.
> >
> > Cheers,
> > Mark.
>
>
"""When a frame object is created, the f_lineno will be set to the line at
which the function or class is defined. For modules it will be set to
zero."""

Within this PEP it'd be good for us to be very pedantic.  f_lineno is a
single number.  So which number is it, given that many class and function
definition statements can span multiple lines?

Is it the line containing the class or def keyword?  Or is it the line
containing the trailing :?
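
As a quick concrete check, current CPython reports the `def` line (the
sketch below uses an undecorated function, since decorators complicate the
answer):

```python
def spans_lines(   # CPython reports this `def` line as co_firstlineno...
    a,
    b,
):                 # ...not the line with the trailing colon.
    return a + b

print(spans_lines.__code__.co_firstlineno)
```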

Q: Why can't we have the information about the entire span of lines
rather than considering a definition to be a "line"?

I think that question applies to later sections as well.  Anywhere we refer
to a "line", it could actually mean a span of lines. (especially when you
consider \ continuation in situations you might not otherwise think could
span lines)

-gps
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/F6TQIXELOJWBUDSO5MDW3X5RTQXO6EI3/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 626: Precise line numbers for debugging and other tools.

2020-07-21 Thread Gregory P. Smith
On Fri, Jul 17, 2020 at 10:41 AM Pablo Galindo Salgado 
wrote:

> I like the proposal in general but I am against removing lnotab. The
> reason is that many tools rely on reading this attribute to figure out the
> Python call stack information. For instance, many sampler profilers read
> this memory by using ptrace or process_vm_readv and they cannot execute any
> code on the process under tracing as that would be a security issue. If we
> remove a 'static' view of that information, it will impact negatively the
> current set of remote process analysis tools. The proposed new way of
> retrieving the line number will rely (if we deprecate and remove lnotab) on
> executing code, making it much more difficult for the ecosystem of
> profilers and remote process analysis tools to do their job.
>

+1 agreed.

"""Some care must be taken not to break existing tooling. To minimize
breakage, the co_lnotab attribute will be retained, but lazily generated on
demand.""" - https://www.python.org/dev/peps/pep-0626/#id4

This breaks existing tooling.

-gps


> --
>
> Pablo
>
> On Fri, 17 Jul 2020, 15:55 Mark Shannon,  wrote:
>
>> Hi all,
>>
>> I'd like to announce a new PEP.
>>
>> It is mainly codifying that Python should do what you probably already
>> thought it did :)
>>
>> Should be uncontroversial, but all comments are welcome.
>>
>> Cheers,
>> Mark.
>> ___
>> Python-Dev mailing list -- python-dev@python.org
>> To unsubscribe send an email to python-dev-le...@python.org
>> https://mail.python.org/mailman3/lists/python-dev.python.org/
>> Message archived at
>> https://mail.python.org/archives/list/python-dev@python.org/message/BMX32UARJFY3PZZYKRANS6RCMR2XBVVM/
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/57OXMUBV5FAEFXULRBCRAHEF7Q5GP6QT/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/NCW4PCOINV7HYUHND7EQ2GUWR22OZDXF/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 622: Structural Pattern Matching

2020-06-26 Thread Gregory P. Smith
On Fri, Jun 26, 2020 at 6:42 AM Mark Shannon  wrote:

>
> > Let us start from some anecdotal evidence: isinstance() is one of the
> > most called functions in large scale Python code-bases (by static call
> > count). In particular, when analyzing some multi-million line production
> > code base, it was discovered that isinstance() is the second most called
> > builtin function (after len()). Even taking into account builtin classes,
> > it is still in the top ten. Most of such calls are followed by specific
> > attribute access.
>
> Why use anecdotal evidence? I don't doubt the numbers, but it would be
> better to use the standard library, or the top N most popular packages
> from GitHub.
>

Agreed.  This anecdote felt off to me and made for a bad introductory
feeling.  I know enough of who is involved to read it as likely "within
the internal Dropbox code base we found isinstance() to be the second most
called built-in by static call counts".  It'd be better worded as such,
instead of left opaque, if you are going to use this example at all.  [but
read on below - I'm not sure the anecdotal evidence is even relevant to
state]

Also, if using this, please include text explaining what "static call
count" means.  Was that "number of grep 'isinstance[(]' matches in all .py
files which we reasonably assume are calls"?  Or was that "measuring a
running application and counting cumulative calls of every built-in for
the lifetime of the large application"?  Include a footnote on whether you
removed all use of six and py2->py3-isms.  Both six and manual py2->3
porting often wound up adding isinstance in places where it will
rightfully be refactored out once cleaning up the py2 legacy dead code
becomes anyone's priority.

A very rough grep of our much larger Python codebase within Google shows
isinstance *call site counts* to likely be lower than int or len and
similar to print.  With a notable percentage of isinstance usage clearly
related to py2 -> py3 compatibility, suggesting many can now go away.  I'm
not going to spend much time looking further as I don't think actual
numbers matter:  *Confirmed, isinstance gets used a lot.*  We can simply
state that as a truth and move on without needing a lot of justification.

>
> > There are two possible conclusions that can be drawn from this
> > information:
> >
> > Handling of heterogeneous data (i.e. situations where a variable can
> > take values of multiple types) is common in real world code.
> > Python doesn't have expressive ways of destructuring object data
> > (i.e. separating the content of an object into multiple variables).
>
> I don't see how the second conclusion can be drawn.
> How does the prevalence of `isinstance()` suggest that Python doesn't
> have expressive ways of destructuring object data?

...

> >
> > We believe this will improve both readability and reliability of
> > relevant code. To illustrate the readability improvement, let us
> > consider an actual example from the Python standard library:
> >
> > def is_tuple(node):
> >     if isinstance(node, Node) and node.children == [LParen(), RParen()]:
> >         return True
> >     return (isinstance(node, Node)
> >             and len(node.children) == 3
> >             and isinstance(node.children[0], Leaf)
> >             and isinstance(node.children[1], Node)
> >             and isinstance(node.children[2], Leaf)
> >             and node.children[0].value == "("
> >             and node.children[2].value == ")")
> >
>
> Just one example?
> The PEP needs to show that this sort of pattern is widespread.
>

Agreed.  I don't find application code following this pattern to be
common.  Yes it exists, but I would not expect to encounter it frequently
if I were doing random people's Python code reviews.

The supplied "stdlib" code example is lib2to3.fixer_util.is_tuple.  Using
that as an example of code "in the standard library" is *technically*
correct. But lib2to3 is an undocumented deprecated library that we have
slated for removal.  That makes it a bit weak to cite.
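
For context, the PEP's proposed replacement for that function reads
roughly like this (quoting the draft's rewrite; Node, Leaf, LParen, and
RParen are the lib2to3 types from the original):

```python
def is_tuple(node: Node) -> bool:
    match node:
        case Node(children=[LParen(), RParen()]):
            return True
        case Node(children=[Leaf(value="("), Node(), Leaf(value=")")]):
            return True
        case _:
            return False
```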

Better practical examples don't have to be within the stdlib.

Randomly perusing some projects that I'd expect to have such constructs,
here's a possible example:
https://github.com/PyCQA/pylint/blob/master/pylint/checkers/logging.py#L231.

There are also code patterns in pytype such as
https://github.com/google/pytype/blob/master/pytype/vm.py#L480  and
https://github.com/google/pytype/blob/master/pytype/vm.py#L1088 that might
make sense.

Though I realize you were probably in search of a simple one for the PEP in
order to write a before and after example.

-gps
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/X7XWT4YZFRFJJYJFHZY6X5LYLBZ7LH52/
Code of Conduct: http://python.org/psf/codeofconduct/

[Python-Dev] Re: PEP 622: Structural Pattern Matching

2020-06-25 Thread Gregory P. Smith
Litmus test: Give someone who does not know Python this code example from
the PEP and ask them what it does and why it does what it does:

match get_shape():
case Line(start := Point(x, y), end) if start == end:
print(f"Zero length line at {x}, {y}")

I expect confusion to be the result - unless they blindly assume the
variables come from somewhere not shown, just to stop their anguish.

With Python experience, my own reading is:
 * I see start actually being assigned.
 * I see nothing giving values to end, x, or y.
 * Line and Point are things being called, probably class constructions due
to being Capitalized.
 * But where did the parameter values come from, and why?  And how can end
be referred to in a conditional when it doesn't exist yet?
   They appear to be magic!

Did get_shape() return these? (I think not.)  Something magic and *implicit
rather than explicit* happens in later lines.  The opposite of what Python
is known for.

Where's the pseudo-code describing *exactly* what the above looks like
logically speaking? (there's a TODO in the PEP for the __match__ protocol
code so I assume it will come, thanks!).  I can guess _only_ after reading
a bunch of these discussions and bits of the PEP.  Is it this?  I can't
tell.

shape = get_shape()
values_or_none = Line.__match__(shape)
if values_or_none:
  start, end = values_or_none
  if start == end:
    if x, y := Point.__match__(shape):
      print(...)
      del x, y
  else:
    print(...)
  del start, end
else:
  # ... onto the next case: ?

Someone unfamiliar with Python wouldn't even have a chance of seeing
that.  I had to rewrite the above many times; I'm probably still wrong.

That sample is very confusing code.  It makes me lean -1 on the PEP overall
today.

This syntax does not lead to readable logically understandable code.  I
wouldn't encourage anyone to write code that way because it is not
understandable to others.  We must never assume others are experts in the
language they are working on code in if we want it to be maintainable.  I
wouldn't approve a code review containing that example.

It would help *in part* if ()s were not used to invoke the __match__
protocol.  I think a couple of others also mentioned this earlier.  Don't
make it look like a call.  Use different tokens than ().  Point{x, y}, for
example.  Or some way to use another token unused in that context in our
toolbox, such as @ to signify "get a matcher for this class" instead of
"construct this class" - for example ClassName@() as our match protocol
indicator, shown here with explicit assignments for clarity:

match get_shape() as shape:
  case start, end := Line@(shape):

no implicit assignments, it is clear where everything comes from.  it is
clear it isn't a constructor call.

Downside?  Possibly a painful bug when someone forgets to type the @.  But
the point that this is not construction needs to be made.  Not using ()s
at all, but instead using ClassName@{} or just ClassName{}, would prevent
that.

The more nested things get with sub-patterns, the worse the confusion
becomes.  The nesting sounds powerful but is frankly something I'd want to
forbid anyone from using when the assignment consequences are implicit.  So
why implement sub-patterns at all?  All I see right now is pain.

-gps
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/B4OXI6CZTSNC6LHWRAKT4XVFFDIUR4K3/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: The Anti-PEP

2020-06-25 Thread Gregory P. Smith
On Thu, Jun 25, 2020 at 6:49 PM Raymond Hettinger <
raymond.hettin...@gmail.com> wrote:

> > it is hard to make a decision between the pros and cons,
> > when the pros are in a single formal document and the
> > cons are scattered across the internet.
>
> Mark, I support your idea.  It is natural for PEP authors to not fully
> articulate the voices of opposition or counter-proposals.
> The current process doesn't make it likely that a balanced document is
> created for decision making purposes.
>
>
On some PEPs in the past, I seem to recall we've had the PEP author - or
at least the editor, after the initial draft kicked things off - _not_ be
among those invested in seeing the PEP approved.

Or maybe I'm conflating the old role of the PEP delegate with the editor?

Regardless, I don't see how an anti-PEP would work much better, but I also
don't see anything stopping anyone from trying one.  I worry that it'll
fragment conversation even more and separate discussions, leaving everyone
even more confused about overall opinion tallies.  One way to find out...

-gps
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/JYJ2GB2LKX7ELWKURAQMOA7Z52DHE3B6/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 622: Structural Pattern Matching

2020-06-25 Thread Gregory P. Smith
On Wed, Jun 24, 2020 at 7:58 PM Stephen J. Turnbull <
turnbull.stephen...@u.tsukuba.ac.jp> wrote:

> Ethan Furman writes:
>
>  > _ does not bind to anything, but of what practical importance is that?
>
> *sigh* English speakers ... mutter ... mutter ... *long sigh*
>
> It's absolutely essential to the use of the identifier "_", otherwise
> the I18N community would riot in the streets of Pittsburgh.  Not good
> TV for Python (and if Python isn't the best TV, what good is it? ;-)
>
>
Can I use an i18n'd _("string") within a case without jumping through hoops
to assign it to a name before the match:?


> Steve
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/ENUPVNZEW7DFJKVCKK5XQ4NLV3KOW36C/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/WPPCVM6YPAYSGA7AJ5YOKGLQQNGXFLF4/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 622: Structural Pattern Matching

2020-06-25 Thread Gregory P. Smith
On Wed, Jun 24, 2020 at 7:34 PM Taine Zhao  wrote:

> > e.g., "or", and then I wonder "what does short-circuiting have to do
> > with it?". All reuse of symbols carries baggage.
>
> "or" brings an intuition of the execution order of pattern matching, just
> like how people already know about "short-circuiting".
>
> "or" 's operator precedence also suggests the syntax of OR patterns.
>
> As we have "|" as an existing operator, it seems that there might be
> cases where the precedence of "|" in a pattern is not consistent with
> its precedence in an expression.  This will mislead users.
>

I prefer "or" to "|" as a combining token as there is nothing bitwise
going on here.  "or" reads better - which is why Python used it for logic
operations in the first place.  It is simple English.  "|" does not read
like "or" to anyone but a C-based language programmer - something Python
users should never need to know.

There is no existing pythonic way to write "evaluate all of these things at
once in no specific order".

And in reality, there will be an order.  It'll be sequential, and if it
isn't the left to right order that things are written in with the "|"
between them, it will break someone's assumptions and make optimization
harder.  Some too-clever-for-the-world's-own-good author is going to
implement __match__ classmethods that have side effects and make state
changes that impact the behavior of later matcher calls (no sympathy for
them).  Someone else is going to order them most likely to least likely
for performance (we should have sympathy for that).
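
For illustration, here's a sketch using the draft's `|` spelling (it
assumes an interpreter that actually has pattern matching, and the
ordering comment reflects my expectation above rather than any spec
guarantee):

```python
def classify(status):
    match status:
        # Alternatives within a case are written left to right; an author
        # tuning for speed will naturally order them most likely first.
        case 200 | 301 | 404:
            return "common"
        case 402 | 418 | 451:
            return "rare"
        case _:
            return "other"

assert classify(404) == "common"
assert classify(418) == "rare"
```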

Given we only propose to allow a single trailing `if` guard per case,
using "or" instead of "|" won't be confused with an if's guard condition.

-gps
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/2C2S4EDIFGYI6AW3K6VVU2QKEPLASZJT/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 622: Structural Pattern Matching

2020-06-25 Thread Gregory P. Smith
On Wed, Jun 24, 2020 at 1:38 PM Luciano Ramalho  wrote:

> Thank you Guido, Brandt Bucher, Tobias Kohn, Ivan Levkivskyi and Talin
> for this fun and very useful new feature.
>
> I do enjoy pattern matching a lot in Elixir—my favorite language these
> days, after Python.
>
> I don't want to start a discussion, but I just want to say that as an
> instructor I fear this core language addition may make the language
> less approachable to the non-IT professionals, researchers etc. who
> have saved Python from the decline that we can observe happening in
> Ruby—a language of similar age, with similar strengths and weaknesses,
> but never widely adopted outside of the IT profession.
>
> After I wrap up Fluent Python 2e (which is aimed at professional
> developers) I hope I can find the time to tackle the challenge of
> creating introductory Python content that manages to explain pattern
> matching and other recent developments in a way that is accessible to
> all.
>

Hold on a while.  This feature does not exist.  This PEP has not been
accepted.  Don't count your chickens.py before they hatch.

The last P in PEP stands for Proposal for a reason.  Rejection is a
perfectly valid option.  As is a significant reworking of the proposal so
that it doesn't turn into a nightmare.  This one needs a lot of work at a
minimum.  As proposed, it is extremely non-approachable and non-trivial.

-gps
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/GUQLG3H5REWRB5ZINAISX7ZCS3DJQOED/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: New optional dependency

2020-06-23 Thread Gregory P. Smith
On Tue, Jun 23, 2020 at 7:41 AM Elisha Paine  wrote:

> Hi all,
>
> I am looking at getting TkDND support into tkinter, and have created
> issue40893. However, we are currently considering the practicalities of
> adding a new optional dependency to Python and I was hoping someone
> could help answer the question of: is there a procedure for this?
>
> The problem is that third-party distributors need to be notified of the
> change and edit the package details accordingly. The current idea is
> just to make it very clear on the “what’s new” page, however this would
> not guarantee it would be seen, so I am very open to ideas/opinions.
>
> Thanks,
> Elisha
>

What's New documentation combined with a configure.ac check to
conditionally compile it only when available is sufficient.  (Once that
works, also ensure that the dependency is added to the Windows and Mac
build setups so our binary builds include it.)

It is on the third-party distributors to pay attention.  If that worries
you, file your own issues in the Debian and Fedora trackers and that
should do it.

-gps
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/DE7NIJATCRMQEQOUNWGSEIAL3T5GSWO5/
Code of Conduct: http://python.org/psf/codeofconduct/

