[Python-Dev] Re: PEP 663: Improving and Standardizing Enum str(), repr(), and format() behaviors

2021-09-13 Thread Steve Dower

On 9/13/2021 8:12 PM, Ethan Furman wrote:

On 9/13/21 9:03 AM, Steve Dower wrote:

 > You *are* allowed to put some (brief) details in the abstract. No
 > need to avoid spoilers ;)
 >
 > As it stands, "it is time" on its own is a really bad reason to
 > change anything. So you're preemptively making it sound like the PEP
 > has no solid backing.


Abstract
========

Update the ``repr()``, ``str()``, and ``format()`` of the various Enum
types for consistency and to better match their intended purpose.


Better?


You don't have a one sentence summary of what the changes entail? 
"Originally these were based on the idea that ..., but now will work 
better for ... by making the results more consistent with ..." (where 
each "..." is filled with specific things).


It doesn't have to save me reading the whole thing, but if I'm digging 
through documents going "why am I seeing  instead of just '1'", 
it should confirm that this is the right document to read.


(Alternatively, think about writing the tweet here that you want people 
to include when they share your PEP, assuming that 99% of people won't 
actually click through to the document. What can you tell them so that 
they at least know what is coming?)


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/VQI5IVW5B6NQB246YYOSAW5DP3G2ZWAQ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 663: Improving and Standardizing Enum str(), repr(), and format() behaviors

2021-09-13 Thread Steve Dower

On 9/11/2021 6:53 AM, Ethan Furman wrote:

Abstract
========

Now that we have a few years experience with Enum usage it is time to
update the ``repr()``, ``str()``, and ``format()`` of the various
enumerations by their intended purpose.


You *are* allowed to put some (brief) details in the abstract. No need 
to avoid spoilers ;)


As it stands, "it is time" on its own is a really bad reason to change 
anything. So you're preemptively making it sound like the PEP has no 
solid backing.


I haven't read the rest yet. The abstract is supposed to make me want to 
read it, and this one does not, so I stopped. (Might come back later 
when I'm not trying to catch up on my weekend's email though.)


Cheers,
Steve
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/KN7QUBTJGWWP7TZ4PWJIZNVFPWU7S2ID/


[Python-Dev] Re: Making code object APIs unstable

2021-08-16 Thread Steve Dower

On 8/16/2021 12:47 AM, Guido van Rossum wrote:
My current proposal is to issue a DeprecationWarning in PyCode_New() and 
PyCode_NewWithPosArgs(), which can be turned into an error using a 
command-line flag. If it's made an error, we effectively have (B); by 
default, we have (A).


Then in 3.13 we can drop them completely.


We definitely had legitimate use cases come up when adding positional 
arguments (hence the new API, rather than breaking the existing one, 
which was the first attempt at adding the feature).


I don't recall exactly what they are (perhaps Pablo does, or they may be 
in email/issue archives), but since they exist, presumably they are 
useful and viable _despite_ the bytecode varying between releases. This 
suggests there's probably a better API we should add at the same time - 
possibly some kind of unmarshalling or cloning-with-updates function?
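For what it's worth, CPython 3.8 did gain one API in exactly this "cloning-with-updates" spirit on the Python side: `types.CodeType.replace()`. A minimal sketch (the `co_name` override is purely illustrative):

```python
def f(x):
    return x + 1

code = f.__code__

# CodeType.replace() (new in 3.8) copies a code object while overriding
# selected fields, without spelling out every positional argument the
# way PyCode_New()/PyCode_NewWithPosArgs() require.
new_code = code.replace(co_name="g")

assert new_code.co_name == "g"
assert code.co_name == "f"   # the original code object is untouched
```

Whether a comparable helper belongs in the C API is the open question; the Python-level method at least shows the shape such an API could take.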


Cheers,
Steve
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/NXSTTN3CVCGAYSCUH24OEZZFIBOEJRBD/


[Python-Dev] Re: Repealing PEP 509 (Add a private version to dict)

2021-07-29 Thread Steve Dower

On 7/29/2021 6:17 PM, Barry Warsaw wrote:

On Jul 29, 2021, at 05:55, Steve Dower  wrote:


Maybe we should have a "Type" other than Standards Track for PEPs that are 
documenting implementation designs, rather than requirements for standardisation?


Wouldn’t Informational fill that need?


Perhaps, though my understanding was that Informational PEPs aren't tied 
to specific Python versions (in this case, a CPython release) and are 
meant to remain "Active" until completely redundant.


Maybe if the PEPs got a preamble covering "this describes part of the 
implementation of CPython as first released in x.y... may have changed 
since that time... refer to source code" it would be clear enough. We 
might want to move them to "Final" after implementation though, unlike 
other Informational PEPs.


Seems like there's _just enough_ nuance there to justify a different 
type. But then, presumably there's also infrastructure around PEP types 
as well that I'm unaware of (and not volunteering to update!), so those 
with the investment in it can have it however they'd like :)


Cheers,
Steve

Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/F2WEKHD77EAWFTCW7PWDEESSZMBL64BI/


[Python-Dev] Re: Repealing PEP 509 (Add a private version to dict)

2021-07-29 Thread Steve Dower

On 7/29/2021 11:41 AM, Mark Shannon wrote:
The dictionary version number is currently unused in CPython and just 
wastes memory. I am not claiming that we will never need it, just that
we shouldn't be required to have it. It should be an internal 
implementation detail that we can add or remove depending on requirements.


Sounds reasonable.

Maybe we should have a "Type" other than Standards Track for PEPs that 
are documenting implementation designs, rather than requirements for 
standardisation?


If/when we ever get to a point where other implementations want to claim 
to implement "standard" Python, having an explicit distinction here 
would be helpful.


Cheers,
Steve
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/W7TRTCE3PLLKHPWKFYMRZN4KHFK65RWD/


[Python-Dev] Re: [slightly OT] cryptographically strong random.SystemRandom()

2021-07-12 Thread Steve Dower

On 7/12/2021 4:11 PM, Dan Stromberg wrote:
It looks like CPython could do better on Windows: SystemRandom (because 
of os.urandom()) is good on Linux and mac, but on Windows they use the 
CryptGenRandom deprecated API


Supporting detail: 
https://docs.microsoft.com/en-us/windows/win32/api/wincrypt/nf-wincrypt-cryptgenrandom 



Should I open an issue?


Please do, but note that the API is only deprecated because it was not 
very extensible and the new API is much more comple... er... extensible.


There's nothing wrong with the randomness from the function. It'll be 
using the new API under the covers. So this is an enhancement, not a 
security issue, and should only target 3.11.
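For reference, callers never see the underlying OS API at all: `random.SystemRandom` (and the `secrets` module built on it) simply route through `os.urandom()`, so the Windows API swap is invisible at the Python level:

```python
import random
import secrets

# SystemRandom draws from os.urandom(), i.e. the OS CSPRNG -- whichever
# system API (such as the Windows one discussed above) backs it.
sr = random.SystemRandom()
token = sr.getrandbits(128)
assert 0 <= token < 2 ** 128

# secrets (3.6+) is the recommended front door for security-sensitive
# values and uses the same source.
assert len(secrets.token_bytes(16)) == 16
```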


Cheers,
Steve
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/N6K3GLAEOXYFAJY5S4Z6APUCKXVYTIEV/


[Python-Dev] Re: Is the Python review process flawed?

2021-07-01 Thread Steve Dower

On 7/1/2021 7:04 PM, Christopher Barker wrote:
PS: All that being said, we, as a community, could do better. For 
instance, someone like me could do high-level triage on bug reports -- I 
need to set aside some time to do that.


In addition/instead of triage, one thing we (as core maintainers) really 
need, which the more involved members of the wider community can 
provide, is *context*.


I presume that if you're on python-dev, you probably have (or have 
access to) a significant Python codebase. Many core developers do as 
well, but we know we don't have great view across all the different ways 
that Python gets used. Some of the "worst" regressions are due to 
scenarios that we simply didn't know existed until they were broken.


We don't need (or wouldn't benefit from) just seeing all of the code, 
because it's the understanding that's important. How would changes 
impact *your* projects, *your* users. Anywhere you can chip in with "we 
do XYZ and this change would [not] impact us this way" or "we'd have to 
mitigate it by doing ..." is very useful.


It's not just a vote, it's the added context that's helpful. Ultimately, 
"votes" don't count for much in merge/reject decisions, because we're 
trying to take into account all users, not just those who show up to 
click a button. Rational arguments based in real and personal impact 
statements are far more valuable, especially if you have context that we 
don't.


So feel free to drop into bugs or PRs when you have relevant context for 
the impact of a change. Try and be as detailed about the impact on you 
as you're allowed (or can be without distracting from the issue). Don't 
be offended if we still think that the impact is "worth it" or that your 
situation isn't actually as impacted as you think (that probably 
indicates that we need to do a better NEWS/Whats New entry for the change).


And yes, these can be *significant contributions* to the project, so if 
you find yourself in a position where you have to justify it to someone 
outside of the project (e.g. using work time for open source), hopefully 
any core developer you've interacted with a bit will gladly write a 
short endorsement for that.


Cheers,
Steve
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/BB3YX3J5NZHBUKCV67O34TPVQLD2CLWA/


[Python-Dev] Re: Making PY_SSIZE_T_CLEAN not mandatory.

2021-06-17 Thread Steve Dower

On 6/9/2021 2:20 PM, Petr Viktorin wrote:

On 09. 06. 21 13:09, Paul Moore wrote:

Also, I often use the stable ABI when embedding, so that
I can replace the Python interpreter without needing to recompile my
application and redeploy new binaries everywhere (my use case is
pretty niche, though, so I wouldn't like to claim I represent a
typical user...).


I hope this use case becomes non-niche. I would love it if embedders 
tell people to just use any Python they have lying around, instead of 
vendoring it (or more realistically, embedding JS or Lua instead).


I also hope it becomes non-niche, but I'd rather you started 
embedding/vendoring CPython than using anything that just happens 
to be lying around.


The number one issue that *all* of my customers (and their customers) 
have is installation. For most of them, the best way to solve it is to 
not make them install Python themselves, which in many cases means 
vendoring. The more acceptable and easy we can make this process, the 
more Python will be a viable choice against JS or Lua (though with all 
the other C API, threading and initialization issues, it's unlikely that 
embedding CPython is going to become significantly more attractive than 
those two - even IronPython still lives on for embedding because it 
works so well).


Cheers,
Steve
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/36MZXQC3WGDXO6SFTPGT7RC34EMFPP6E/


[Python-Dev] Re: change of behaviour for '.' in sys.path between 3.10.0a7 and 3.10.0b1

2021-06-04 Thread Steve Dower

On 6/3/2021 7:42 PM, Senthil Kumaran wrote:

On Thu, Jun 03, 2021 at 07:08:06PM +0100, Robin Becker wrote:

The regression may well be a platform issue. I am by no means an expert at
building python; I followed a recipe from the ARCH PKGBUILD of some time


I meant the change in the diff we were suspecting was supposed to be
"Windows" specific. But interesting that it was observed in ARCH. The
non-windows piece of changes were probably responsible


While we were investigating the reliably broken behaviour, we figured 
that related behaviour was *unreliably* broken on all platforms.


Specifically, if you imported a module through a relative path (resolved 
to CWD), changed the CWD, cleared the module cache, and reimported, you 
would still get the original module, contrary to the intent of getting 
the newly resolved module. ("Correct" code would have also cleared the 
import path caches, not just sys.modules.) This was because the module 
info was cached using the relative path, and so would be used later even 
though its absolute path had changed.


Someone reported this change in 3.8 and we decided to revert it in 
existing releases, because the bugfix was for an obscure enough 
situation that it really wasn't worth breaking any existing versions. 
For unreleased versions, it's better to fix it so that imports will be 
more reliable in the future.


Based on Robin's original post, it looks like this is exactly the 
scenario we wanted to fix, and it's probably exposed an underlying issue 
in that test suite.


Unfortunately, import caching has changed a lot since Python 2.7, so 
it's very unlikely that one approach will be able to do reliable 
reloading across all of those versions. Your best bet since 3.3 is to 
use the importlib module's methods (but your *best* bet overall is to 
run separate test suites in their own process so you can avoid the CWD 
changes).
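A sketch of the importlib-based approach (the helper name is illustrative, not a drop-in for any particular test suite):

```python
import importlib
import sys

def fresh_import(name):
    """Re-import `name`, resolving sys.path entries anew.

    Clearing sys.modules alone is not enough: the path-finder caches
    must also be invalidated, or a module first found via a relative
    path entry (resolved against the old CWD) may be returned again.
    """
    sys.modules.pop(name, None)
    importlib.invalidate_caches()
    return importlib.import_module(name)

mod = fresh_import("json")
assert mod.loads("[1, 2]") == [1, 2]
```

Even with this, running each test suite in its own process remains the more robust option when CWD changes are involved.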


Cheers,
Steve
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/RDIMZA2KVWZSYUVLWQUZT4TE5BH6QWOV/


[Python-Dev] Re: GDB not breaking at the right place

2021-05-25 Thread Steve Dower

On 5/24/2021 9:38 PM, Guido van Rossum wrote:
To the contrary, I think if you want the CI jobs to be faster you should 
add the CFLAGS to the configure call used to run the CI jobs.


Big +1

We should have the most useful interactive development/debugging options 
set by default (or with an obvious option), and use the complex 
overrides in our own automated systems.


Cheers,
Steve
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/TNHXYXH6A77J5Y43JBLVMXC3KF722H6T/


[Python-Dev] Re: Question for potential python development contributions on Windows

2021-05-24 Thread Steve Dower
When you're installing Visual Studio the C++ tools version is listed 
under the selected components as "v14x".


However, at this stage, the *only* version in circulation is 14.x - mine 
shows v142. Until the 14 changes to a "15", it will be binary compatible 
and so you can use any version at all to build CPython and/or extension 
modules.


Our official releases are always using relatively up-to-date compilers, 
but provided the compatibility is maintained on Microsoft's side, 
there's no need to worry about the specific versions.


Cheers,
Steve

On 5/24/2021 4:49 PM, Guido van Rossum wrote:

How do you check that the C++ tools are v14.x?

On Mon, May 24, 2021 at 1:43 AM Łukasz Langa wrote:




On 20 May 2021, at 07:03, pjfarl...@earthlink.net wrote:

The Python Developers Guide specifically states to get VS2017 for
developing or enhancing python on a Windows system.

Is it still correct to specifically use VS2017, or is VS2019 also
acceptable?


We have to update the devguide. VS 2019 is supported, probably even
recommended, as long as the C++ tools are v14.x.

Cheers,
Łukasz

Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/MJVPA353LMZM4OP6U63QJ5PWQ73ZCGGW/


[Python-Dev] Re: The repr of a sentinel

2021-05-21 Thread Steve Dower

On 5/21/2021 9:36 AM, Petr Viktorin wrote:

On 21. 05. 21 3:23, Eric V. Smith wrote:

On 5/20/2021 3:24 PM, Ronald Oussoren via Python-Dev wrote:

One example of this is the definition of dataclasses.field:

dataclasses.field(*, default=MISSING, default_factory=MISSING,
repr=True, hash=None, init=True, compare=True, metadata=None)


Completely agree. I'm opposed to Ellipsis as a sentinel for this 
reason, at least for dataclasses. I can easily see wanting to store an 
Ellipsis in a field of a dataclass that's describing a function's 
parameters. And I can even see it being the default= value. Not so 
much default_factory=, but they may as well be the same.


And this argument also works for any other single value.
Including the original None.

(It just might not be obvious at first, before that single value starts 
being used in lots of different contexts.)


I think it's a fairly obvious case, and a legitimate one to acknowledge 
(the array one less so - we're talking about a "missing parameter" 
sentinel, so you ought to be testing for it immediately and replacing 
with your preferred/secret default value, not using it for indexing 
without even checking it).


In the example above, it's fairly easy to pass "lambda: ..." as the 
default_factory to work around it, and besides, the existence of rare 
edge cases doesn't mean you have to force everyone into acting like 
they're a rare edge case.


All the other situations where we want arguments with unspecified 
default values can use ..., and the few cases where ... is a valid value 
(semantically, for the API, not syntactically) can spend the time 
figuring out a different API design.
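The workaround reads like this -- Ellipsis stored as a genuine *value*, while dataclasses keeps its own MISSING sentinel for "no default given" (the `Param` class is a made-up illustration):

```python
from dataclasses import MISSING, dataclass, field, fields

@dataclass
class Param:
    # We genuinely want ... stored as the default value, so we supply
    # it via default_factory rather than trying to use it as a
    # "missing" marker.
    default: object = field(default_factory=lambda: ...)

p = Param()
assert p.default is ...

# "Was a default given?" is answered by MISSING, not by Ellipsis.
f = fields(Param)[0]
assert f.default is MISSING
assert f.default_factory is not MISSING
```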


Cheers,
Steve
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/EJYSP7KBD2XA26QTE2FVQP4SS2FDLPB2/


[Python-Dev] Re: The repr of a sentinel

2021-05-14 Thread Steve Dower

On 14May2021 0622, micro codery wrote:


There was a discussion a while back ( a year or so?? ) on
Python-ideas that introduced the idea of having more "sentinel-like"
singletons in Python -- right now, we only have None. 

Not quite true, we also have Ellipsis, which already has a nice repr 
that both reads easily and still follows the convention of eval(repr(x)) 
== x. It also is already safe from instantiation, survives pickle 
round-trip and is multi-thread safe.
So long as you are not dealing with scientific projects, it seems a 
quick (if dirty) solution to having a sentinel that is not None.


I don't think using "..." to indicate "some currently unknown or 
unspecified value" is dirty at all, it seems perfectly consistent with 
how we use it in English (and indexing in scientific projects, for that 
matter, where it tends to imply "figure out the rest for me").


All that's really missing is some kind of endorsement (from python-dev, 
presumably in the docs) that it's okay to use it as a default parameter 
value. I can't think of any reason you'd need to accept Ellipsis as a 
*specified* value that wouldn't also apply to any other kind of shared 
sentinel.
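In practice, such an endorsement would just bless code like the following, where `...` means "not supplied" and None stays available as a real value (the function and its default are hypothetical examples):

```python
def set_timeout(timeout=...):
    # '...' marks "caller did not pass anything"; None remains a
    # legitimate value meaning "no timeout at all".
    if timeout is ...:
        timeout = 30.0   # hypothetical library default
    return timeout

assert set_timeout() == 30.0
assert set_timeout(None) is None
assert set_timeout(5) == 5
```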


Cheers,
Steve
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/VCKTZF45OAFYNIOL5IVNI5HG34BGJBEA/


[Python-Dev] Re: The repr of a sentinel

2021-05-13 Thread Steve Dower

On 13May2021 1248, Petr Viktorin wrote:

On 13. 05. 21 11:45, Antoine Pitrou wrote:


Le 13/05/2021 à 11:40, Irit Katriel a écrit :



On Thu, May 13, 2021 at 10:28 AM Antoine Pitrou wrote:



  I agree that  is a reasonable spelling.


I initially suggested , but now I'm not sure because it 
doesn't indicate what happens when you don't provide it (as in, what 
is the default value).  So now I'm with  or .


"" makes think of a derived class, and leaves me confused. 
"" is a bit better, but doesn't clearly say what the default 
value is, either.  So in all cases I have to read the docstring in 
addition to the function signature.




Is  the term you're looking for?


Perhaps  or ?

Cheers,
Steve
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/RSRBWH2UK2MKZN7O3PHSNVZFZEE7JIVJ/


[Python-Dev] Re: pth file encoding

2021-03-17 Thread Steve Dower

On 3/17/2021 7:34 PM, Ivan Pozdeev via Python-Dev wrote:

On 17.03.2021 20:30, Steve Dower wrote:

On 3/17/2021 8:00 AM, Michał Górny wrote:

How about writing paths as bytestrings in the long term?  I think this
should eliminate the necessity of knowing the correct encoding for
the filesystem.


That's what we're trying to do, the problem is that they start as 
strings, and so we need to convert them to a bytestring.


That conversion is the encoding ;)

And yeah, for reading, I'd use a UTF-8 reader that falls back to 
locale on failure (and restarts reading the file). But for writing, we 
need the tools that create these files (including Notepad!) to use the 
encoding we want.




I don't see a problem with using a file encoding specification like in 
Python source files.

Since site.py is under our control, we can introduce it easily.

We can opt to allow only UTF-8 here -- then we wait out a transitional 
period and disallow anything else than UTF-8 (then the specification can 
be removed, too).


The only thing we can introduce *easily* is an error when the 
(exclusively third-party) tools that create them aren't up to date. 
Getting everyone to specify the encoding we want is a much bigger 
problem with a much slower solution.


This particular file is probably the worst case scenario, but preferring 
UTF-8 and handling existing files with a fallback is the best we can do 
(especially since an assumption of UTF-8 can be invalidated on a 
particular file, whereas most locale encodings cannot). Once we openly 
document that it should be UTF-8, tools will have a chance to catch up, 
and eventually the fallback will become harmless.
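The read side Steve describes might look like this -- a sketch of "prefer UTF-8, fall back to the locale encoding and restart", not the actual site.py implementation:

```python
import locale

def read_pth_lines(path):
    # Try UTF-8 first; on failure, restart the read using the locale
    # ("ANSI") encoding so existing files keep working.
    try:
        with open(path, encoding="utf-8") as f:
            return f.read().splitlines()
    except UnicodeDecodeError:
        fallback = locale.getpreferredencoding(False)
        with open(path, encoding=fallback) as f:
            return f.read().splitlines()
```

The key property is the one noted above: a non-UTF-8 byte sequence usually *invalidates* the UTF-8 assumption outright, so the fallback only triggers for genuinely legacy-encoded files.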


Cheers,
Steve
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/5B53GCQNYXFBYAHSJKI6I34XAV6S67HN/


[Python-Dev] Re: pth file encoding

2021-03-17 Thread Steve Dower

On 3/17/2021 6:08 PM, Stefan Ring wrote:

A somewhat radical idea carrying this to the extreme would be to use
UTF-16 (LE) on Windows. After all, this _is_ the native file system
encoding, and Notepad will happily read and write it.


I'm not opposed to detecting a BOM by default (when no other encoding is 
specified), but that won't help most UTF-8 files which these days come 
with no marker at all.
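Detecting a BOM is cheap; a sketch of what "detect by default, when no other encoding is specified" could mean:

```python
import codecs

def encoding_from_bom(raw: bytes, default=None):
    # Check the longer UTF-32 BOMs before UTF-16: BOM_UTF32_LE begins
    # with the same two bytes as BOM_UTF16_LE.
    for bom, enc in (
        (codecs.BOM_UTF32_LE, "utf-32-le"),
        (codecs.BOM_UTF32_BE, "utf-32-be"),
        (codecs.BOM_UTF8, "utf-8-sig"),
        (codecs.BOM_UTF16_LE, "utf-16-le"),
        (codecs.BOM_UTF16_BE, "utf-16-be"),
    ):
        if raw.startswith(bom):
            return enc
    return default   # unmarked file: no BOM to go on

assert encoding_from_bom(codecs.BOM_UTF16_LE + "hi".encode("utf-16-le")) == "utf-16-le"
assert encoding_from_bom(b"plain utf-8, no marker") is None
```

As the message notes, the `None` branch is the common case for modern UTF-8 files, which is why BOM sniffing alone doesn't solve the problem.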


I wouldn't change the default file encoding for writing though (except 
to unmarked UTF-8, and only with the compatibility approach Inada is 
working on). Everyone has basically come around to the idea that UTF-8 
is the only needed encoding, and I'm sure if it had existed when Windows 
decided to support a universal character set, it would have been chosen. 
But with what we have now, UTF-16-LE is not a good choice for anything 
apart from compatibility with Windows.


Cheers,
Steve

Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/LTEJSNOH6EHESXSMXSW352JFG2SF7ZMX/


[Python-Dev] Re: pth file encoding

2021-03-17 Thread Steve Dower

On 3/17/2021 8:00 AM, Michał Górny wrote:

How about writing paths as bytestrings in the long term?  I think this
should eliminate the necessity of knowing the correct encoding for
the filesystem.


That's what we're trying to do, the problem is that they start as 
strings, and so we need to convert them to a bytestring.


That conversion is the encoding ;)

And yeah, for reading, I'd use a UTF-8 reader that falls back to locale 
on failure (and restarts reading the file). But for writing, we need the 
tools that create these files (including Notepad!) to use the encoding 
we want.


Cheers,
Steve

Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/MVD67FOAJRCNR2XXLJ4JDVFPYGZWYLDP/


[Python-Dev] Re: Have virtual environments led to neglect of the actual environment?

2021-02-24 Thread Steve Dower

On 2/24/2021 4:26 PM, Christian Heimes wrote:

On 24/02/2021 15.16, Random832 wrote:

On Wed, Feb 24, 2021, at 06:27, Christian Heimes wrote:

Separate directories don't prevent clashes and system breakage. But they
provide an easy way to *recover* from a broken system.


I think it could be turned into a way to prevent them by A) having 
site-packages always take precedence over dist-packages [i believe this is 
already the case] in normal usage and B) providing an option to the 
interpreter, used by system scripts, to exclude site-packages entirely from the 
path.

Basically, site-packages would effectively be layered on top of "Lib +
dist-packages" in a similar way to how a venv is layered on top of the
main python installation - the inverse of the suggestion someone else in
the thread made for the system python to be a venv. This wouldn't
*exactly* be a venv because it wouldn't imply the other things that
entering a venv does such as "python" [and script names such as pip]
being an alias for the correct version of python, but it would provide
the same kind of one-way isolation, whereby the "system environment" can
influence the "normal environment" and not vice-versa, in the same way
that packages installed in the main installation affect a venv [unless
system-site-packages is disabled] but the venv obviously has no effect
on the main installation.


Yes, you are describing one major aspect of my idea for a system Python
interpreter. I'm happy to read that other users are coming to similar
conclusions. Instead of an option I'd create a new executable to lock
down additional things (e.g. isolated mode, code verification hook). A
separate executable would also allow distros to provide a stripped down
interpreter that does not cause bad user experience.


I mean, this is _precisely_ what PEP 370 defines (including the "-s" 
option and PYTHONNOUSERSITE env variable to provide that one-way isolation).


Is the problem that pip doesn't use it by default? Or that the distros 
went and made patches for the runtime rather than patching pip? (For 
Windows installs from the Store, where even admin rights can't do an 
all-users package install, I added a new config file location for pip to 
change this default, but a patch would also have worked.)
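The PEP 370 switches Steve mentions are easy to observe from Python itself (a quick sketch):

```python
import subprocess
import sys

# -s (or PYTHONNOUSERSITE=1) keeps the per-user site-packages
# directory out of sys.path -- the one-way isolation PEP 370 defines.
out = subprocess.run(
    [sys.executable, "-s", "-c",
     "import sys; print(sys.flags.no_user_site)"],
    capture_output=True, text=True, check=True,
)
assert out.stdout.strip() == "1"
```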


Maybe we need an easier way to patch the location of user site packages? 
I also had to do this for the Store install on Windows, and it's a 
little bit of a hack... but maybe having an official recommendation 
would help encourage distributors to use the mechanism?


Cheers,
Steve
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/IHEECOXYDPJ6ZQJE2QTGPDOOCOP7J37A/


[Python-Dev] Re: Move support of legacy platforms/architectures outside Python

2021-02-22 Thread Steve Dower

On 2/22/2021 5:18 PM, Matthias Klose wrote:

On 2/21/21 1:13 PM, Victor Stinner wrote:
Document what is supported, be inclusive about anything else.  Don't make a
distinction yet between legacy and upcoming new architectures.


I agree with this, and I don't see any reason why we shouldn't just use 
the list of stable buildbot platforms as the "supported" list. That 
makes it really clear what the path is to push support onto upstream 
(join up and bring a buildbot with you), and also means that we've got a 
physically restricted set of machines to prove work before doing a release.


Actively blocking anything at all seems unnecessary at the source/build 
level. That's for pre-built binaries and other conveniences.


Cheers,
Steve
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/BKZTBXDYFIEBMVELBOVQ5KGM2ZEXVT2Z/


[Python-Dev] Re: PEP 624: Remove Py_UNICODE encoder APIs

2021-02-01 Thread Steve Dower

On 2/1/2021 5:16 PM, Christian Heimes wrote:

On 01/02/2021 17.39, M.-A. Lemburg wrote:

Can you explain where wchar_t* type is appropriate and how two
conversions is a performance bottleneck?


If an extension has a wchar_t* string, it should be easy
to convert this in to a Python bytes object for use in Python.


How much software actually uses wchar_t these days and interfaces with
Python? Do you have examples for software that uses wchar_t and would
benefit from wchar_t support in Python?

I did a quick search for wcslen in all shared libraries and binaries on
my system


Yeah, you searched the wrong kind of system ;)

Pick up a Windows machine, cross-platform code that originated on 
Windows, anything that interoperates with Java or .NET as well, or uses 
wxWidgets.


I'm not defending the choice of wchar_t over UTF-8 (but I can: most of 
these systems chose Unicode before UTF-8 was invented and never took the 
backwards-incompatible change because they were so popular), but if we 
want to pragmatically weigh the needs of our users above our desire for 
purity, then we should try and support both equally wherever possible.
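As an aside, the round-trip between native wchar_t* strings and Python str objects can be seen even from pure Python via ctypes, whose c_wchar_p maps onto the platform's wchar_t* (a minimal sketch, not tied to any particular C API proposal):

```python
import ctypes

# A wchar_t* string crossing into Python without explicit re-encoding:
# create_unicode_buffer allocates a native wchar_t array.
buf = ctypes.create_unicode_buffer("héllo")
p = ctypes.cast(buf, ctypes.c_wchar_p)   # raw wchar_t* view of the buffer
assert p.value == "héllo"

# sizeof(wchar_t) is platform-dependent: 2 on Windows (UTF-16 code
# units), 4 on most Unixes (UTF-32).
print(ctypes.sizeof(ctypes.c_wchar))
```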


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/GYUWANE7IMPU45A257UYQD4ZGUDE6QUX/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Why aren't we allowing the use of C11?

2021-01-30 Thread Steve Dower

On 29Jan2021 1715, Victor Stinner wrote:

It seems like declaring a TLS in libpython and using it from an
extension module (another dynamic library) is causing issues depending
on the linker. It "should" work on macOS, but it doesn't.


I'm pretty sure this is not defined in any calling convention that would 
be able to cross dynamic library boundaries, or at least I've never seen 
it shown how this information would be encoded.


In general, provided the features don't leak out of our compilation 
process (i.e. cross dynamic library boundaries or show up in 
public/implicit header files), I don't have any issue allowing them to 
be used.
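For readers unfamiliar with thread-local storage, a rough Python-level analogue of what a C11 `_Thread_local` variable provides (per-thread state, minus the cross-library linker concerns discussed above):

```python
import threading

tls = threading.local()   # each thread sees its own attributes on this object

def worker(results, idx):
    tls.value = idx                 # this write is private to the thread
    results[idx] = tls.value        # reads back the thread's own value

results = {}
threads = [threading.Thread(target=worker, args=(results, i)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert results == {0: 0, 1: 1, 2: 2}
```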


Though I don't want to speak for the people who maintain CPython distros 
on smaller/slower/more focused platforms. I'm not sure how best to reach 
them, but I'd rather do that than find out when they complain on the 
issue tracker. Only thing I'm certain of is that we shouldn't assume 
that we know everyone who ever builds CPython.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/NWK3O5DPDA2NIP5EEE6JOLKLENKBCAOL/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Why aren't we allowing the use of C11?

2021-01-30 Thread Steve Dower

On 29Jan2021 2227, Random832 wrote:

On Thu, Jan 28, 2021, at 22:57, Emily Bowman wrote:
  
On Thu, Jan 28, 2021 at 1:31 PM MRAB wrote:

I have Microsoft Visual Studio Community 2019 Version 16.8.4 (it's free)
and it supports C11 and C17.


While an upgrade for Community is free, for Pro/Enterprise without an
annual license it's not.


Something that's hard for me to find a clear answer for in a few minutes of 
searching:

Does the current version of the *Windows SDK* still come with a compiler, and 
can it be used without the Visual Studio IDE [the situation is confusing 
because it seems to install through the visual studio installer now] or at 
least without a license for Visual Studio Pro/Enterprise [i.e. by users who do 
not qualify for community edition] or with earlier versions of the Visual 
Studio IDE?


The Windows SDK contains system headers/libs, no compilers.

Visual Studio includes MSVC, and is one of only a few bundles that does 
(and if you have access to any of the others, you'll know by the massive 
hole in your bank account, so let's assume Visual Studio). It will also 
install the Windows SDK on your behalf if the box is checked, which it 
is by default when you select C++ support.


*Anyone* is allowed to use Visual Studio Community to work on open-source 
software. So the only way to not qualify is to refuse to share whatever 
you're working on. Obviously some people will be in that position, but 
it's really not that onerous an ask (almost like the GPL, if you squint 
;) ). Anyone contributing to CPython is allowed to use VS Community to 
do so. [1]


Additionally, you can install VS Community alongside any other edition 
or version and they will be kept separate (as much as is possible, which 
is on Microsoft's side to deal with and shouldn't impact licensing).


Finally, the VS Build Tools are only licensed to you if you have a 
Visual Studio license, which means you need to have qualified for VS 
Community (or paid for a higher one) to use the build tools. Again, if 
you're building open-source software, you've qualified, so it's 
basically a non-issue for us.


Hopefully that settles some concerns. IANAL, but I am on the Visual 
Studio team and have been involved in discussions around how VS 
Community licensing applies to open source ecosystems ever since it was 
first created. If you're still concerned, go pay a lawyer to give you 
more opinions, because you honestly aren't going to find a more informed 
opinion for free on the internet ;)


Cheers,
Steve

1: See page 8 of 
https://visualstudio.microsoft.com/wp-content/uploads/2020/03/Visual-Studio-Licensing-Whitepaper-Mar-2020.pdf

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/WBVMFEGGL4NOGDHUWZWM2C6PDYAIAIHD/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: New sys.module_names attribute in Python 3.10: list of all stdlib modules

2021-01-26 Thread Steve Dower

On 1/26/2021 8:32 PM, Steve Holden wrote:
If the length of the name is any kind of issue, since the stdlib 
only contains modules (and packages), why not just sys.stdlib_names?


And since the modules can vary between platforms and builds, why 
wouldn't this be sysconfig.stdlib_names rather than sys.stdlib_names?


"Modules that were built into the stdlib" sounds more like sysconfig, 
and having an accurate list seems better than one that specifies (e.g.) 
distutils, ensurepip, resource or termios when those are absent.
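For reference, the attribute under discussion ultimately shipped in Python 3.10 as sys.stdlib_module_names, a frozenset that lists stdlib modules including platform-specific ones absent from the current build (a sketch, guarded for older versions):

```python
import sys

# sys.stdlib_module_names exists on 3.10+; fall back to an empty set
# elsewhere so the sketch runs on any version.
names = getattr(sys, "stdlib_module_names", frozenset())
if names:
    assert "os" in names
    # Platform-specific modules are listed even when not buildable here.
    print("winreg" in names)
```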


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/7JBPSATSJMONLAGEU5PKTJHZ72MFRXBK/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: The SC is accepting PEP 632: deprecating distutils

2021-01-22 Thread Steve Dower

On 1/21/2021 6:30 PM, Brett Cannon wrote:
On behalf of the SC, I'm happy to announce that we have chosen to accept 
PEP 632. Congrats, Steve, and thanks for the work on the PEP!


I'll let Steve outline what the next steps are for implementing the PEP.


Thanks. Work from this point on will be tracked at 
https://bugs.python.org/issue41282


The next immediate step is finishing off 
https://github.com/python/cpython/pull/23142 to do the biggest task of 
bringing sysconfig up to compatibility with distutils.sysconfig. Once 
merged, we'll add the deprecation warning on import 
(Lib/distutils/__init__.py) and close all the current issues in the 
tracker (and any associated PRs). That should be done before 3.10 Beta 1.
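A taste of the compatibility target: most common distutils.sysconfig queries already have sysconfig counterparts (a sketch of the modern spellings, not an exhaustive mapping):

```python
import sysconfig

# Install-scheme directories, previously obtained via distutils.sysconfig
# helpers such as get_python_lib():
paths = sysconfig.get_paths()
assert "purelib" in paths and "include" in paths

# Build-time configuration variables:
print(sysconfig.get_config_var("EXT_SUFFIX"))
print(sysconfig.get_python_version())
```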


From there, I'll spend time publicising the deprecation as much as 
possible (and everyone is welcome to help out) to find enough cases for 
retaining functionality that we can make sensible decisions on where to 
move things. From discussions so far, most of the functionality that 
people expect is well covered by either setuptools or the standard 
library. The rest, officially, should be copied into your own project as 
needed.


Eventually, my plan is to move distutils into our Tools folder and 
update the setup.py we use for building stdlib extension modules (and 
the PEG parser tests) to point at it there. However, if someone else 
updates our build system to do it all through the Makefile, that won't 
be necessary, and I know a few people were keen to do it. I'm not 
*requiring* that it be done, and we have an easy fallback plan, but I'll 
happily *encourage* anyone who wants to update the Makefile to go ahead.


One thing I do *not* intend to do is go through and thoroughly document 
everything that's being deleted. Some have asked or implied that this 
should be done, but it's just not going to happen. Everything beyond the 
package build support and sysconfig is helper functions, and mostly 
pretty rough ones to be honest. There are almost certainly 1st or 3rd 
party modules with better quality implementations, and if not then I 
expect there will be (and if you're one of the really concerned people, 
the best thing you can do to help now that the PEP is accepted is to 
create a new library with these helpers).


Feel free to raise any questions here or on the bug linked above.

Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/PD2XJQTNSOA4HNCL3ZHOKPFAC3N4P5M2/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Enhancement request for PyUnicode proxies

2021-01-04 Thread Steve Dower

On 12/29/2020 5:23 PM, Antoine Pitrou wrote:

The third option is to add a distinct "string view" protocol.  There
are peculiarities (such as the fact that different objects may have
different internal representations - some utf8, some utf16...) that
make the buffer protocol suboptimal for this.

Also, we probably don't want unicode-like objects to start being usable
in contexts where a buffer-like object is required (such as writing to
a binary file, or zlib-compressing a bunch of bytes).


I've had to deal with this problem in the past as well (WinRT HSTRINGs), 
and this is the approach that would seem to make the most sense to me.


Basically, reintroduce PyString_* APIs as an _abstract_ interface to 
str-like objects.


So the first line of every single one can be PyUnicode_Check() followed 
by calling the _concrete_ PyUnicode_* implementation. And then we 
develop additional type slots or whatever is necessary for someone to 
build an equivalent native object.


Most "is this a str" checks can become PyString_Check, provided all the 
APIs used against the object are abstract (PyObject_* or PyString_*). 
Those that are going to mess with internals will have to get special 
treatment.


I don't want to make it all sound too easy, because it probably won't 
be. But it should be possible to add a viable proxy layer as a set of 
abstract C APIs to use instead of the concrete ones.
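Transposed to Python for illustration, the dispatch pattern being proposed looks roughly like this (all names here are hypothetical; the real work would be C type slots, not a `__stringview__` method):

```python
# Hypothetical sketch: an "abstract" operation fast-paths the concrete
# type, then falls back to a protocol method for proxy objects.
def string_length(obj):
    if isinstance(obj, str):          # the PyUnicode_Check() analogue
        return len(obj)               # concrete PyUnicode_* analogue
    view = getattr(obj, "__stringview__", None)  # hypothetical slot
    if view is not None:
        return len(view())
    raise TypeError(f"not a string-like object: {type(obj).__name__}")

class HStringProxy:
    """Stand-in for a native str-like object (e.g. wrapping an HSTRING)."""
    def __init__(self, data):
        self._data = data
    def __stringview__(self):
        return self._data

assert string_length("abc") == 3
assert string_length(HStringProxy("abcd")) == 4
```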


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/TC3BZJX4DGC2WV32AHIX7A57HQNJ2EMO/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Where is the SQLite module maintainer

2021-01-04 Thread Steve Dower

On 12/28/2020 11:32 AM, Erlend Aasland wrote:
On 27 Dec 2020, at 22:38, Christian Heimes wrote:

How about you put your name in the expert index instead of him? :)


Thanks for your confidence, but I'm far from an expert :)


Neither is anyone else :)

I'm not looking to start any mentoring right now, but if someone else is 
and you're interested, it should be easy to get you connected. I am more 
than happy to endorse you as a good candidate for becoming our SQLite 
expert.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/AM7AVQH6L4AQVTB5UFOBYXICDPXYZBKQ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Advice / RTFM needed for tool setup to participate in python development from a Windows host

2020-12-17 Thread Steve Dower

On 16Dec2020 2114, pjfarl...@earthlink.net wrote:

If anyone has or knows of step-by-step instructions on how to set that debug 
environment up and start the outer-level script with debug breakpoints in the 
DLL I would greatly appreciate it.  I'm also doing my own searches for 
tutorials on debugging python with VS20xx, but have not read/viewed one of 
those yet.


Hi Peter

Hopefully this document is able to help you get set up: 
https://docs.microsoft.com/en-us/visualstudio/python/debugging-mixed-mode-c-cpp-python-in-visual-studio?view=vs-2019


It gives you a few options, depending on whether you've created a Python 
project, a C++ project, or are attaching to a running process. Hopefully 
one of those will suit you. There are some videos out there as well, but 
they're a little older. (If you don't see Python projects, you'll need 
to go back to the Visual Studio installer, select the Python workload 
and the Python Native Development option.)


If you run into problems, there's a "Report a problem" (under Help/Send 
Feedback) inside VS that will get directly to the team, though most 
people are on vacation for the next few weeks. (And yes, "the official 
docs aren't good enough for me to follow" is a problem ;) )


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/5WV7VN3TSIZ5SAPMDLMIWQBMEWRT6OMU/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Who is target reader of tutorial?

2020-11-06 Thread Steve Dower

Hopefully that was a dataclass :)

But yes, point taken. Classes need to be there. And now I've gone and 
re-read the table of contents for the tutorial, I really don't have any 
complaints about the high-level ordering. It does seem to go 
unnecessarily deep in some areas (*very* few people will ever need 
position-only parameters, for example, and I'd say all special 
parameters matter more in the tutorial because of how you _call_ them, 
rather than how you define them).


Cheers,
Steve

On 06Nov2020 1714, Guido van Rossum wrote:
I agree with you that the tutorial should focus at users, not library 
developers. But assuming that users will never write a class seems 
wrong. For example, while ago I went through a PyTorch tutorial, which 
assumes barely any programming knowledge, and yet the first or second 
example has the user write a class, as this is apparently the 
conventional way to store parameters for ML models.


--Guido

On Fri, Nov 6, 2020 at 8:32 AM Steve Dower wrote:


It would also be nice for the tutorial to separate between "things you
need to know to use Python" vs "things you need to write a Python
library".

For example, the fact that operators can do different things for
different values (e.g. int, str, list, pathlib) would be in the first
category, while the details of how to override operators can wait for
the second.

I see many people suffer from content that goes too deep too quickly,
and I'm more and more convinced over time that this is the right place
to draw a separator for Python. Many devs are just using the language
and never implementing a class (or often, even writing a function).
Having a canonical tutorial to get these users through this stage first
before going deeper would be great.

Cheers,
Steve

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/VVJOMFE44ZO6UTA432EQGPSHZI23ULQU/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Who is target reader of tutorial?

2020-11-06 Thread Steve Dower
It would also be nice for the tutorial to separate between "things you 
need to know to use Python" vs "things you need to write a Python library".


For example, the fact that operators can do different things for 
different values (e.g. int, str, list, pathlib) would be in the first 
category, while the details of how to override operators can wait for 
the second.
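Concretely, the first-category fact is just that the same operator is polymorphic across types (a minimal sketch):

```python
from pathlib import Path

# The same operator means different things for different values:
assert 2 + 3 == 5                       # numeric addition
assert "py" + "thon" == "python"        # string concatenation
assert [1] + [2] == [1, 2]              # list concatenation
assert Path("src") / "main.py" == Path("src/main.py")  # path joining
```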


I see many people suffer from content that goes too deep too quickly, 
and I'm more and more convinced over time that this is the right place 
to draw a separator for Python. Many devs are just using the language 
and never implementing a class (or often, even writing a function). 
Having a canonical tutorial to get these users through this stage first 
before going deeper would be great.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/E7Y5MB4JJEB3VW2J24HA7JQZH6JRAOMO/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Speeding up CPython

2020-10-22 Thread Steve Dower

On 22Oct2020 1341, Marco Sulla wrote:
On Thu, 22 Oct 2020 at 14:25, Mark Shannon wrote:


MSVC seems to do better jump fusion than GCC.


Maybe I'm wrong, since I only take a look at dict, tuple and set C code, 
but it does not seem to me that there are more than 1-2 GOTOs in every 
CPython function, and they can't be merged.


There are vastly more jumps generated than what you see in the source 
code. You'll need to compare assembly language to get a proper read on this.


But I don't think that's necessary, since processors do other kinds of 
clever things with jumps anyway, that can variously improve/degrade 
performance from what the compilers generate.


Benchmarks on consistent hardware are what matter, not speculation about 
generated code.
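In that spirit, a minimal micro-benchmark sketch (illustrative only; the numbers it prints are machine-dependent and say nothing about the jump-fusion question itself):

```python
import timeit

# Measure rather than speculate: time two spellings of the same
# operation on the same machine, same interpreter.
setup = "d = {i: i for i in range(1000)}"
t_get = timeit.timeit("d.get(500)", setup=setup, number=100_000)
t_idx = timeit.timeit("d[500]", setup=setup, number=100_000)
print(f"d.get(): {t_get:.4f}s  d[...]: {t_idx:.4f}s")
```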


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/JEEZNIP4TPLIA2ZS3QIRWZGXBKPDOVBF/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 640: Unused variable syntax.

2020-10-20 Thread Steve Dower

On 20Oct2020 1309, Thomas Wouters wrote:
The reason for this PEP is that pattern matching will make '_' (but not 
any other names) have the behaviour suggested in this PEP, but *only* in 
pattern matching.


Then why is this PEP proposing a different syntax?

At the very least, wait for pattern matching to get in before proposing 
an expansion, so then you won't be caught out suggesting the wrong thing :)


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/RXIACSYMIP22ILGYYFVPKHIUH62N7233/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 640: Unused variable syntax.

2020-10-20 Thread Steve Dower

On 20Oct2020 1021, Steven D'Aprano wrote:

In my opinion, having a convention to treat certain variables as
"unused" is great (I'm partial to `__` myself, to avoid clobbering the
special variable `_` in the REPL). But having that be a pseudo-variable
which is *actually* unused and unuseable strikes me as being an
attractive nuisance.


I agree entirely. The convention is fine, and the workaround for when 
you don't want to overwrite a legitimate `_` variable is also fine.


Making `_` enough of an official convention (but how?) that linting 
tools stop warning about it (e.g. type checkers might warn about 
multiple conflicting assignments) seems like an overall happier path, 
that neither makes existing code forwards-incompatible nor future code 
backwards-incompatible.


I don't think we have to codify every emergent coding pattern in syntax.

Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/6GV3KPPPRNF5ISZK4YSAIUUTCQRMX77H/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: os.scandir bug in Windows?

2020-10-20 Thread Steve Dower

On 20Oct2020 0520, Rob Cliffe wrote:

On 19/10/2020 12:42, Steve Dower wrote:

On 15Oct2020 2239, Rob Cliffe via Python-Dev wrote:
TLDR: In os.scandir directory entries, atime is always a copy of 
mtime rather than the actual access time.


Correction - os.stat() updates the access time to _now_, while 
os.scandir() returns the last access time without updating it.


Eryk replied with a deeper explanation of the cause, but fundamentally 
this is what you are seeing.


Feel free to file a bug, but we'll likely only add a vague note to the 
docs about how Windows works here rather than changing anything. If 
anything, we should probably fix os.stat() to avoid updating the 
access time so that both functions behave the same, but that might be 
too complicated.


Cheers,
Steve

Sorry - what you say does not match the behaviour I observe, which is that


Yes, I posted a correction already (immediately after sending the first 
email).


What you are seeing is what Windows decided was the best approach. If 
you want to avoid that, os.stat() will get the latest available 
information. But I don't want to penalise people who don't need it by 
slowing down their scandir calls unnecessarily.


A documentation patch to make this difference between os.stat() and 
DirEntry even clearer would be fine.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/NAR7LTW2XMBKAPKLVBQQFVK6EA4ZWQZP/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: os.scandir bug in Windows?

2020-10-19 Thread Steve Dower

On 19Oct2020 1846, Eryk Sun wrote:

On 10/19/20, Steve Dower wrote:

On 15Oct2020 2239, Rob Cliffe via Python-Dev wrote:

TLDR: In os.scandir directory entries, atime is always a copy of mtime
rather than the actual access time.


Correction - os.stat() updates the access time to _now_, while
os.scandir() returns the last access time without updating it.


os.stat() shouldn't affect st_atime because it doesn't access the file
data. That has me curious if it can be reproduced.

With NTFS in Windows 10, I'd expect the os.stat() st_atime to change
immediately when the file data is read or modified. With other
filesystems, it may not be updated until the kernel file object that
was used to access the file's data is closed.


I thought I got my self-correction fired off quickly enough to save you 
from writing this :)



For details, download the [MS-FSA] PDF [1] and look for all references
to the following sections:



[1] 
https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-fsa/860b1516-c452-47b4-bdbc-625d344e2041


Thanks for the detailed reference.


Going back to my initial message, I can't stress enough that this
problem is at its worst when a file has multiple hardlinks. If a
particular link in a directory wasn't the last link used to access the
file, its duplicated metadata may have the wrong file size, access
time, modify time, and change time (the latter is not reported by
Python). As is, for the current implementation, I'd only rely on the
basic attributes such as whether it's a directory or reparse point
(symlink, mountpoint, etc) when using scandir() to quickly process a
directory. For reliable stat information, call os.stat().

I do think, however, that os.scandir() can be improved in Windows
without significant performance loss if it calls GetFileAttributesExW
to get st_file_attributes, st_size, st_ctime (create time), st_mtime,
and st_atime. This API call is relatively fast because it doesn't
require opening the file via CreateFileW, which is one of the more
expensive operations in os.stat(). But I haven't tried modifying
scandir() to benchmark it.


Resolving the path is the most expensive part, even if the file is not 
opened (I've been working with the NTFS team on this area, and we've 
been benchmarking/analysing all of it). There are a few improvements 
coming across the board, but I'd much rather just emphasise that 
os.scandir() is as fast as we can manage using cached information 
(including as cached by the OS). Otherwise we prevent people from using 
the fastest available option when they can, if they don't need the 
additional information/accuracy.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/MMRMLWGEV2ZGIACXQTSEQC6TPWGL3UZ3/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: os.scandir bug in Windows?

2020-10-19 Thread Steve Dower

On 19Oct2020 1652, Gregory P. Smith wrote:
I'm sure this is covered in MSDN.  Linking to that if it has it in a 
concise explanation would make sense from a note in our docs.


Probably unlikely :) I'm pretty sure this started "perfect" and was then 
wound back to improve performance. But it's almost certainly an option 
somewhere, which means you can't rely on it being either true nor false. 
You just have to be explicit for certain pieces of information.


If I'm understanding Steve correctly this is due to Windows/NTFS storing 
the access time potentially redundantly in two different places. One 
within the directory entry itself and one with the file's own metadata.  
Those of us with a traditional posix filesystem background may raise 
eyeballs at this duplication, seeing a directory as a place that merely 
maps names to inodes with the inode structure (equiv: file entry 
metadata) being the sole source of truth.  Which ones get updated when 
and by what actions is up to the OS.


So yes, just document the "quirk" as an intended OS behavior.  This is 
one reason scandir() can return additional information on windows vs 
what it can return on posix.  The entire point of scandir() is to return 
as much as possible from the directory without triggering reads of the 
inodes/file-entry-metadata. :)


Yeah, I'd document it as a quirk of scandir. There's also a race where 
if you scandir(), then someone touches the file, then you look at the 
cached stat you get the wrong information too (an any platform). Making 
clearer that it's for non-time sensitive queries is most accurate, 
though we could also give an example of "access times may not be up to 
date depending on OS-level caching" without committing us to being 
responsible for OS decisions.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/EBWUDEQEPRWJN36FLUUJQWP5EWLPWRPD/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: os.scandir bug in Windows?

2020-10-19 Thread Steve Dower

On 19Oct2020 1242, Steve Dower wrote:

On 15Oct2020 2239, Rob Cliffe via Python-Dev wrote:
TLDR: In os.scandir directory entries, atime is always a copy of mtime 
rather than the actual access time.


Correction - os.stat() updates the access time to _now_, while 
os.scandir() returns the last access time without updating it.


Let me correct myself first :)

*Windows* has decided not to update file access time metadata *in 
directory entries* on reads. os.stat() always[1] looks at the file entry 
metadata, while os.scandir() always looks at the directory entry metadata.


My suggested approach still applies, other than the bit where we might 
fix os.stat(). The best we can do is regress os.scandir() to have 
similarly poor performance, but the best *you* can do is use os.stat() 
for accurate timings when files might be being modified while your 
program is running, and don't do it when you just need names/kinds (and 
I'm okay adding that note to the docs).
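The two code paths side by side (a sketch; on Windows, DirEntry.stat() is served from the cached listing metadata without an extra system call, so its access time can lag behind what os.stat() reports):

```python
import os
import pathlib
import tempfile

with tempfile.TemporaryDirectory() as d:
    p = pathlib.Path(d, "f.txt")
    p.write_text("hello")
    entry = next(e for e in os.scandir(d) if e.name == "f.txt")
    cached = entry.stat()   # fast path: may reuse directory-entry metadata
    fresh = os.stat(p)      # authoritative for the file's current metadata
    # Sizes agree here; st_atime is where the two can diverge on Windows.
    assert cached.st_size == fresh.st_size == 5
```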


Cheers,
Steve

[1]: With some fallback to directory entries in exceptional cases that 
don't apply here.

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/QHHJFYEDBANW7EC3JOUFE7BQRT5ILL4O/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: os.scandir bug in Windows?

2020-10-19 Thread Steve Dower

On 15Oct2020 2239, Rob Cliffe via Python-Dev wrote:
TLDR: In os.scandir directory entries, atime is always a copy of mtime 
rather than the actual access time.


Correction - os.stat() updates the access time to _now_, while 
os.scandir() returns the last access time without updating it.


Eryk replied with a deeper explanation of the cause, but fundamentally 
this is what you are seeing.


Feel free to file a bug, but we'll likely only add a vague note to the 
docs about how Windows works here rather than changing anything. If 
anything, we should probably fix os.stat() to avoid updating the access 
time so that both functions behave the same, but that might be too 
complicated.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/NGMVB7GWDBCPYHL4IND2LBZ2QPXLWRAX/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 632: Deprecate distutils module

2020-09-11 Thread Steve Dower

On 9/11/2020 12:24 PM, Matthias Klose wrote:

On 9/4/20 1:28 PM, Steve Dower wrote:

Hi all.

setuptools has recently adopted the entire codebase of the distutils module, so
that they will be able to make improvements directly without having to rely on
patching the standard library. As a result, we can now move forward with
official deprecation (in 3.10) and removal (in 3.12) (noting that the distutils
docs already recommend switching to setuptools).


it would be nice to already announce that with the 3.9 release.


I think the announcement will have an immediate effect regardless of 
what versions it's tied to.



At the 2018 Language summit, I had a lightning talk to report about the
experience splitting out distutils into a separate binary package, showing some
unexpected usages:

Unexpected / Creative usages:

  - distutils.version
Used “everywhere” ...

  - distutils.spawn: find_executable
Replace with shutil.which

  - distutils.util: strtobool
Rewrite, no equivalent in the stdlib?

  - distutils.sysconfig:
Mostly replaced by sysconfig


(Aside - I've put out a call on Discourse for people to contribute the 
parts of distutils that are used but wouldn't be suitable for using 
setuptools instead. These are good, but don't seem drastic enough to not 
deprecate the module. Thoughts?)
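Two of the replacements from the list above are simple enough to sketch inline (the strtobool rewrite is illustrative, not an official drop-in; unlike distutils' version it returns bool rather than 1/0):

```python
import shutil

# distutils.spawn.find_executable -> shutil.which
print(shutil.which("python3"))   # absolute path, or None if not found

# distutils.util.strtobool has no stdlib equivalent; a minimal rewrite:
def strtobool(val: str) -> bool:
    v = val.lower()
    if v in ("y", "yes", "t", "true", "on", "1"):
        return True
    if v in ("n", "no", "f", "false", "off", "0"):
        return False
    raise ValueError(f"invalid truth value {val!r}")

assert strtobool("Yes") is True
assert strtobool("0") is False
```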



It really would be nice to have recommended replacements, especially for the
version stuff (packaging?)


Considering that the PEP won't be updated after acceptance, and all of 
these recommendations would be to third-party packages (except possibly 
strtobool), how relevant would you predict these to be in three years time?


Setuptools has already been the official recommendation for the whole 
module for a long time, and their page at 
https://setuptools.readthedocs.io/en/latest/distutils-legacy.html 
already lists a few replacements (and I'm sure they'll accept 
contributions :) ). So I'd prefer to point towards that as the most 
up-to-date source of recommendations. Thoughts?


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/HDCK3LFOZWDNWLH3NZ2QFNYUPNJAGNP5/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 632: Deprecate distutils module

2020-09-08 Thread Steve Dower

On 07Sep2020 1602, Stefan Krah wrote:

I'm under the impression that distutils has effectively been frozen for
the last decade, except for the substantial improvements you made for the
MSVC part.

For Unix, no one has addressed e.g. C++ support. The underlying reason
has always been that we cannot change anything lest it breaks someone's
package.


So I have some trouble understanding why we have exercised that kind
of restraint (I might have committed a very small C++ patch otherwise)
if we can now break everything at once.


Others have contributed a range of little fixes over time. Maybe they 
blur into the background noise of regular maintenance, but it does add up.


The main reason we end up needing to fix things is because platforms 
change things. Since we don't control _when_ platforms will change 
things, we either do a single controlled break at a planned Python 
version, or we allow it to break for different users at different times 
that are not detectable from version number alone. It won't take long 
for an autoconf-style approach to be necessary for figuring out how to 
use distutils (though hopefully anyone who sees that as the solution 
will also find https://packaging.python.org/ and find better solutions ;) )



Unless you're offering to take over distutils maintenance? In which case,
I'll happily withdraw the PEP :)


No, thanks. :)


Okay, maybe it is a little bit more than background noise ;)


I've looked at the log, most maintenance patches are from a variety of
active core devs (this includes, of course, the MSVC patches from you).

Will they submit patches to setuptools from now on?


If you look at the setuptools history, a variety of contributors have 
submitted much the same (and often better) fixes there. I expect I 
likely will as well, inasmuch as they are needed to help Windows users 
be successful, though I also have my own backend that will likely be 
where big features that interest me end up ;)


It's easier and more satisfying to submit the patches to setuptools, as 
the release comes out much sooner (and can be installed on earlier 
versions), and it will only get easier when _all_ the code can be fixed 
there. Right now, one of the biggest pains there is having to do loads 
of careful monkeypatching to fix one poor choice in the standard library.


Cheers,
Steve
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/7MESZKL5KFX6YBPOZNYTE5PS4PE5OJZM/


[Python-Dev] Re: PEP 632: Deprecate distutils module

2020-09-07 Thread Steve Dower

On 07Sep2020 1424, Stefan Krah wrote:

On Mon, Sep 07, 2020 at 11:43:41AM +0100, Steve Dower wrote:

Rest assured, I am very aware of air-gapped and limited network systems, as
well as corporate policies and processes. Having distutils in the standard
library does not help with any of these.


Of course it helps.  You can develop extensions without any third party
packages or install them.

Same situation if you are on mobile Internet or in a cafe without Internet
and you want to try something out.


Or if you moved and you don't have cable Internet yet.  Or if you are in
a country where there is no cable Internet.


Air-gapped systems were just an illustration of the problem. I did not
anticipate that people would take it as the centerpiece of my arguments.


Sorry, I didn't mean to imply that - I used "limited network" to capture 
the rest. I make exactly the same arguments at work from your side for 
many other situations, so if you'd like I can give you an even more 
exhaustive list of scenarios where we can't rely on highly available 
internet :)


And yet despite that, I think in this case you're clutching at straws. 
Sure, you can develop extensions without any third party packages, but 
you can't develop them with *supported* first party packages.


Unless you're offering to take over distutils maintenance? In which 
case, I'll happily withdraw the PEP :)


Cheers,
Steve
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/2HBDAXQA6HQF4QKSKAVUZLXZXSZ5SU75/


[Python-Dev] Re: PEP 632: Deprecate distutils module

2020-09-07 Thread Steve Dower

Thanks everyone for your input so far.

Rest assured, I am very aware of air-gapped and limited network systems, 
as well as corporate policies and processes. Having distutils in the 
standard library does not help with any of these.


Do not forget that pip (and presently, though not permanently, 
setuptools) are bundled with a recommended CPython distribution through 
ensurepip. I call out "recommended" because distributors (including 
those who are transferring CPython into an air-gapped system) can choose 
to do whatever they like - including leaving out distutils already!


As for defining standards for installation, the sysconfig module 
specifies the paths to install things (though even those can be taken 
with a grain of salt, as it's really the import system and site module 
that determine where things should be installed), but we already defer 
to third parties to do the actual installation (including, yes, those 
who use distutils to help define their install script). All other 
packaging and distribution specifications are listed at 
https://packaging.python.org/specifications/
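A quick way to see the install paths sysconfig specifies (the set of keys is standard; the actual values depend on your interpreter and platform):

```python
import sysconfig

# The install-scheme paths for the running interpreter: 'purelib' is
# where pure-Python packages land, 'platlib' is for extension modules.
paths = sysconfig.get_paths()
for key in ("purelib", "platlib", "scripts", "include"):
    print(key, "->", paths[key])
```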


Also, distutils has never been a "standard" for how to define a package 
(i.e. write a setup.py), just as argparse is not a "standard" for how to 
define a CLI. It's always been possible to do it in other ways, and the 
current standard definition for this is now PEP 517. Since distutils is 
not compatible with PEP 517, it is explicitly *breaking* the standard.
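For reference, the PEP 517/518 way to declare how a package is built is a ``[build-system]`` table in ``pyproject.toml``; a minimal sketch using setuptools as the build backend looks like this:

```toml
[build-system]
requires = ["setuptools>=40.8.0", "wheel"]
build-backend = "setuptools.build_meta"
```

With this in place, frontends like pip invoke the declared backend rather than running ``setup.py`` against the stdlib distutils.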


I also wanted to call out this excellent point from Emily:


If you can update to a breaking Python version, but aren't allowed one single 
point version of an external module, you have a process problem.


This is absolutely the case. CPython, and the core team, cannot take 
responsibility for bad policies, and (arguably) should not help users 
work around those policies. It is better to provide useful advice to 
organisations implementing these policies to help them do things 
properly (this is a large part of what I get paid to do these days), 
because otherwise they very quickly just dictate that Python cannot be 
used at all.


Cheers,
Steve
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/V2JKVPEDQPJJHF5W4MJRER5FBH3I4WLU/


[Python-Dev] PEP 632: Deprecate distutils module

2020-09-04 Thread Steve Dower

Hi all.

setuptools has recently adopted the entire codebase of the distutils 
module, so that they will be able to make improvements directly without 
having to rely on patching the standard library. As a result, we can now 
move forward with official deprecation (in 3.10) and removal (in 3.12) 
(noting that the distutils docs already recommend switching to setuptools).


Full text and discussion at 
https://discuss.python.org/t/pep-632-deprecate-distutils-module/5134


I'm including the original text below, but won't be responding to all 
discussions here (though I'll periodically check in and skim read it, 
assuming things don't go way off track).


Also be aware that I already have some minor changes lined up that are 
not in this text. Refer to the discussion on Discourse if you need to 
see those.


Cheers,
Steve

---

PEP: 632
Title: Deprecate distutils module
Author: Steve Dower 
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 03-Sep-2020
Post-History:


Abstract


The distutils module [1]_ has for a long time recommended using the
setuptools package [2]_ instead. Setuptools has recently integrated a
complete copy of distutils and is no longer dependent on the standard
library [3]_. Pip has silently replaced distutils with setuptools when
building packages for a long time already. It is time to remove it
from the (public part of the) standard library.


Motivation
==

distutils [1]_ is a largely undocumented and unmaintained collection
of utilities for packaging and distributing Python packages, including
compilation of native extension modules. It defines a configuration
format that describes a Python distribution and provides the tools to
convert a directory of source code into a source distribution, and
some forms of binary distribution. Because of its place in the
standard library, many updates can only be released with a major
release, and users cannot rely on particular fixes being available.

setuptools [2]_ is a better documented and well maintained enhancement
based on distutils. While it provides very similar functionality, it
is much better able to support users on earlier Python releases, and
can respond to bug reports more quickly. A number of platform-specific
enhancements already exist in setuptools that have not been added to
distutils, and there has been a long-standing recommendation in the
distutils documentation to prefer setuptools.

Historically, setuptools has extended distutils using subclassing and
monkeypatching, but has now taken a copy of the underlying code. [3]_
As a result, the second-to-last major dependency on distutils is gone and
there is no need to keep it in the standard library.

The final dependency on distutils is CPython itself, which uses it to
build the native extension modules in the standard library (except on
Windows). Because this is a CPython build-time dependency, it is
possible to continue to use distutils for this specific case without
it being part of the standard library.

Deprecation and removal will make it obvious that issues should be
fixed in the setuptools project, and will reduce a source of bug
reports and test maintenance that is unnecessary. It will also help
promote the development of alternative build backends, which can now
be supported more easily thanks to PEP 517.


Specification
=

In Python 3.10 and 3.11, distutils will be formally marked as
deprecated. All known issues will be closed at this time.
``import distutils`` will raise a deprecation warning.

During Python 3.10 and 3.11, uses of distutils within the standard
library may change to use alternative APIs.

In Python 3.12, distutils will no longer be installed by ``make
install`` or any of the first-party distributions. Third-party
redistributors should no longer include distutils in their bundles or
repositories.

This PEP makes no specification on migrating the parts of the CPython
build process that currently use distutils. Depending on
contributions, this migration may occur at any time.

After Python 3.12 development has started and the CPython build process no
longer depends on distutils being in the standard library, the entire
``Lib/distutils`` directory and ``Lib/test/test_distutils.py`` file
will be removed from the repository.

Other references to distutils will be cleaned up. As of Python 3.9's
initial release, the following modules have references in code or
comments:

* Lib/ctypes/util.py
* Lib/site.py
* Lib/sysconfig.py
* Lib/_aix_support.py
* Lib/_bootsubprocess.py
* Lib/_osx_support.py
* Modules/_decimal/tests/formathelper.py

As the distutils code is already included in setuptools, there is no
need to republish it in any other form. Those who require access to
the functionality should use setuptools or an alternative build
backend.

Backwards Compatibility
===

Code that imports distutils will no longer work from Python 3.12.

The suggested migration path is to use the equivalent (though

[Python-Dev] Re: Procedure for trivial PRs

2020-08-13 Thread Steve Dower

On 8/13/2020 9:56 PM, Mariatta wrote:

  a) is it ok to touch 3.9, as it's in rc1?


Yeah bug fixes are accepted to the maintenance branches. I think your PR 
does count as documentation bug fix, so it should be ok to backport to 3.9


At this stage, changes to the 3.9 branch won't go into the 3.9.0 release 
unless you specifically ask Łukasz to include them (though he does have 
a habit of acting on his own initiative and including them anyway :) ).


Which means you can freely merge anything you like for 3.9.1, provided 
it's okay for a maintenance branch (in your judgement, which is part of 
being a core dev, but feel free to ask others for advice if you're not 
sure).


Cheers,
Steve
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/5RKZ5CP2SSCD43TUZLHN2F72Z7X4XMYX/


[Python-Dev] Re: PEP 622 version 2 (Structural Pattern Matching)

2020-08-07 Thread Steve Dower

On 07Aug2020 2133, Joao S. O. Bueno wrote:

Enough cheaptalk - links are here:

tests:
https://github.com/jsbueno/terminedia/blob/fa5ac012a7b93a2abe26ff6ca41dbd5f5449cb0b/tests/test_utils.py#L356

Branch comparison for the match/case version:
https://github.com/jsbueno/terminedia/compare/patma


I haven't been following this thread too closely, but that looks pretty 
nice to me. Not obvious enough for me to write my own just from reading 
an example, and I'd hesitate before trying to modify it at all, but I 
can at least read the pre- and post-conditions more easily than in the 
original.



(all said, I think I still miss a way to mark variables that
are assigned in the case clauses, just for the record :-)  )


Yeah, the implicit variable assignments seem like the most confusing bit 
(based solely on looking at just one example). I think I'd be happy 
enough just knowing that "kw" matches the pattern, and then manually 
extracting individual values from it. (But I guess for that we'd only 
need a fancy "if_match(kw, 'expression')" function... hmm...)


Cheers,
Steve
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/AU6YADGMSOYOIPKNKFWFGC5XEEVOYDO7/


[Python-Dev] Re: Fwd: [pypi-announce] upgrade to pip 20.2 -- plus changes coming in 20.3

2020-08-03 Thread Steve Dower
Thanks. It looks like we can do it later this week and make the next 
round of releases. Please let us know asap if anything comes up that you 
wouldn't want to be released.


Cheers,
Steve

On 01Aug2020 1632, Sumana Harihareswara wrote:

Steve Dower asked:

Do you think we should be updating the version of pip bundled with 
Python 3.9 at this stage (for the first RC)?


Similarly, is there a need to update Python 3.8 for its next release?


Answered now in 
https://github.com/pypa/pip/issues/6536#issuecomment-666715283 -- yes, 
please do. However, you may want to wait till Tuesday or so for our 
bugfix release 
https://github.com/pypa/pip/issues/8511#issuecomment-60644 .



Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/AAJVOQELS4KEHRCMSB2TSSHSZSSRLJSI/


[Python-Dev] Re: Fwd: [pypi-announce] upgrade to pip 20.2 -- plus changes coming in 20.3

2020-07-30 Thread Steve Dower
Do you think we should be updating the version of pip bundled with 
Python 3.9 at this stage (for the first RC)?


Similarly, is there a need to update Python 3.8 for its next release?

Thanks,
Steve

On 30Jul2020 2119, Sumana Harihareswara wrote:
A new pip is out. Please see below, upgrade, and let us know if you/your 
users start to have trouble. In particular, we need your feedback on the 
beta of the new dependency resolver, because we want to make it the 
default in the October release.


best,
Sumana Harihareswara, pip project manager


 Forwarded Message 
Subject: [pypi-announce] upgrade to pip 20.2 -- plus changes coming in 20.3
Date: Thu, 30 Jul 2020 11:24:58 -0400
From: Sumana Harihareswara 
Reply-To: distutils-...@python.org
Organization: Changeset Consulting
To: pypi-annou...@python.org

On behalf of the Python Packaging Authority, I am pleased to announce 
the release of pip 20.2. Please upgrade for speed improvements, bug 
fixes, and better logging. You can install it by running python -m pip 
install --upgrade pip.


We make major releases each quarter, so this is the first new release 
since 20.1 in April.


NOTICE: This release includes the beta of the next-generation dependency 
resolver. It is significantly stricter and more consistent when it 
receives incompatible instructions, and reduces support for certain 
kinds of constraints files, so some workarounds and workflows may break. 
Please test it with the `--use-feature=2020-resolver` flag. Please see 
our guide on how to test and migrate, and how to report issues.



The new dependency resolver is *off by default* because it is *not yet
ready for everyday use*.

For release highlights, thank-yous, and the full changelog, please see 
the pip release notes.


Future:

We plan to make pip's next quarterly release, 20.3, in October 2020. We 
are preparing to change the default dependency resolution behavior and 
make the new resolver the default in pip 20.3.



Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/FNH5SOS7HVK26BKCZZJZZM4OHCNTW7UB/


[Python-Dev] Re: PEP 626: Precise line numbers for debugging and other tools.

2020-07-28 Thread Steve Dower

On 25Jul2020 2014, Jim J. Jewett wrote:

But it sounds as though you are saying the benefit is irrelevant; it is just 
inherently too expensive to ask programs that are already dealing with 
internals and trying to optimize performance to make a mechanical change from:
 code.magic_attrname
to:
 magicdict[code]

What have I missed?


You've missed that debugging and profiling tools that operate purely on 
native memory can't execute Python code, so the "magic" has to be easily 
representable in C such that it can be copied into whichever language is 
being used (whether it's C, C++, C#, Rust, or something else).


Cheers,
Steve
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/PE44CTX6NG6KOUPIJUFRXJHNFSFMN2TK/


[Python-Dev] Re: PEP 626: Precise line numbers for debugging and other tools.

2020-07-23 Thread Steve Dower

On 22Jul2020 1319, Mark Shannon wrote:

On 21/07/2020 9:46 pm, Gregory P. Smith wrote:


Q: Why can't we have the information about the entire span of lines 
rather than consider a definition to be a "line"?


Pretty much every profiler, coverage tool, and debugger ever expects 
lines to be natural numbers, not ranges of numbers.

A lot of tooling would need to be changed.


As someone who worked on apparently the only debugger that expects 
_character_ ranges, rather than a simple line number, I would love to 
keep full mapping information somewhere.


We experimented with some stack analysis to see if we could tell the 
difference between being inside the list comprehension vs. outside the 
comprehension, or which of the nested comprehension is currently 
running. But it turned out to be too much trouble.


An alternative to lnotab that includes the full line/column range for 
the expression, presumably taken from a particular type of node in the 
AST, would be great. But I think omitting even line ranges at this stage 
would be a missed opportunity, since we're breaking non-Python debuggers 
anyway.
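Today's one-line-per-offset mapping is easy to inspect from Python itself; a sketch using `dis.findlinestarts`, which reads the code object's line table:

```python
import dis

def sample():
    a = 1
    b = [x * a
         for x in range(3)]
    return b

# Each entry maps a bytecode offset to the single source line that starts
# there -- note there are no line *ranges*, which is exactly the
# limitation being discussed.
for offset, line in dis.findlinestarts(sample.__code__):
    print(offset, line)
```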


Cheers,
Steve
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/MKX3TW2DCQ5XCOWP2C4XBREENQKFIFH3/


[Python-Dev] Re: How to customize CPython to a minimal set

2020-07-21 Thread Steve Dower

On 21Jul2020 0633, Huang, Yang wrote:

Yes. MicroPython is also under consideration.

But sqlite3 is the first usage. There would also be some additional 
packages like numpy, scipy... not sure whether MicroPython supports them well?


Or is there a feasible way to strip CPython?


Only by manually removing modules from your own build. However, if you 
do that but want to continue using third-party packages, you'll find 
that there is very little you can remove (and nothing of significance).


The most interesting modules to omit are ssl and ctypes, because of the 
exposure to likely security issues (via OpenSSL and libffi), but 
practically every module out there will have some sort of dependency on 
these. So you'll end up only removing small/niche modules that might 
save you a couple of kilobytes of Python code.


Of the modules you suggested initially, I think you'll find that you 
need all of them for SQLite, let alone more complex libraries such as numpy.


In theory, we could have a smaller set of modules, but in practice there 
are so many cross-dependencies already out there that there isn't much 
to gain right now. It would take a bit of an ecosystem "reset" (i.e. 
develop completely new popular libraries with more narrowly scoped 
dependencies) to narrow things down at this point, and there's not much 
chance of that happening.


Cheers,
Steve
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/TNFG5E6MRLZV6VAYNKZKWOMSBY2WE6UH/


[Python-Dev] Re: os.add_dll_directory and DLL search order

2020-06-22 Thread Steve Dower

On 22Jun2020 1646, Steve Dower wrote:
DLLs should not be in the search path at all - it's searched by sys.path 
when importing .pyd files, which are loaded by absolute path and their 
dependencies found adjacent.


To clarify this - by "DLLs" I meant the DLLs directory, not DLLs in 
general (hence the singular "it's").


Cheers,
Steve
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/YIIMGEADBMT2CTOMIOPNNWZZ7UBE5JWC/


[Python-Dev] Re: os.add_dll_directory and DLL search order

2020-06-22 Thread Steve Dower

On 22Jun2020 1039, Seth G wrote:

However, there is one feature of using the Windows PATH that I can't seem to 
replicate with add_dll_directory.
The MapServer DLL builds are linked to sqlite3 3.24.0, whereas Python 3.8.2 is 
linked to 2.6.0. Building with matching versions is not something I can easily 
change.

When using Windows PATH the folder with the newer version could be added to the 
front of the list to take priority and import worked correctly. This does not 
seem to be possible with add_dll_directory - the Python sqlite3.dll in 
C:\Python38\DLLs always takes priority leading to:

ImportError: DLL load failed while importing _mapscript: The specified 
procedure could not be found.

I presume I can't remove the C:\Python38\DLLs path, is there another solution 
to this issue?


DLLs should not be in the search path at all - it's searched by sys.path 
when importing .pyd files, which are loaded by absolute path and their 
dependencies found adjacent.


What is likely happening here is that _sqlite3.pyd is being imported 
before _mapscript, and so there is already a SQLITE3 module in memory. 
Like Python, Windows will not attempt to import a second module with the 
same name, but will return the original one.


So your best option here is probably to rebuild sqlite3.dll with as much 
of the version number (or some unique string) in the name as you need. 
Or you can statically link it into your extension module, assuming you 
aren't relying on shared state with other modules in your package. The 
latter is probably easier.
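A sketch of the intended pattern for private DLL directories (Windows-only, Python 3.8+; the directory and module names in the usage comment are made up for illustration):

```python
import importlib
import os
import sys

def import_with_private_dlls(dll_dir, module_name):
    """Make a vendored DLL directory visible only while importing one
    extension module, instead of prepending it to PATH globally."""
    if sys.platform != "win32":
        raise RuntimeError("os.add_dll_directory is Windows-only")
    # The returned handle removes the directory again on __exit__, so
    # the search-path change is scoped to this one import.
    with os.add_dll_directory(dll_dir):
        return importlib.import_module(module_name)

# Hypothetical usage:
# mapscript = import_with_private_dlls(r"C:\mapserver\bin", "_mapscript")
```

Note this does not help once a DLL with the same name is already loaded, per the explanation above; renaming or static linking is still needed for that case.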


Cheers,
Steve
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/PMBOWXNIXBGPPOOTDWU3LGR6QSP73AXW/


[Python-Dev] Re: Design and Architecture of python

2020-06-17 Thread Steve Dower
If you're not willing to slog through the source code and many PEPs and 
past discussions, Anthony Shaw has done it for you and written a book: 
https://realpython.com/products/cpython-internals-book/


All the other write-ups I'm aware of are very dated, so I don't have any 
free suggestions I'm afraid.


Cheers,
Steve

On 17Jun2020 1437, Shakil Khan wrote:

I am well versed in Python and now trying to understand and learn the Design 
philosophy of Python as a Programming Language.
Is there any document related to Design/Archoitecture of Python language itself 
where I can learn about implementation of garbage collection,
top level of python objects and different other features.

Bottom line is I want to understand and get hold of the Python source code 
itself in C. Any document or pointer towards that would be very much 
appreciated.

Thanks
Shakil

Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/AA2RRW4RMFM2KZ7V2CAATWVVI4DQGPZZ/


[Python-Dev] Re: When can we remove wchar_t* cache from string?

2020-06-16 Thread Steve Dower

On 16Jun2020 1641, Inada Naoki wrote:

* This change doesn't affect pure Python packages.
* Most of the rest use Cython.  Since I already reported an issue to Cython,
   regenerating with a new Cython release fixes them.


The precedent set in our last release with tp_print was that 
regenerating Cython releases was too much to ask.


Unless we're going to overrule that immediately, we should leave 
everything there and give users/developers a full release cycle with 
updated Cython version to make new releases without causing any breakage.


Cheers,
Steve
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/KJV2FT367LV62WO4A3VXTRCYNMSIF53K/


[Python-Dev] Re: My take on multiple interpreters (Was: Should we be making so many changes in pursuit of PEP 554?)

2020-06-12 Thread Steve Dower

On 12Jun2020 1008, Paul Moore wrote:

On Fri, 12 Jun 2020 at 09:47, Mark Shannon wrote:

Starting a new process is cheap. On my machine, starting a new Python
process takes under 1ms and uses a few Mbytes.


Is that on Windows or Unix? Traditionally, process creation has been
costly on Windows, which is why threads, and in-process solutions in
general, tend to be more common on that platform. I haven't done
experiments recently, but I do tend to avoid multiprocess-type
solutions on Windows "just in case". I know that evaluating a new
feature based on unsubstantiated assumptions informed by "it used to
be like this" is ill-advised, but so is assuming that everything will
be OK based on experience on a single platform :-)


It's still like that, though I'm actively involved in trying to get it 
improved. However, it's unlikely at this point to ever get to 
equivalence with Unix - Windows just sets up too many features 
(security, isolation, etc.) at the process boundary rather than other 
parts of the lifecycle.
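The quoted startup cost is easy to measure yourself; a rough sketch (the numbers vary a lot by platform, and by how much of the stdlib the child process imports):

```python
import subprocess
import sys
import time

def time_spawn(n=5):
    """Average wall-clock time to start and exit a bare interpreter.
    -S skips the site import and -E ignores env vars, to approximate
    the floor cost of process creation plus interpreter startup."""
    start = time.perf_counter()
    for _ in range(n):
        subprocess.run([sys.executable, "-S", "-E", "-c", "pass"],
                       check=True)
    return (time.perf_counter() - start) / n

print(f"{time_spawn() * 1000:.1f} ms per process")
```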


It's also *incredibly arrogant* to insist that users rewrite their 
applications to suit Python, rather than us doing the work to fit their 
needs. That's not how being a libraries/runtime developer works. Our 
responsibility is to humbly do the work that will benefit our users, not 
to find ways to put in the least possible effort and use the rest for 
blame-shifting. Some of us do much more talking than listening, and it 
does not pass unnoticed.


Cheers,
Steve
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/YYBRXYCIQE4B2NDOP3UT7AYR54DQVZCQ/


[Python-Dev] Re: PEP 618: Add Optional Length-Checking To zip

2020-06-02 Thread Steve Dower

On 02Jun2020 1430, David Mertz wrote:
On Tue, Jun 2, 2020 at 8:07 AM Chris Angelico wrote:


 > Given that the only input parameters are the iterables
themselves, it's a stretch to even consider the first two as
possibilities.

Why? I can conceivably imagine that zip(iter1, iter2, truncate=5)
would consume at most 5 elements from each iterable. It's not much of
a stretch. It doesn't happen to be what's proposed, but it's a
reasonable interpretation. (Though then the default would probably be
truncate=None to not truncate.)


This was exactly my thought, that Chris wrote very well.  I can easily 
imagine a 'truncate=5' behavior.  In fact, if it existed, it is 
something I would have used multiple times.  As is, I use islice() or a 
break inside a loop, but that hypothetical parameter might be a helpful 
convenience.


However, it is indeed NOT the current proposal or discussion.


Besides, "zip(iter1, iter2, range(5))" is the same length once you 
include the extra unpack, plus it works well with earlier versions.
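Both spellings for comparison (the `truncate=` keyword itself is hypothetical and was never added):

```python
from itertools import islice

a = [1, 2, 3, 4, 5, 6]
b = "abcdef"

# zip() stops at the shortest input, so a range acts as a length limit --
# at the cost of an extra index element in each tuple.
via_range = list(zip(a, b, range(5)))
print(via_range[0])    # (1, 'a', 0)

# islice() keeps the tuples clean and works on any iterator of pairs.
via_islice = list(islice(zip(a, b), 5))
print(via_islice[0])   # (1, 'a')
```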


Cheers,
Steve


Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/H2D7KJNOSTBORG5BT22JX3XTRG5AATO6/


[Python-Dev] Re: A PEP PR that I closed until someone discusses context

2020-05-06 Thread Steve Dower

On 06May2020 2204, joannah nanjekye wrote:
I saw a PR on the PEP repository that looked like a joke here : 
https://github.com/python/peps/pull/1396


The author can give context to re-open if it was intentional.


Given there isn't a real email address on the PEP, I'd assume it was 
meant as a joke.


I wouldn't put any more time into this.

Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/2D72HPN7XAKERRCI4TEASGOMWJNNAIGK/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Remove ctypes from uuid

2020-05-04 Thread Steve Dower
For those who haven't looked in a while, the uuid module uses ctypes to 
look up libuuid for uuid_generate_time_safe() and uuid_generate_time() 
functions.


I've run into scenarios where I need to remove this from our own builds, 
but it seems like it's probably unnecessary anyway? It's certainly a 
security risk, though in most cases the _uuid module should provide them 
anyway.


I'm proposing to remove the ctypes fallbacks: 
https://bugs.python.org/issue40501


If anyone knows that they are reliant on not having libuuid at compile 
time, but being able to load it later via ctypes, it would be great if 
you could drop by the issue and explain the scenario.
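For context, the fallback under discussion follows the usual pattern of locating a shared library at runtime via ctypes. A rough sketch of that lookup (hypothetical `find_libuuid` helper, not the actual uuid module code):

```python
import ctypes
import ctypes.util

def find_libuuid():
    """Locate libuuid at runtime, the way a ctypes fallback would.

    Returns a loaded CDLL, or None if the library is unavailable --
    which is exactly the situation a compiled _uuid module avoids.
    """
    name = ctypes.util.find_library("uuid")
    if name is None:
        return None
    try:
        return ctypes.CDLL(name)
    except OSError:
        return None
```

With the fallback removed, uuid would rely on the `_uuid` extension module being linked against libuuid at build time instead of resolving it at import time.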


Thanks,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/EVM6NU7FXKENNEVOZJWO7HLV57CLVEUE/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: killing static types (for sub-interpreters?)

2020-04-28 Thread Steve Dower

On 28Apr2020 2006, Steve Dower wrote:
(For those who aren't following it, there's a discussion with a patch 
and benchmarks going on at https://bugs.python.org/issue40255 about 
making objects individually immortal. It's more focused around 
copy-on-write, rather than subinterpreters, but the benefits apply to 
both.)


More precisely, the benefits are different, but the implementation 
provides each to each scenario.


I also want to draw attention to one specific post 
https://bugs.python.org/issue40255#msg366577 where some additional 
changes (making more objects immortal) brought the benchmarks well back 
within error margins, after initial checks found more than 10% 
regression on about 1/4 of the performance suite.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/VGIO4GB25DEDJ5FXOUBJV5JMKFDDD5JY/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: killing static types (for sub-interpreters?)

2020-04-28 Thread Steve Dower
If the object is going to live until the "end of time" 
(process/runtime/whatever) then there'll never be a need to deallocate 
it, and so there's no point counting how many references exist (and 
ditto for anything that it references).


Currently, statically allocated types include references to 
heap-allocated objects, and since different interpreters may use 
different heaps (via different allocators), this means they can't share 
the static types either. These references are for freelists, weak 
references, and some others that I forget but apparently make it 
unfixable. Those with a __dict__ object also need to be per-interpreter.


If statically allocated types were truly constant, that would be great! 
Then they could be freely reused. The same applies for many of our 
built-in non-container types too, in my opinion (and my goal would be to 
make code objects fully shareable, so you don't have to recompile/reload 
them for each new interpreter).


(For those who aren't following it, there's a discussion with a patch 
and benchmarks going on at https://bugs.python.org/issue40255 about 
making objects individually immortal. It's more focused around 
copy-on-write, rather than subinterpreters, but the benefits apply to both.)


Cheers,
Steve

On 28Apr2020 1949, Paul Ganssle wrote:

I don't know the answer to this, but what are some examples of objects
where you never change the refcount? Are these Python objects? If so,
wouldn't doing something like adding the object to a list necessarily
change its refcount, since the list implementation only knows, "I have a
reference to this object, I must increase the reference count", and it
doesn't know that the object doesn't need its reference count changed?

Best,
Paul

On 4/28/20 2:38 PM, Jim J. Jewett wrote:

Why do sub-interpreters require (separate and) heap-allocated types?

It seems types that are statically allocated are a pretty good use for immortal 
objects, where you never change the refcount ... and then I don't see why you 
need more than one copy.
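Paul's point can be observed directly from Python: a container takes its own reference to anything stored in it, so the refcount changes even when the object itself never does. A minimal demonstration with `sys.getrefcount()`:

```python
import sys

x = object()
before = sys.getrefcount(x)  # includes the temporary reference made by the call itself

lst = [x]                    # the list INCREFs the object it now holds
after = sys.getrefcount(x)

assert after == before + 1   # one extra reference, owned by the list
```

An "immortal" object would instead pin its refcount at a sentinel value so that incref/decref become no-ops, which is what makes sharing it across interpreters (or copy-on-write after fork) safe.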

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/MKMGRHYKA2WLQ6UPLJQS5TXCC7CFEN43/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Adding a "call_once" decorator to functools

2020-04-28 Thread Steve Dower

On 28Apr2020 1243, Petr Viktorin wrote:

On 2020-04-28 00:26, Steve Dower wrote:

On 27Apr2020 2311, Tom Forbes wrote:
Why not? It's a decorator, isn't it? Just make it check for number 
of arguments at decoration time and return a different object.


It’s not that it’s impossible, but the current implementation doesn’t make 
it easy 


This is the line I'd change: 
https://github.com/python/cpython/blob/cecf049673da6a24435acd1a6a3b34472b323c97/Lib/functools.py#L763 



At this point, you could inspect the user_function object and choose a 
different wrapper than _lru_cache_wrapper if it takes zero arguments. 
Though you'd likely still end up with a lot of the code being replicated.


Making a stdlib function completely change behavior based on a function 
signature feels a bit too magic to me.
I know lots of libraries do this, but I always thought of it as a cool 
little hack, good for debugging and APIs that lean toward being simple 
to use rather than robust. The explicit `call_once` feels more like API 
that needs to be supported for decades.


I've been trying to clarify whether call_once is intended to be the 
functional equivalent of lru_cache (without the stats-only mode). If 
that's not the behaviour, then I agree, magically switching to it is no 
good.


But if it's meant to be the same but just more efficient, then we 
already do that kind of thing all over the place (free lists, strings, 
empty tuple singleton, etc.). And I'd argue that it's our responsibility 
to select the best implementation automatically, as it saves libraries 
from having to pull the same tricks.
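The "inspect the user_function and choose a different wrapper" idea mentioned above can be sketched like this (hypothetical code, not the actual functools implementation; a real version would also need to handle *args, keyword arguments, and thread safety):

```python
import functools

def cached(user_function):
    """Hypothetical dispatch: a cheap sentinel cache for zero-argument
    functions, falling back to lru_cache for everything else."""
    if user_function.__code__.co_argcount == 0:
        sentinel = object()
        result = sentinel

        @functools.wraps(user_function)
        def wrapper():
            nonlocal result
            if result is sentinel:       # compute exactly once
                result = user_function()
            return result

        return wrapper
    return functools.lru_cache(maxsize=None)(user_function)

@cached
def config():
    return {"answer": 42}

assert config() is config()   # computed once, same object every time
```

The point above stands either way: as sketched, most of the surrounding bookkeeping would still have to be duplicated between the two branches.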


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/J6G33EDWEH6ZAFW4BRH2EBYG77DNX6OI/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Adding a "call_once" decorator to functools

2020-04-27 Thread Steve Dower

On 27Apr2020 2311, Tom Forbes wrote:
Why not? It's a decorator, isn't it? Just make it check for number of 
arguments at decoration time and return a different object.


It’s not that it’s impossible, but the current implementation doesn’t make 
it easy 


This is the line I'd change: 
https://github.com/python/cpython/blob/cecf049673da6a24435acd1a6a3b34472b323c97/Lib/functools.py#L763


At this point, you could inspect the user_function object and choose a 
different wrapper than _lru_cache_wrapper if it takes zero arguments. 
Though you'd likely still end up with a lot of the code being replicated.


You're probably right to go for the C implementation. If the Python 
implementation is correct, then best to leave the inefficiencies there 
and improve the already-fast version.


Looking at 
https://github.com/python/cpython/blob/master/Modules/_functoolsmodule.c 
it seems the fast path for no arguments could be slightly improved, but 
it doesn't look like it'd be much. (I'm deliberately not saying how I'd 
improve it in case you want to do it anyway as a learning exercise, and 
because I could be wrong :) )


Equally hard to say how much more efficient a new API would be, so 
unless it's written already and you have benchmarks, that's probably not 
the line of reasoning to use. An argument that people regularly get this 
wrong and can't easily get it right with what's already there is most 
compelling - see the recent removeprefix/removesuffix discussions if you 
haven't.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/HTOONP2GW3WCMWHEKHOBWNGJUYGUCACS/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Adding a "call_once" decorator to functools

2020-04-27 Thread Steve Dower

On 27Apr2020 2237, t...@tomforb.es wrote:

2. Special casing "lru_cache" to account for zero arity methods isn't trivial and we 
shouldn't endorse lru_cache as a way of achieving "call_once" semantics


Why not? It's a decorator, isn't it? Just make it check for number of 
arguments at decoration time and return a different object.


That way, people can decorate their functions now and get correct 
behaviour (I assume?) on 3.8 and earlier, and also a performance 
improvement on 3.9, without having to do any version checking.


This part could even be written in Python.

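As a sketch of that pure-Python part (a hypothetical `call_once` name and behaviour, assuming it is meant to be functionally equivalent to an unbounded lru_cache on a zero-argument function):

```python
from functools import lru_cache

def call_once(func):
    """Cache the single result of a zero-argument function.

    On current releases this is just lru_cache in disguise; a future
    Python could swap in a faster dedicated implementation here without
    callers having to change anything.
    """
    if func.__code__.co_argcount != 0:
        raise TypeError("call_once() requires a zero-argument function")
    return lru_cache(maxsize=None)(func)

calls = 0

@call_once
def init():
    global calls
    calls += 1
    return "ready"

assert init() == "ready" and init() == "ready"
assert calls == 1   # the body ran exactly once
```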

3. Implementing a thread-safe (or even non-thread safe) "call_once" method is 
non-trivial


Agree that this is certainly true. But presumably we should be making 
lru_cache thread safe if it isn't.
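For illustration, a minimal thread-safe call-once can be built in pure Python with double-checked locking (a sketch that relies on the GIL's ordering guarantees, not a vetted implementation; `once` is a hypothetical name):

```python
import threading

def once(func):
    """Minimal thread-safe call-once sketch using double-checked locking."""
    lock = threading.Lock()
    done = False
    result = None

    def wrapper():
        nonlocal done, result
        if not done:                 # fast path after the first call
            with lock:
                if not done:         # re-check while holding the lock
                    result = func()
                    done = True
        return result

    return wrapper

counter = 0

def work():
    global counter
    counter += 1
    return counter

w = once(work)
threads = [threading.Thread(target=w) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert w() == 1 and counter == 1   # the body ran exactly once
```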



4. It complements the lru_cache and cached_property methods currently present 
in functools.


It's unfortunate that cached_property doesn't work at module level (as 
was pointed out on the other threads - thanks for linking those, BTW).


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/JNIOOBGOMTNGQTSRCBDBS7WAT4H65A4P/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP: Modify the C API to hide implementation details

2020-04-14 Thread Steve Dower

On 14Apr2020 1557, André Malo wrote:

Stefan Behnel wrote:

André Malo schrieb am 14.04.20 um 13:39:

A good way to test that promise (or other implications like performance)
might also be to rewrite the standard library extensions in Cython and
see where it leads.



Not sure I understand what you're saying here. stdlib extension modules are
currently written in C, with a bit of code generation. How is that
different?


They are C extensions like the ones everybody could write. They should use the
same APIs. What I'm saying is, that it would be a good test if the APIs are
good enough (for everybody else). If, say, Cython is recommended, some attempt
should be made to achieve the same results with Cython. Or some other sets of
APIs which are considered for "the public".

I don't think, the current stdlib modules restrict themselves to a limited
API. The distinction between "inside" and "outside" bothers me.


It should not bother you. The standard library is not a testing ground 
for the public API - it's a layer to make those APIs available to users 
in a reliable, compatible format. Think of it like your C runtime, which 
uses a lot of system calls that have changed far more often than libc.


We can change the interface between the runtime and the included modules 
as frequently as we like, because it's private. And we do change them, 
and the changes go unnoticed because we adapt both sides of the contract 
at once. For example, we recently changed the calling conventions for 
certain functions, which didn't break anyone because we updated the 
callers as well. And we completely reimplemented stat() emulation on 
Windows recently, which wasn't incompatible because the public part of 
the API didn't change (except to have fewer false errors).


Modules that are part of the core runtime deliberately use private APIs 
so that other extension modules don't have to. It's not any sort of 
unfair advantage - it's a deliberate aspect of the software's design.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/RFYAK5NDBY4DYHKBYOQK5SUKMIT4VZZX/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP: Modify the C API to hide implementation details

2020-04-13 Thread Steve Dower

On 13Apr2020 2308, André Malo wrote:

For one thing, if you open up APIs for Cython, they're open for everybody
(Cython being "just" another C extension).
More to the point: The ABIs have the same problem as they have now, regardless
how responsive the Cython developers are. Once you compiled the extension,
you're using the ABI and are supposedly not required to recompile to stay
compatible.

So, where I'm getting at is: Either you open up to everybody or nobody. In C
there's not really an in-between.


On a technical level, you are correct.

On a policy level, we don't make changes that would break users of the C 
API. Because we can't track everyone who's using it, we have to assume 
that everything is used and any change will cause breakage.


To make sure it's possible to keep developing CPython, we declare parts 
of the API off limits (typically by prepending them with an underscore). 
If you use these, and you break, we're sorry but we aren't going to fix it.


This line of discussion is basically saying that we would designate a 
broader section of the API that is off limits, most likely the parts 
that are only useful for increased performance (rather than increased 
functionality). We would then specifically include the Cython 
team/volunteers in discussions about how to manage changes to these 
parts of the API to avoid breaking them, and possibly do simultaneous 
releases to account for changes so that their users have more time to 
rebuild.


Effectively, when we change our APIs, we would break everyone except 
Cython because we've worked with them to avoid the breakage. Anyone else 
using it has to make their own effort to follow CPython development and 
detect any breakage themselves (just like today).


So probably the part you're missing is where we would give ourselves 
permission to break more APIs in a release, while simultaneously 
encouraging people to use Cython as an isolation layer from those breaks.


(Cython is still just a placeholder name here, btw. There are 1-2 other 
projects that could be considered instead, though I think Cython is the 
only one that also provides a usability improvement as well as API 
stability.)


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/BPFQKMXTMVVSFVFEAJRXAPVQEZE3HMFN/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP: Modify the C API to hide implementation details

2020-04-13 Thread Steve Dower

On 13Apr2020 2105, Chris Meyer wrote:
How would I call a Python function from the C++ application that returns 
a Python object to C++ and then call a method on that Python object from 
C++?


My specific example is that I create Python handlers for Qt windows and 
then from the Qt/C++ I call methods on those Python objects from C++ 
such as “handle mouse event”.


You're in a bit of trouble here regardless, depending on how robust you 
need to be. If you've only got synchronous, single-threaded event 
handlers then you'll be okay. Anything more complex and you'll have some 
fun debugging sessions to look forward to.


I would definitely say look at PyBind11. A while ago I posted a sample 
using this to embed Python in a game engine at 
https://devblogs.microsoft.com/python/embedding-python-in-a-cpp-project-with-visual-studio/ 
(VS is not required, it just happened to be the hook to do the 
post/video ;) )


To jump straight to the code, go to 
https://github.com/zooba/ogre3d-python-embed/blob/master/src/PythonCharacter.cpp 
and search for "py::", and also 
https://github.com/zooba/ogre3d-python-embed/blob/master/src/ogre_module.h


PyBind11 is nice for avoiding the boilerplate and ref-counting, but has 
its own set of obscure error cases. It's also not as easy to debug as 
Cython or going straight to the Python C API, depending on what the 
issue is, as there's no straightforward generated code. Even stepping 
through the templated code interactively in VS doesn't help make it any 
easier to follow.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/GVXH2AJC7Z2F5AIBIMCXEDKXLEYVCCU4/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP: Modify the C API to hide implementation details

2020-04-13 Thread Steve Dower

On 13Apr2020 1325, Paul Moore wrote:

Personally, I'd say that "recommended 3rd party tools" reads as saying
"if you want a 3rd party tool to build extensions, these are good (and
are a lot easier than using the raw C API)". That's a lot different
than saying "we recommend that people writing C extensions do not use
the raw C API, but use one of these tools instead".


Yeah, that's fair. But at the same time, saying anything more strong is 
an endorsement that we might have to withdraw at some point in the 
future (if the project we recommend implodes, for example).



Also, if we *are* going to push people away from the raw C API, then I
think we should be recommending a particular tool (likely Cython) as
what people writing their first extension (or wanting to switch from
the raw C API for the first time) should use. Faced with the API docs,
and a list of 3rd party options, I know that *I* am likely to say
"yeah, leave that research for another day, I'll use what's in the
docs in front of me for now". Also, if we are expecting to push people
towards 3rd party tools, that seems to me to be a relatively
significant shift in emphasis, and one we should be publicising more
directly (via What's New, and blog postings / release announcements,
etc.) In the absence of anything like that, I think it's quite
reasonable for people to gravitate towards the traditional C API.


Right, except we haven't decided to do it yet. There's still a debate 
about whether the current third party tools are even sufficient (not to 
mention what "sufficient" means).



Having said all this, I *do* think that promoting some 3rd party tool
(as I say, I suspect this would be Cython) as the recommended means of
writing C extensions, is a reasonable approach to take. I just object
to it happening "quietly" via changes like this which make it harder
to use the raw C API, justifying themselves by saying "you shouldn't
do that anyway".


Agreed, I'd rather be up front about it.


On a related but different note, what is the recommended policy
(assuming it's not to use the C API) for embedding Python, and for
exposing the embedding app to Python as a C extension? My standard
example of this is the Vim interface to Python - see
https://github.com/vim/vim/blob/master/src/if_python3.c. I originally
wrote this back in the Python 1.5 days, so it's *very* old, and quite
likely not how I'd write it now, even using the C API. But what's the
recommendation for code like that in the face of these changes, and
the suggestion that using 3rd party tools is the normal way to write C
extensions?


I don't think any current 3rd party tools really help with embedding (I 
say that as a regular embedder, not as someone who skim-read their 
docs). In this case, you really do need low-level access to Python's 
thread and memory management, and the ability to interact directly with 
the rest of your application's data structures.


PyBind11 is the best I've used here - Cython insists on including all 
its boilerplate to make a complete module, which often is not what you 
want. But there's a lot of core things that need to be improved if 
embedding is going to get any better, as I've posted often enough. We 
can't rely on third-party tools here, yet.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/F6F6HQPSOEEQKLW2M6OQSSVMWXZHQ6Y3/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP: Modify the C API to hide implementation details

2020-04-13 Thread Steve Dower

On 13Apr2020 1122, Steve Dower wrote:

On 11Apr2020 0111, Victor Stinner wrote:

Steve: the use case is to debug very rare Python crashes (ex: once
every two months) of customers who fail to provide a reproducer. My
*expectation* is that a debug build should help to reproduce the bug
and/or provide more information when the bug happens. My motivation
for this feature is also to show that the bug is not on Python but in
third-party C extensions ;-)


I think your expectation is wrong. If a stack trace of the crash doesn't 
show that it belongs to the third party module (which most of the ones 
that are sent back on Windows indeed show), then you need more invasive 
tracing to show that the issue came from the module. Until we actually 
have opaque, non-static objects, that doesn't seem to be possible.


All you've done right now is enable new inconsistencies and potential 
issues when mixing debug and release builds. That just makes things 
harder to diagnose.


I think what you really wanted to do here was have a build option 
_other_ than the debug flag to turn on additional checks. Like you did 
with tracemalloc.


The debug flag turns on additional runtime checks in the underlying C 
compiler and runtime on Windows (and I presume elsewhere? Is this such a 
crazy idea?), such as buffer overrun detection and memory misuse. The 
only way to make a debug build properly compatible with a release build 
is to disable these checks, which leaves us completely unable to take 
advantage of them. It also significantly speeds up compile time, which 
is very useful as a developer.


But if your goal is to have a release build that includes additional 
ABI-transparent checks, then I don't see why you wouldn't just build 
with those options? It's not like CPython takes that long to build from 
a clean working directory.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/AY3QEYN6JEQFEZVJ2MUT5A2SJ5I72RAS/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP: Modify the C API to hide implementation details

2020-04-13 Thread Steve Dower

On 13Apr2020 1157, Antoine Pitrou wrote:

On Mon, 13 Apr 2020 11:35:34 +0100
Steve Dower  wrote:

and this code
that they're using doesn't have any system dependencies that differ in
debug builds (spoiler: they do).


Are you talking about Windows?  On non-Windows systems, I don't think
there are "system dependencies that differ in debug builds".


Of course I'm talking about Windows. I'm about the only person here who
does, and I'm having to represent at least half of our overall userbase
(look up my PyCon 2019 talk for the charts).


Ok :-)  However, Victor's point holds for non-Windows platforms, which
is *also* half of our userbase.


True, though probably not the half sending him binary extension modules 
that nobody can rebuild ;)


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/CFJXEYIVPCJWCL27WN4BKEYN2RBKY3O5/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP: Modify the C API to hide implementation details

2020-04-13 Thread Steve Dower

On 11Apr2020 0025, Antoine Pitrou wrote:

On Fri, 10 Apr 2020 23:33:28 +0100
Steve Dower  wrote:

On 10Apr2020 2055, Antoine Pitrou wrote:

On Fri, 10 Apr 2020 19:20:00 +0200
Victor Stinner  wrote:


Note: Cython and cffi should be preferred to write new C extensions.
This PEP is about existing C extensions which cannot be rewritten with
Cython.


Using Cython does not make the C API irrelevant.  In some
applications, the C API has to be low-level enough for performance.
Whether the application is written in Cython or not.


It does to the code author.

The point here is that we want authors who insist on coding against the
C API to be aware that they have fewer compatibility guarantees [...]


Yeah, you missed the point of my comment here.  Cython *does* call into
the C API, and it's quite insistent on performance optimizations too.
Saying "just use Cython" doesn't make the C API unimportant - it just
hides it from your own sight.


It centralises the change. I have no problem giving Cython access to 
things that we discourage every developer from using, provided they 
remain responsive to change and use the special access responsibly (e.g. 
by not touching reserved fields at all).


We could do a better job of helping them here.


**Backward compatibility:** backward incompatible on purpose. Break the
limited C API and the stable ABI, with the assumption that `Most C
extensions don't rely directly on CPython internals`_ and so will remain
compatible.


The problem here is not only compatibility but potential performance
regressions in C extensions.


I don't think we've ever guaranteed performance between releases.
Correctness, sure, but not performance.


That's a rather weird argument.  Just because you don't guarantee
performance doesn't mean it's ok to introduce performance regressions.

It's especially a weird argument to make when discussing a PEP where
most of the arguments are distant promises of improved performance.


If you've guaranteed compatibility but not performance, it means you can 
make changes that prioritise compatibility over performance.


If you promise to keep everything the same, you can never change 
anything. Arguing that everything is an implied contract between major 
version releases is the weird argument.



Fork and "Copy-on-Read" problem
...

Solve the "Copy on read" problem with fork: store reference counter
outside ``PyObject``.


Nowadays it is strongly recommended to use multiprocessing with the
"forkserver" start method:
https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods

With "forkserver", the forked process is extremely lightweight and
there are little savings to be made in the child.


Unfortunately, a recommendation that only applies to a minority of
Python users. Oh well.


Which "minority" are you talking about?  Neither of us has numbers, but
I'm quite sure that the population of Python users calling into
multiprocessing (or a third-party library relying on multiprocessing,
such as Dask) is much larger than the population of Python users
calling fork() directly and relying on copy-on-write for optimization
purposes.

But if you have a different experience to share, please do so.


Neither Windows nor macOS supports fork (macOS only recently).

Break that down however you like, but by number of *developers* (as 
opposed to number of machines), and factoring in those who care about 
cross-platform compatibility, fork is not a viable thing to rely on.



Separating refcounts theoretically improves cache locality, specifically
the case where cache invalidation impacts multiple CPUs (and even the
case where a single thread moves between CPUs).


I'm a bit curious why it would improve, rather than degrade, cache
locality. If you take the typical example of the eval loop, an object
is incref'ed and decref'ed just about the same time that it gets used.


Two CPUs can read the contents of a string from their own cache. As soon 
as one touches the refcount, the cache line containing both the refcount 
and the string data in the other CPU is invalidated, and now it has to 
wait for synchronisation before reading the data.


If the refcounts are in a separate cache line, this synchronization 
doesn't have to happen.



I'll also note that the PEP proposes to remove APIs which return
borrowed references... yet increasing the number of cases where
accessing an object implies updating its refcount.


Yeah, I'm more okay with keeping borrowed references in some cases, but 
it does make things more complicated. Apparently some developers get it 
wrong consistently enough that we have to fix it? (ALL developers get it 
wrong during development ;) )



and this code
that they're using doesn't have any system dependencies that differ in
debug builds (spoiler: they do).


Are you talking about Windows?  On non-Windows systems, I don't think
there are "system dependencies that differ in debug builds".

[Python-Dev] Re: PEP: Modify the C API to hide implementation details

2020-04-13 Thread Steve Dower

On 11Apr2020 1156, Rhodri James wrote:

On 10/04/2020 18:20, Victor Stinner wrote:

Note: Cython and cffi should be preferred to write new C extensions.
This PEP is about existing C extensions which cannot be rewritten with
Cython.


If this is true, the documentation on python.org needs a serious 
rewrite.  I am in the throes of writing a C extension, and using Cython 
or cffi never even crossed my mind.




Sorry you missed the first two sections: "Recommended third party tools" 
and "Creating extensions without third party tools".


https://docs.python.org/3/extending/index.html

If you have any suggestions on how to make this recommendation more 
obvious, please open an issue and describe what would have helped.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/7QZOSBXNNVQV6LYX3DPKI2RLSJ2K7XRY/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP: Modify the C API to hide implementation details

2020-04-13 Thread Steve Dower

On 11Apr2020 0111, Victor Stinner wrote:

Steve: the use case is to debug very rare Python crashes (ex: once
every two months) of customers who fail to provide a reproducer. My
*expectation* is that a debug build should help to reproduce the bug
and/or provide more information when the bug happens. My motivation
for this feature is also to show that the bug is not on Python but in
third-party C extensions ;-)


I think your expectation is wrong. If a stack trace of the crash doesn't 
show that it belongs to the third party module (which most of the ones 
that are sent back on Windows indeed show), then you need more invasive 
tracing to show that the issue came from the module. Until we actually 
have opaque, non-static objects, that doesn't seem to be possible.


All you've done right now is enable new inconsistencies and potential 
issues when mixing debug and release builds. That just makes things 
harder to diagnose.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/66K6TAPXEDBGSZFJLRNZNKNPCHIPUAAY/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP: Modify the C API to hide implementation details

2020-04-10 Thread Steve Dower

On 10Apr2020 2055, Antoine Pitrou wrote:

On Fri, 10 Apr 2020 19:20:00 +0200
Victor Stinner  wrote:


Note: Cython and cffi should be preferred to write new C extensions.
This PEP is about existing C extensions which cannot be rewritten with
Cython.


Using Cython does not make the C API irrelevant.  In some
applications, the C API has to be low-level enough for performance.
Whether the application is written in Cython or not.


It does to the code author.

The point here is that we want authors who insist on coding against the 
C API to be aware that they have fewer compatibility guarantees - maybe 
even to the point of needing to rebuild for each minor version if you 
want to insist on using macros (i.e. anything starting with "_Py").



Examples of issues to make structures opaque:

* ``PyGC_Head``: https://bugs.python.org/issue40241
* ``PyObject``: https://bugs.python.org/issue39573
* ``PyTypeObject``: https://bugs.python.org/issue40170


How do you keep fast type checking such as PyTuple_Check() if extension
code doesn't have access e.g. to tp_flags?


Measured in isolation, sure. But what task are you doing that is being 
held up by builtin type checks?


If the type check is the bottleneck, you need to work on more 
interesting algorithms ;)



I notice you did:
"""
Add fast inlined version _PyType_HasFeature() and _PyType_IS_GC()
for object.c and typeobject.c.
"""

So you understand there is a need.


These are private APIs.


**Backward compatibility:** backward incompatible on purpose. Break the
limited C API and the stable ABI, with the assumption that `Most C
extensions don't rely directly on CPython internals`_ and so will remain
compatible.


The problem here is not only compatibility but potential performance
regressions in C extensions.


I don't think we've ever guaranteed performance between releases. 
Correctness, sure, but not performance.



New optimized CPython runtime
=============================

Backward incompatible changes are such a pain for the whole Python
community. To ease the migration (and accelerate adoption of the new C
API), one option is to provide not one but two CPython runtimes:

* Regular CPython: fully backward compatible, support direct access to
   structures like ``PyObject``, etc.
* New optimized CPython: incompatible, cannot import C extensions which
   don't use the limited C API, has new optimizations, limited to the C
   API.


Well, this sounds like a distribution nightmare.  Some packages will
only be available for one runtime and not the other.  It will confuse
non-expert users.


Agreed (except that it will also confuse expert users). Doing "Python 
4"-by-stealth like this is a terrible idea.


If it's incompatible, give it a new version number. If you don't want a 
new version number, maintain compatibility. There are no alternatives.



O(1) bytearray to bytes conversion
..................................

Convert bytearray to bytes without memory copy.

Currently, bytearray is used to build a bytes string, but it's usually
converted into a bytes object to respect an API. This conversion
requires allocating a new memory block and copying the data (O(n) complexity).

It would be possible to implement O(1) conversion if ownership of the
bytearray's buffer could be passed to the bytes object.

That requires modifying the ``PyBytesObject`` structure to support
multiple storages (support storing content into a separate memory
block).


If that's desirable (I'm not sure it is), there is a simpler solution:
instead of allocating a raw memory area, bytearray could allocate... a
private bytes object that you can detach without copying it.


Yeah, I don't see the point in this one, unless you mean a purely 
internal change. Is this a major bottleneck?


Having a broader concept of "freezable" objects may be a valuable thing 
to enable in a new runtime, but retrofitting it to CPython doesn't seem 
likely to have a big impact.
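The cost under discussion is easy to observe at the Python level: converting a bytearray to bytes today always copies. A minimal demonstration:

```python
# Show that bytes(bytearray) copies the data (O(n)): mutating the
# original bytearray afterwards does not affect the bytes object,
# proving the two do not share a buffer.
ba = bytearray(b"hello world")
frozen = bytes(ba)      # allocates a new block and copies every byte
ba[0] = ord(b"H")       # mutate the original in place

print(frozen)           # b'hello world' -- unchanged, so it was copied
print(ba)               # bytearray(b'Hello world')
```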



Fork and "Copy-on-Read" problem
...............................

Solve the "Copy on read" problem with fork: store reference counter
outside ``PyObject``.


Nowadays it is strongly recommended to use multiprocessing with the
"forkserver" start method:
https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods

With "forkserver", the forked process is extremely lightweight and
there are little savings to be made in the child.


Unfortunately, a recommendation that only applies to a minority of 
Python users. Oh well.


Separating refcounts theoretically improves cache locality, specifically 
the case where cache invalidation impacts multiple CPUs (and even the 
case where a single thread moves between CPUs). But I don't think 
there's been a convincing real benchmark of this yet.
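The "copy-on-read" problem exists because the reference count lives inside the object header itself, so even read-only use of an object writes to its memory page (which is what defeats fork's copy-on-write sharing). A small CPython-specific illustration:

```python
import sys

obj = object()
before = sys.getrefcount(obj)

# Merely binding another name to the object writes to the object's own
# header (ob_refcnt). After fork(), that write is enough to dirty the
# whole page the object lives on.
alias = obj
after = sys.getrefcount(obj)

print(after - before)   # 1 -- the object's memory was mutated just to track the alias
```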



Debug runtime and remove debug checks in release mode
.....................................................

If the C extensions are no longer tied to CPython internals, it becomes
possible to switch to a Python runtime built in debug mode 

[Python-Dev] Re: Need help with test_ctypes failing on Windows (test_load_dll_with_flags)

2020-04-07 Thread Steve Dower
FWIW, this test is meant to verify that the old, unsafe DLL load logic 
still works.


I suspect what has happened here is that a new VM image has been rolled 
out and another app has installed an incompatible _sqlite3.dll on PATH 
(most likely another copy of Python :) ), thereby proving why the old 
logic is unsafe.


We've disabled the test for now, so if you merge and resubmit it should 
be fine.


Now we just have to decide whether to disable this part of the test 
forever, or try and manipulate the test environment enough to make it 
pass (which I suspect is just setting PATH back to a sane value).


Sorry for the inconvenience!

Cheers,
Steve

On 07Apr2020 0420, Kyle Stanley wrote:
Looking over the commit history for the PR 
(https://github.com/python/cpython/pull/18239/commits), it looks like 
that specific Azure Pipelines failure did not start occurring until 
upstream/master was merged into the PR branch 
(https://github.com/python/cpython/pull/18239/commits/13d3742fd897e1ea77060547de6d8445877e820e). 
Therefore, I suspect that the failure is very likely unrelated to the 
PR; instead either an intermittent failure that was merged into master 
recently or a possible issue on Azure's end. For now, I'd suggest 
closing and re-opening the PR again tomorrow to see if the failure still 
occurs.


Note: I'm also seeing the same exact failure occur in the following 
separate CPython PRs that were opened recently:


https://github.com/python/cpython/pull/19403
https://github.com/python/cpython/pull/19402
https://github.com/python/cpython/pull/19399

Seeing as it was also occurring in entirely unrelated PRs, it seems to 
be unrelated to the PEP 585 PR. I'm not seeing a BPO issue for this 
failure, so I'll open a new one for it.


On Mon, Apr 6, 2020 at 10:24 PM Guido van Rossum wrote:


I have a large PR (https://github.com/python/cpython/pull/18239, for
PEP 585) that's failing in the Azures pipeline on Win32 and Win64
only. My trusty assistant who has a Windows laptop couldn't
reproduce the failure. Can I buy a hint from someone? Steve?

The relevant failure output is:

======================================================================
ERROR: test_load_dll_with_flags
(ctypes.test.test_loading.LoaderTest) [WinDLL('_sqlite3.dll',
winmode=0)]
----------------------------------------------------------------------
Traceback (most recent call last):
   File "d:\a\1\s\lib\ctypes\test\test_loading.py", line 140, in
should_pass
     subprocess.check_output(
   File "d:\a\1\s\lib\subprocess.py", line 420, in check_output
     return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
   File "d:\a\1\s\lib\subprocess.py", line 524, in run
     raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command
'['d:\\a\\1\\s\\PCbuild\\win32\\python.exe', '-c', "from ctypes
import *; import nt;WinDLL('_sqlite3.dll', winmode=0)"]' returned
non-zero exit status 1.

----------------------------------------------------------------------

-- 
--Guido van Rossum (python.org/~guido)

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/LNSRMTOJUHAU2JLC2OA4NXWHURGPO5LK/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 616 -- String methods to remove prefixes and suffixes

2020-03-24 Thread Steve Dower

On 24Mar2020 1849, Brett Cannon wrote:

-1 on "cut*" because my brain keeps reading it as "cute".
+1 on "trim*" as it is clear what's going on and no confusion with preexisting 
methods.
+1 on "remove*" for the same reasons as "trim*".

And if no consensus is reached in this thread for a name I would assume the SC is going 
to ultimately decide on the name if the PEP is accepted, as the burden of being known as 
"the person who chose _those_ method names on str" is more than any one person 
should have to bear. ;)


-1 on "cut*" (feels too much like what .partition() does)
-0 on "trim*" (this is the name used in .NET instead of "strip", so I 
foresee new confusion)

+1 on "remove*" (because this is exactly what it does)
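The behavior difference that motivated a distinct name is worth spelling out: strip()/lstrip() treat their argument as a character *set*, while the new methods (which landed in Python 3.9 as removeprefix/removesuffix) remove one exact substring at most once. A quick sketch (requires 3.9+):

```python
url = "www.wikipedia.org"

# lstrip() strips *characters* from the set {'w', '.'}, so it eats too much:
print(url.lstrip("w."))          # 'ikipedia.org' -- the 'w' of 'wikipedia' is gone

# removeprefix() removes the exact substring, at most once:
print(url.removeprefix("www."))  # 'wikipedia.org'

# And it is a no-op when the prefix is absent:
print("wikipedia.org".removeprefix("www."))  # 'wikipedia.org'
```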

Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/KVU75BNXIUBIOYM6ZJSPZSKNRS7Y6CYU/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: [bpo-22699] Cross-compiling and fixing sysconfig

2020-03-19 Thread Steve Dower
Distutils is learning slowly, but this is about the setup.py that's used 
to build CPython's own extension modules (everything in 
lib/python3.8/lib-dynload).


Questions about building third-party packages go to distutils-sig, not 
python-dev :)


Cheers,
Steve

On 19Mar2020 2034, Ivan Pozdeev via Python-Dev wrote:
Last time I checked, distutils didn't support compilation for anything 
but the running Python instance, nor was it intended to. Should it?
If not, the efforts look misplaced, you should rather use a toolchain 
that does...


On 19.03.2020 23:22, Steve Dower wrote:
So over on https://bugs.python.org/issue22699 I've been talking to 
myself as I figure out all the ways that cross-compiling (on Ubuntu, 
targeting another Linux distro via an SDK) fails.


In short, it's either because sysconfig can't provide details about 
any CPython install other than the currently running one, or it's 
because our setup.py (for building the CPython extension modules) uses 
sysconfig.


Either way, I'm not about to propose a rewrite to fix either of them 
without finding those who are most involved in these areas. Please 
come join me on the bug.


Thanks,
Steve

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/XMPNAGOCABV6LD7PO3ZRWH4KZJ6E72S2/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] [bpo-22699] Cross-compiling and fixing sysconfig

2020-03-19 Thread Steve Dower
So over on https://bugs.python.org/issue22699 I've been talking to 
myself as I figure out all the ways that cross-compiling (on Ubuntu, 
targeting another Linux distro via an SDK) fails.


In short, it's either because sysconfig can't provide details about any 
CPython install other than the currently running one, or it's because 
our setup.py (for building the CPython extension modules) uses sysconfig.


Either way, I'm not about to propose a rewrite to fix either of them 
without finding those who are most involved in these areas. Please come 
join me on the bug.


Thanks,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/6DBYMDCDLOS245XK57BD3E2GXGVDMBPX/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Proliferation of tstate arguments.

2020-03-17 Thread Steve Dower

On 17Mar2020 1803, Chris Angelico wrote:

On Wed, Mar 18, 2020 at 3:50 AM Mark Shannon  wrote:

The accessibility of a thread-local variable is a strict superset of
that of a function-local variable.

Therefore storing the thread state in a thread-local variable is at
least as capable as passing thread-state as a parameter.



And by that logic, globals are even more capable. I don't understand
your point. Isn't the purpose of the tstate parameters to avoid the
problem of being unable to have multiple tstates within the same OS
thread? I think I've missed something here.


You haven't. Separating the Python thread from the "physical" thread is 
indeed the point.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/GJDHY2AEF2BSBMNWZRP5IXVEGSWB6IF4/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Proliferation of tstate arguments.

2020-03-17 Thread Steve Dower

On 17Mar2020 1447, Mark Shannon wrote:

On 16/03/2020 3:04 pm, Victor Stinner wrote:

In short, the answer is yes.


I said "no" then and gave reasons. AFAICT no one has faulted my reasoning.


I said "yes" then and was also not faulted.

Let me reiterate why using a thread-local variable is better than 
passing the thread state down the C stack.


1. Using a thread-local variable for the thread state requires much 
smaller changes to the code base.


Using thread-local variables enforces a threading model on the host 
application, rather than working with the existing threading model. So 
anyone embedding is forced into *significantly* more code as a result.


We can (and should) maintain a public-facing API that uses TLS to pass 
the current thread state around - we have compatibility constraints. But 
we can also add private versions that take the thread state (once you've 
started trying/struggling to really embed CPython, you'll happily take a 
dependency on "private" APIs).


If the only available API requires TLS, then you're likely to see the 
caller wrap it all up in a function that updates TLS before calling. Or 
alternatively, introduce dedicated threads for running Python snippets 
on, and all the (dead)locking that results (yes, I've done both).
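The same tension can be sketched at the Python level (an analogy only, not the C API itself): state fetched from thread-local storage is ambient and tied to the current OS thread, while state passed as a parameter can be used from any thread without "installing" anything first.

```python
import threading

# Style 1: ambient state in thread-local storage. Only code running on
# the thread that set it can see it, so callers on other threads must
# first install the state -- the wrapping described above.
_tls = threading.local()

def do_work_ambient():
    return _tls.state * 2          # implicitly depends on the current thread

# Style 2: explicit state. Any thread can call this with any state
# object -- the property that passing tstate as a parameter gives embedders.
def do_work_explicit(state):
    return state * 2

_tls.state = 21
print(do_work_ambient())           # 42, but only on this thread

results = []
t = threading.Thread(target=lambda: results.append(do_work_explicit(21)))
t.start()
t.join()
print(results[0])                  # 42 -- no TLS setup needed on the new thread
```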


Our goal as core CPython developers should be to sacrifice our own 
effort to reduce the effort needed by our users, not to do things that 
make our own lives easier but harm them.


2. Using a thread-local variable is less error prone. When passing 
tstate as a parameter, what happens if the tstate argument is from a 
different thread or is NULL? Are you adding checks for those cases?

What are the performance implications of adding those checks?


Undefined behaviour is totally acceptable here. We can assert in debug 
builds - developers who make use of this can test with debug builds.


3. Using a thread-local variable is likely to be a little bit faster. 
Passing an argument down the stack increases register pressure and spills.
Accessing a thread-local is slower at the point of access, but the cost 
is incurred only when it is needed, so is cheaper overall.


Compilers can optimise parameters/locals in ways that are far more 
efficient than they can do for anything stored outside the call stack. 
Especially for internal calls. Going through public/exported functions 
is a little more restricted in terms of optimisations, but if we 
identify an issue here then we can work on that then.


[OTHER POST]

Just to be clear, this is what I mean by a thread local variable:
https://godbolt.org/z/dpSo-Q


Showing what one particular compiler generates for one particular 
situation is terrible information (I won't bother calling it "evidence").



One motivation is to ease the implementation of subinterpreters (PEP
554). But PEP 554 describes more than public API than the
implementation.


I don't see how this eases the implementation of subinterpreters.
Surely it makes it harder by causing merge conflicts.


That's a very selfish point-of-view :)

It eases it because many more operations need to know the current Python 
"thread" in order to access things that used to be globals, such as 
PyTypeObject instances. Having the thread state easily and efficiently 
accessible does make a difference here.



The long-term goal is to be able to run multiple isolated interpreters
in parallel.


An admirable goal, IMO.
But how is it to be done? That is the question.


By isolating thread states properly from global state.


Sorry about that :-/ A lot of Python internals should be modified to
implement subinterpreters.


I don't think they *should*. When they *must*, then do that.
Changes should only be made if necessary for correctness or for a 
significant improvement of performance.


They must - I think Victor just chose the wrong English word there. 
Correctness is the first thing to fall when you access globals in 
multithreaded code, and the CPython code base accesses a lot of globals.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/5FB5PNH2H72ZAAN7TEDR3NS45H34ELEY/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Maintenance of multiprocessing module: there are a few stalled issues/patches

2020-02-13 Thread Steve Dower




On 13Feb2020 0156, Benjamin Peterson wrote:



On Wed, Feb 12, 2020, at 08:22, mailer@app.tempr.email wrote:

I've just been looking through the multiprocessing module and open
issues and wondered why there were some small bugs/patches not being
fixed/merged. Is this the "normal" patch cycle? Does it take years for
bugs to get fixed in Python, even though patches are submitted? Just
asking, I realize this sounds very negative, but I don't mean to be
criticizing. Doing volunteer work myself, I understand that time is
valuable and not always available. But I would have thought that there
was no shortage of volunteers for Python.


Sadly, that is not that case.


The challenge is that the final review/merge steps require *trusted* 
volunteers, not just anyone with a bit of time.


Anything that actually becomes part of Python is going to impact 
millions of people, so we have a responsibility to take that seriously 
and account for many more aspects than simply "does this fix my 
problem". As a result, volunteers have to develop a reputation before 
they are given permissions to sign off on changes by themselves, which 
becomes a bottleneck in taking on new volunteers. But it's also tough to 
build up a reputation when there's nobody to review your work, so the 
bottleneck becomes a cycle.


Unlike a business, we don't have legal protection/recourse for bad 
decisions. When things break, other volunteers just have to stand up and 
cop the blame on behalf of the whole team. So we all stake our 
reputations on every new core developer, which also slows down the process.


(FWIW, this is the same feedback I posted on that survey about hiring 
developers. "Sufficiently financially motivated" can be a reason to 
trust someone, but maybe not, and many in our community see it as a very 
good reason to openly *distrust* someone...)


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/5YWLOSICSB3JJ6DZUNVIK6EIMZWIONZZ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Azure Pipelines PR: Unresponsive agent

2020-02-01 Thread Steve Dower
On 01Feb2020 1840, Kyle Stanley wrote:
> In a recent PR (https://github.com/python/cpython/pull/18057), I
> received the following error message in the Azure Pipelines build results:
> 
> ##[error]We stopped hearing from agent Azure Pipelines 5. Verify the
> agent machine is running and has a healthy network connection. Anything
> that terminates an agent process, starves it for CPU, or blocks its
> network access can cause this error. For more information, see:
> https://go.microsoft.com/fwlink/?linkid=846610
> 
> Build:
> https://dev.azure.com/Python/cpython/_build/results?buildId=57319&view=results
> 
> Is there something on our end we can do to bring the agent back online,
> or should I simply wait a while and then try to restart the PR checks?
> Normally I'd avoid doing that, but in this case it's entirely unrelated
> to the PR.

I think we're at the point where it's probably okay to disable Azure
Pipelines as a required check and replace it with the GitHub Actions checks.

But ignoring that for now, I think it's probably best to re-run CI
(close/reopen). Just in case the change did actually cause a problem
that may only show up on that particular configuration. The agent isn't
actually within our control, so it'll be recreated automatically.

(FWIW, the two failing buildbots on the PR are unsupported for 3.9, but
haven't been disabled yet.)

Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/S6R3KH2IVOEXHU2GNESTOGZDZFUYZYTC/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Request to postpone some Python 3.9 incompatible changes to Python 3.10

2020-01-27 Thread Steve Dower
On 25Jan.2020 0348, Guido van Rossum wrote:
> On Fri, Jan 24, 2020 at 12:12 AM Serhiy Storchaka wrote:
> 
> I consider breaking unmaintained code is an additional benefit of
> removing deprecated features.
> 
> 
> I'd like to warn against this attitude (even though in the past I've
> occasionally said such things). I now  think core Python should not be
> so judgmental. We've broken enough code for a lifetime with the Python 2
> transition. Let's be *much* more conservative when we remove things from
> Python 3. Deprecation is fine, and we should look for other ways to
> handle the problem of unmaintained code. But we should not rush language
> or stdlib changes for this purpose.

I'd like to *strongly* agree with this sentiment.

Marking things as deprecated when we don't like them is a perfectly good
way to advise against their use (and give ourselves permission to let
bit-rot set in). But unless they are an actual maintenance burden, we
gain literally nothing by removing them, and actively hurt our
already-hurting users.

As much as we know that 3.x->3.y is a major version change, many of our
users don't think of them like that (in part because they come out so
often). The more we can keep things working between them, warts or not,
the better.

Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/IYQNATVOQ5XVVMZPXTZMTS5JLRGAO2BV/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Should we pass tstate in the VECTORCALL calling convention before making it public?

2020-01-10 Thread Steve Dower

On 09Jan2020 1659, Victor Stinner wrote:

On Thu, Jan 9, 2020 at 19:33, Steve Dower wrote:

Requiring an _active_ Python thread (~GIL held) to only run on a single
OS thread is fine, but tying it permanently to a single OS thread makes
things very painful. (Of course, this isn't the only thing that makes it
painful, but if we're going to make this a deliberate design of CPython
then we are essentially excluding entire interesting classes of embedding.)


Do you have an use case where you reuse the same Python thread state
in different native threads?


I do, but I can't share exactly what it is (yet) :)

But essentially, the main application has its own threading setup where 
callbacks happen on its own threadpool. Some of those callbacks 
sometimes have to raise Python events, but all of those events should be 
serialized (i.e. *no* Python code should run in parallel, even though 
the app's callbacks can).


Note that this is not arbitrary CPython code. It is a very restricted 
context (only first-party modules, no threading/ctypes/etc.).


Ideally, we'd just lock the Python thread and run the Python code and 
get the result back directly. Instead, we've had to set up a parallel 
Python thread that is constantly running, and build a complicated 
cross-thread marshalling setup in order to simulate synchronous calls. 
(And also argue with the application architects who are very against 
mixing threading models in their app, unsurprisingly.)



Currently, the Python runtime expects to have a different Python
thread state per native thread. For example, PyGILState_Ensure()
creates one if there is no Python thread state associated to the
current thread.


And ultimately, everything above is just because of this assumption. 
Turns out that simply transferring the thread state and setting it in 
TLS is *nearly* okay most of the time, but not quite. Breaking down the 
assumption that each CPython thread is the only code running on its 
particular OS thread would help a lot here - making the CPython thread 
state a parameter rather than ambient state is the right direction.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/Q7VIABF346X4GQRXDIQIHD4I74GDNF54/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Should we pass tstate in the VECTORCALL calling convention before making it public?

2020-01-09 Thread Steve Dower

On 09Jan2020 0429, Mark Shannon wrote:
There is a one-to-one correspondence between Python threads and O/S 
threads. So the threadstate can, and should, be stored in a thread local 
variable.


This is a major issue for embedders (and potentially asyncio), as it 
prevents integration with existing thread models or event loops.


Requiring an _active_ Python thread (~GIL held) to only run on a single 
OS thread is fine, but tying it permanently to a single OS thread makes 
things very painful. (Of course, this isn't the only thing that makes it 
painful, but if we're going to make this a deliberate design of CPython 
then we are essentially excluding entire interesting classes of embedding.)


Accessing thread local storage is fast. x86/64 uses the fs register to 
point to it, whereas ARM dedicates R15 (I think).


The register used is OS-specific. We do (and should) use the provided 
TLS APIs, but these add overhead.


Thread locals are not "global". Each sub-interpreter will have its own 
pool of threads. Each threadstate object should contain a pointer to its 
sub-interpreter.


I agree with this, but it's an argument for passing PyThreadState 
explicitly, which seems to go against your previous point (unless you're 
insisting that subinterpreters *must* be tied to specific and distinct 
"physical" threads, in which case let's argue about that because I think 
you're wrong :) ).


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/QQRNUZMVGHGL5PLU6RCVKS74SKIYO22I/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Should we pass tstate in the VECTORCALL calling convention before making it public?

2020-01-08 Thread Steve Dower

On 08Jan2020 0746, Victor Stinner wrote:

The question now is whether we should "propagate" tstate to function calls
in the latest VECTORCALL calling convention (which is currently
private).

+1, for all the same reasons I'm +1 on passing it to C functions 
explicitly. (In short, embedding can be very painful when you have to 
manipulate ambient thread state to make Python code work, and this is a 
good step towards getting rid of that requirement.)


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/NK2AMXQXYRTG2JFBNXOP4XEUEGSVORFK/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] GitHub Actions enabled (was: Travis CI for backports not working)

2019-12-16 Thread Steve Dower

On 13Dec2019 0959, Brett Cannon wrote:

Steve Dower wrote:

If people are generally happy to move PR builds/checks to GitHub
Actions, I'm happy to merge https://github.com/zooba/cpython/pull/7
into
our active branches (with probably Brett's help) and disable Azure
Pipelines?


I'm personally up for trying this out on master, making sure everything runs 
fine, and then push down into the other active branches.


This is now running on master (and likely 3.8 and 3.7, at a guess) - you 
can see it on my PR at https://github.com/python/cpython/pull/17628 
(adding badges and making a tweak to when the builds run)


The checks are not required yet - that requires admin powers.

Please just shout out, either here on at 
https://bugs.python.org/issue39041 if you see anything not working.


(Apart from post-merge coverage checks. We know they're not working - 
see https://github.com/python/cpython/runs/351136928 - and if you'd like 
to help fix it you're more than welcome to jump in!)


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/R6VOXOL2HARK6ZHD7OWE4UP7PWTT5A4N/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Should set objects maintain insertion order too?

2019-12-16 Thread Steve Dower

On 16Dec2019 0417, Petr Viktorin wrote:
Originally, making dicts ordered was all about performance (or rather 
memory efficiency, which falls in the same bucket.) It wasn't added 
because it's better semantics-wise.

Here's one (very simplified and maybe biased) view of the history of dicts:

* 2.x: Dicts are unordered, please don't rely on the order.
* 3.3: Dict iteration order is now randomized. We told you not to rely 
on it!
* 3.6: We now use an optimized implementation of dict that saves memory! 
As a side effect it makes dicts ordered, but that's an implementation 
detail, please don't rely on it.
* 3.7: Oops, people are now relying on dicts being ordered. Insisting on 
people not relying on it is battling windmills. Also, it's actually 
useful sometimes, and alternate implementations can support it pretty 
easily. Let's make it a language feature! (Later it turns out 
MicroPython can't support it easily. Tough luck.)


For the record, we missed out on a very memory efficient "frozendict" 
implementation because it can't maintain insertion order - Yury is 
currently proposing it as FrozenMap in PEP 603. 
https://discuss.python.org/t/pep-603-adding-a-frozenmap-type-to-collections/2318


Codifying semantics isn't always the kind of future-proof we necessarily 
want to have :)
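
As an aside, the 3.7 ordering guarantee described in that history is easy 
to demonstrate:

```python
# Since Python 3.7, dict preserves insertion order as a language guarantee.
d = {}
d["banana"] = 3
d["apple"] = 1
d["cherry"] = 2
assert list(d) == ["banana", "apple", "cherry"]

# Deleting and re-inserting a key moves it to the end.
del d["banana"]
d["banana"] = 3
assert list(d) == ["apple", "cherry", "banana"]
```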


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/Y7YP7SKSQOGOCXNXV27ZGSQDUVZRPSPH/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Travis CI for backports not working.

2019-12-13 Thread Steve Dower

On 13Dec2019 0233, Victor Stinner wrote:

Azure Pipelines were very unstable one year ago. It's getting better,
but there are still some random bugs sometimes. They are not really
blocking, so I didn't report them.


The only ones I'm aware of are macOS builds failing (which don't run on 
Travis CI, so the only thing stopping these bugs landing is Azure 
Pipelines) and network related issues. But I'm guessing since you said 
"random" bugs that they don't repro well enough to assign blame properly.



On Travis CI, it's possible to only restart a single job manually when
it's a CI issue (like random networking issue). On Azure Pipelines,
there is no way to restart even all jobs at once. The only workaround
is to close the PR and reopen it. But when you do that on a backport
PR, a bot closes the PR and removes the branch. The backport PR must
be recreated, it's annoying.


The UI is getting better here, but given GitHub Actions now has similar 
CI functionality, I wouldn't expect Pipelines to focus as much on their 
integration with GitHub (in particular, being able to authorize a 
GitHub team to log in to our Pipelines instance - as we can with Travis 
- has been preventing people from rerunning individual jobs).


If people are generally happy to move PR builds/checks to GitHub 
Actions, I'm happy to merge https://github.com/zooba/cpython/pull/7 into 
our active branches (with probably Brett's help) and disable Azure 
Pipelines? (I'd like to keep it running for post-merge builds and the 
manually triggered ones I use for Windows releases and dependencies.)



In short, having multiple CIs is a good thing :-)


Agreed, though it would also be nice to have a way to dismiss a failure 
after investigating and merge anyway. Only repo administrators can do that.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/S4BVOIW2CPPZ5TIDFPH6CPPG5P3OXA34/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Travis CI for backports not working.

2019-12-06 Thread Steve Dower

On 06Dec2019 1023, Kyle Stanley wrote:

Steve Dower wrote:
 > As a related aside, I've been getting GitHub Actions support together
 > (which I started at the sprints).

Would adding support for GitHub Actions make it easier/faster to 
temporarily disable and re-enable specific CI services when they're 
having external issues? IIUC, that seems to be the primary concern to 
address.


Note that I'm not particularly well acquainted with GitHub Actions, 
other than briefly looking over https://github.com/features/actions.


GitHub Actions *is* a CI service now, so my PR is actually using their 
machines for Windows/macOS/Ubuntu build and test.


It doesn't change the ease of enabling/disabling anything - that's still 
very easy if you're an administrator on our repo and impossible if 
you're not :)


However, it does save jumping to an external website to view logs, and 
over time we can improve the integration so that error messages are 
captured properly (but you can easily view each step). And you don't 
need a separate login to rerun checks - it's just a simple button in the 
GitHub UI.


Cheers,
Steve

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/4UYJBJ3KQYASAHPXROFZHXIJAHHPLWPK/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Travis CI for backports not working.

2019-12-06 Thread Steve Dower

On 06Dec2019 0620, Victor Stinner wrote:

What's the status? Right now, I see Travis CI jobs passing on 3.7,
3.8 and master branches so I don't understand the problem. Maybe the
issue has been fixed and Travis CI can be made mandatory again?


They've been passing fine for me too, I'm not quite sure what the issue was.

As a related aside, I've been getting GitHub Actions support together 
(which I started at the sprints). My test PR is at 
https://github.com/zooba/cpython/pull/7 if anyone wants to check out 
what it could look like. Feel free to leave comments there.


(One of these days I'll have to join core-workflow I guess...)

Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/MTD2EQC3BWEGEV3LNKPF3KEJBSQN24LS/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-03 Thread Steve Dower

On 03Dec2019 0815, Mark Shannon wrote:

Hi Everyone,

I am proposing a new PEP, still in draft form, to impose a limit of one 
million on various aspects of Python programs, such as the lines of code 
per module.


I assume you're aiming for acceptance in just under four months? :)


Any thoughts or feedback?


It's actually not an unreasonable idea, to be fair. Picking an arbitrary 
limit less than 2**32 is certainly safer for many reasons, and very 
unlikely to impact real usage. We already have some real limits well 
below 10**6 (such as if/else depth and recursion limits).
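
By way of comparison, CPython's default recursion limit is already three 
orders of magnitude below the proposed cap, and it's adjustable at runtime:

```python
import sys

# CPython's default recursion limit is well below the proposed 10**6 cap.
print(sys.getrecursionlimit())  # 1000 by default in CPython

# It can be raised at runtime if a program genuinely needs deeper recursion.
sys.setrecursionlimit(2000)
assert sys.getrecursionlimit() == 2000
```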


That said, I don't really want to impact edge-case usage, and I'm all 
too familiar with other examples of arbitrary limits (no file system 
would need a path longer than 260 characters, right? :o) ).


Some comments on the specific items, assuming we're not just going to 
reject this out of hand.



Specification
=

This PR proposes that the following language features and runtime values 
be limited to one million.


* The number of source code lines in a module


This one feels the most arbitrary. What if I have a million blank lines 
or comments? We still need the correct line number to be stored, which 
means our lineno fields still have to go beyond 10**6. Limiting total 
lines in a module to 10**6 is certainly too small.



* The number of bytecode instructions in a code object.


Seems reasonable.


* The sum of local variables and stack usage for a code object.


I suspect our effective limit is already lower than 10**6 here anyway - 
do we know what it actually is?



* The number of distinct names in a code object


SGTM.


* The number of constants in a code object.


SGTM.


* The number of classes in a running interpreter.


I'm a little hesitant on this one, but perhaps there's a way to use a 
sentinel for class_id (in your later struct) for when someone exceeds 
this limit? The benefits seem worthwhile here even without the rest of 
the PEP.



* The number of live coroutines in a running interpreter.


SGTM. At this point we're probably putting serious pressure on kernel 
wait objects/FDs, and if you're not waiting then you're probably not 
using coroutines efficiently anyway.



Having 20 bit operands (21 bits for relative branches) allows instructions
to fit into 32 bits without needing additional ``EXTENDED_ARG`` 
instructions.
This improves dispatch, as the operand is strictly local to the 
instruction.

Using super-instructions would make that the 32 bit format
almost as compact as the 16 bit format, and significantly faster.


We can measure this - how common are EXTENDED_ARG instructions? ISTR we 
checked this when switching to 16-bit instructions and it was worth it, 
but I'm not sure whether we also considered 32-bit instructions at that 
time.
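
One rough way to measure it, sketched with the `dis` module (the 
300-assignment module below is synthetic, purely to force opargs past one 
byte):

```python
import dis

EXTENDED_ARG = dis.opmap["EXTENDED_ARG"]

def count_extended_args(code):
    # Since 3.6's "wordcode", each instruction is two bytes, so the
    # opcode sits at every even offset of co_code.
    return sum(1 for op in code.co_code[::2] if op == EXTENDED_ARG)

# A module with 300 distinct names (and constants) needs opargs beyond
# one byte (>255), so the compiler must emit EXTENDED_ARG prefixes.
src = "\n".join(f"v{i} = {i}" for i in range(300))
code = compile(src, "<synthetic>", "exec")
assert count_extended_args(code) > 0
```

Running the same counter over real code bases would give the actual 
frequency the text asks about.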



Total number of classes in a running interpreter


This limit has the potential to reduce the size of object headers 
considerably.


This would be awesome, and I *think* it's ABI compatible (as the 
affected fields are all put behind the PyObject* that gets returned, 
right?). If so, I think it's worth calling that out in the text, as it's 
not immediately obvious.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/OQ4N23YAVNOYZKHZEPBO37PJFO4XX7HP/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Restricted Entry Point from PEP-551/578

2019-11-21 Thread Steve Dower

On 21Nov2019 1337, Jason Killen wrote:
I'm good, not discouraged.  Thank you for the explanation I've got my 
bearings now.  I will try and figure out what's missing with the new 
config system.  If you have tips or reading material or anything else I 
should know just send it on otherwise I'll start googling.


Googling won't get you too far right now; you won't find much besides 
what's in the docs and PEPs (there are a couple of blog posts, but 
nothing that really adds much colour or detail yet).


FWIW, we added the cpython.run_* events to make it easier to 
handle/block configuration options like "-c" and "-m" without having to 
parse the entire command line. You can see all the events at 
https://docs.python.org/3.8/library/audit_events.html (though some won't 
be raised until 3.8.1... we should probably mark those, or at least 
update that page to warn that events may have been added over time).
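
A minimal sketch of such a hook (recording events rather than blocking, 
purely for illustration; a real PEP 551-style hook would log to a file or 
raise to block an operation):

```python
import sys

seen = []

def audit_hook(event, args):
    # Just record the event names; hooks cannot be removed once added.
    seen.append(event)

sys.addaudithook(audit_hook)

# compile() raises the "compile" audit event.
compile("6 * 7", "<example>", "eval")
assert "compile" in seen
```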


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/7ZD62K2U6VSP6W5JXFFKSYWKFFFHITMF/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Restricted Entry Point from PEP-551/578

2019-11-21 Thread Steve Dower

On 21Nov2019 0927, Jason Killen wrote:
I sent in a couple of PRs, accepted and merged (Thanks!), lately that 
switch to using io.open_code when appropriate.  In the process of making 
those PRs I spent a bit of time reading the two related PEPs.  In 
PEP-551 there's a suggestion that people use a restricted entry point in 
production environments.  I googled around a bit and couldn't find any 
evidence that there was an existing implementation, at least not made 
public, that people were using in a general sense.  So I created a 
branch from my fork and over the last few days have implemented part of 
what's suggested in PEP-551.  Specifically my changes remove most of the 
command line options, ignore envvars (except for a possible logging 
filename for the audit hooks), and registers an audit hook that logs 
everything to the defined envvar when provided or stderr if not.


Hi Jason. Great to see that you're interested!

Right now there isn't an existing implementation, mainly because the 
value is very limited unless you also integrate into other system 
security services. Since CPython runs on so many platforms, there was no 
way for us to support all the possible combinations upstream, so instead 
we made it easy to extend so that distributors can customise it for 
their own platform (including sysadmins who want to customise for their 
own internal setups).


You can watch the presentations I gave on this recently (top few links 
at https://stevedower.id.au/speaking), or read my whitepaper at 
https://aka.ms/sys.audit (which will be rolled into PEP 551 soon).


I have also some samples at https://github.com/zooba/spython (that are 
shown and discussed in the whitepaper).



Now the questions:
1) Does anybody care?  Is anyone currently doing this or planning on 
doing this?


Yes, quite a few people care. It's mostly being discussed with me on 
backchannels right now though, and I haven't even connected everyone 
together yet. If you're interested, I'll include you when I do?


2) Do we want to provide an "official" version of a restricted entry 
point that could be used as-is or easily modified per specific needs?  
Seems kinda silly to make everyone roll their own version but I'm happy 
to yield to the will of the people.


As I said above, an "official" one can only go so far. I'd prefer to see 
the Linux distros have their own official one (e.g. if I'm on RHEL then 
I use Red Hat's restricted entry point, etc.)


3) What's the chance we wanna merge something like this into the 
official master branch?  I accomplished what I wanted to do using a few 
#ifdef's and some funky makefile magic.  I think it would merge easily.  
Maintaining a fork sounds like a lot of work to me.


The fork is very minimal now that the hooks are available in core. I've 
been involved in maintaining a true fork (based on 3.6 without the 
hooks) and *that* is a lot of work, but now that it can all be done from 
the entry-point it's actually pretty simple.



And here's the code:
I'm very open to suggestions.  I basically have no idea what I'm doing.
I haven't touched C in about 7 years so don't expect the Mona Lisa.
https://github.com/python/cpython/compare/master...jsnklln:PEP551_restricted_entry_point


I think this looks almost exactly like what we would merge if we were 
going to merge anything. My concern is that I think if we offer anything 
at all it will discourage people/distros from actually implementing it 
properly for their context, and so we make things worse. Making it easy 
to extend without actually doing it seems like a better place to be.


And I'm totally in favour of publishing ready-to-build samples (again, 
see https://github.com/zooba/spython). I just don't want people to think 
we've "solved" security for them, when there's honestly no way for us to 
do it from here.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/QRIZNJTTL54DE3CYUIGTOHWO2SPBHOJW/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: [RELEASE] Python 3.9.0a1 available for testing

2019-11-20 Thread Steve Dower

On 19Nov2019 1708, Łukasz Langa wrote:

Go get it here: https://www.python.org/downloads/release/python-390a1/ 



Is it intentional that this link does not appear on 
https://www.python.org/download/pre-releases/ ?


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/6C2G5LH7YAEEKPSX7LFJFIDNS6GVOQAH/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Pass the Python thread state to internal C functions

2019-11-14 Thread Steve Dower

On 13Nov2019 1954, Larry Hastings wrote:


On 11/13/19 5:52 AM, Victor Stinner wrote:

On Wed, Nov 13, 2019 at 14:28, Larry Hastings wrote:

I did exactly that in the Gilectomy prototype.  Pulling it out of TLS was too 
slow,

What do you mean? Getting tstate from a TLS was a performance
bottleneck by itself? Reading a TLS variable seems to be quite
efficient.


I'm pretty sure you understand the sentence "Pulling it out of TLS was 
too slow".  At the time CPython used the POSIX APIs for accessing thread 
local storage, and I didn't know about and therefore did not try this 
"__thread" GCC extension.  I do remember trying some other API that was 
purported to be faster--maybe a GCC library function for faster TLS 
access?--but I didn't get that to work either before I gave up on it out 
of frustration.


Also, I dimly recall that I moved several things from globals into the 
ThreadState structure, and probably added one or two of my own.  So 
nearly every function call was referencing ThreadState at one point or 
another.  Passing it as a parameter was a definite win over calling the 
POSIX TLS APIs.


Passing it as a parameter is also a huge win for embedders, as it gets 
very complicated to merge locking/threading models when the host 
application has its own requirements.


Overall, I'm very supportive of passing context through parameters 
rather than implicitly through TLS.


(Though we've got a long way to go before it'll be possible for 
embedders to not be held hostage by CPython's threading model... one 
step at a time! :) )


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/TLMDK7JZQIUWQUUKFHOPNEFQCJKFL5JM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Cleaning the stable ABI [was Accepting PEP 602 -- Annual Release Cycle for Python]

2019-10-30 Thread Steve Dower

On 30Oct2019 1226, Brett Cannon wrote:

* Now that the stable ABI has been cleaned, extension modules should feel more 
comfortable targeting the stable ABI which should make supporting newer 
versions of Python much easier


I'm taking this as an indication that we should finish 
https://bugs.python.org/issue23903 to actually clean the stable ABI and 
make it usable on Windows.


Specifically:

* existing APIs that accidentally shipped in the stable ABI have to 
remain in the stable ABI if they are currently listed in python3.def
* _all_ current stable ABI APIs according to the header files will be 
added to python3.def, whether they were intended to be stable or not 
(with a grace period of however long it takes us to finish issue23903)
* in future, the file will be either auto-generated or validated against 
the actual headers, to ensure it doesn't get out of sync again


Hopefully this won't be as upsetting to people as last time we tried to 
fix it, but if anyone still has concerns about the stable ABI not being 
properly clean, you need to speak up :)


Also, PEP 602 makes no statement about when stable ABI APIs are 
"committed", and nor does PEP 384, so should we assume that the stable 
ABI becomes fixed at beta 1 (or RC 1)? That is, it is not allowed to 
remove or change any stable ABI APIs from beta/RC 1, even if they 
weren't in the previous release? Or will we hold it until the final 
release and allow breaking the stable ABI during prereleases.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/RA7SCIRIS5WJ6AZDFLMNJRWSYIDA2Y4L/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: [RELEASE] Python 3.8.0 is now available

2019-10-15 Thread Steve Dower

On 15Oct2019 1143, MRAB wrote:

On 2019-10-15 19:03, MRAB wrote:

I've installed pywin32 on Python 3.8, but when I try to import
win32clipboard it says it can't find it:

Python 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64
bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
  >>> import win32
  >>> import win32clipboard
Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
ImportError: DLL load failed while importing win32clipboard: The
specified module could not be found.
  >>>

Does anyone else have this problem?

I found the solution: copy pywintypes38.dll and pythoncom38.dll from 
site-packages/pywin32_system32 into site-packages/win32.


The new os.add_dll_directory() function [1] is a way for pywin32 to work 
around this themselves without having to relocate files. The note in the 
doc also explains the cause.
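
A sketch of what that workaround could look like (the helper name is 
hypothetical; `os.add_dll_directory()` only exists on Windows from 3.8, 
hence the guard):

```python
import os
import sys

def add_pywin32_dll_dir(site_packages):
    # Hypothetical helper: register pywin32_system32 as a DLL search path
    # so "import win32clipboard" can resolve pywintypes38.dll and friends
    # without copying files around.
    dll_dir = os.path.join(site_packages, "pywin32_system32")
    if sys.platform == "win32" and hasattr(os, "add_dll_directory"):
        return os.add_dll_directory(dll_dir)
    return None  # not applicable on this platform / Python version
```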


Cheers,
Steve

[1]: https://docs.python.org/3.8/library/os.html#os.add_dll_directory
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/SIXLMFOZ6KZUMVWT2RY5XBMRSMUBFCG5/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 587 (Python Initialization Configuration) updated to be future proof again

2019-09-30 Thread Steve Dower

On 30Sep2019 1625, Nick Coghlan wrote:
After merging your PR and closing mine, I had an idea for Python 3.9: 
what if we offered a separate public "int 
Py_CheckVersionCompatibility(uint64_t header_version)" call? (64 bit 
input rather than 32 to allow for possible future changes to the version 
number formatting scheme)


The basic variant of that API would be what I had in my PR: release 
candidates and final releases allow an inexact match, other releases 
require the hex version to match exactly.


As I posted on the issue tracker, I don't think this kind of version 
verification is necessary unless we're genuinely going to get into the 
business of adding built-in shims for API changes.


Every supported platform is going to make you link against a dynamic 
library with the version number in its name. (Every approach that avoids 
this is not supported or does not support embedding.) So you get 
run-time checking for free, and you can add a compile-time check in 
three or four different ways depending on what makes the most sense for 
your build system.
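
For reference, the `PY_VERSION_HEX` layout such a check would compare, 
decoded here at the Python level (field layout as documented for 
`sys.hexversion`):

```python
import sys

hv = sys.hexversion
major  = (hv >> 24) & 0xFF
minor  = (hv >> 16) & 0xFF
micro  = (hv >> 8) & 0xFF
level  = (hv >> 4) & 0xF   # 0xA alpha, 0xB beta, 0xC candidate, 0xF final
serial = hv & 0xF

# The decoded fields agree with version_info.
assert (major, minor, micro) == tuple(sys.version_info[:3])
```

An "inexact match" for release candidates and final releases would simply 
ignore the `level` and `serial` fields.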


Let's not complicate the embedding API further with unnecessary new 
APIs. I'm keen to design something broadly useful and simpler than what 
we have now, and this is not going to help with it.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/35WNLKQL4DWZZ6MS7OEZEDV7AM2IZFKV/


[Python-Dev] Re: The Python 2 death march

2019-09-26 Thread Steve Dower

On 25Sep2019 2140, Benjamin Peterson wrote:



On Wed, Sep 25, 2019, at 17:25, Rob Cliffe via Python-Dev wrote:

I additionally share the bemusement of some other commentators on this thread to the idea of Python 2 
"support", which is not something ever promised to Python 2 (or 3) users by CPython core developers. 
Essentially, next year, we're changing our "support" policy of Python 2.7 from "none, but we're nice 
people" to "none".

I understand, but I hope that if a clear bug (perhaps especially a
security bug) is found in Python 2.7 (perhaps one that is also in Python
3.x) the core devs will not be in principle opposed to fixing it.  At
least if one of them (or someone else sufficiently qualified) is
prepared to do the work.  Especially as you're "essentially" (and you
ARE :-) -:) ) "such nice people".


Before 2.7.18, sure. After that, in principle and practice, we're opposed.


The biggest thing that will change is that all our CI systems will stop 
testing 2.7, and there's a good chance we'll lock (or delete?) the 2.7 
branch from our repo.


So you may find someone nice enough (or willing enough to accept money 
(or willing to accept enough money)) to fix an issue, but the fix will 
have to go somewhere other than the main repo and someone else will have 
to verify and release it.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/6DINL4MYBCHEVVA6GV44EFJJG2SJ6Y6G/


[Python-Dev] Re: Python for Windows (python-3.7.4.exe) location confusing

2019-09-11 Thread Steve Dower

On 10Sep2019 2120, Jim J. Jewett wrote:

Is it possible for the installer to check whether or not there is a
pre-existing system-wide launcher, and only do the complicated stuff
if it is actually there?


I'm not sure what you're referring to here.

If there's a pre-existing launcher of the same or higher version, it 
won't try to install it and won't ask for administrative privileges. 
This has worked since Python 3.5.


But you mention "only do ... stuff if ... there" - what stuff are you 
referring to? The installer should do *less* if it's partially installed 
already, not more.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/IRULRSHYJUWENLUESGO7ZHIUJPHLJZ6R/


[Python-Dev] Re: Python for Windows (python-3.7.4.exe) location confusing

2019-09-10 Thread Steve Dower

On 09Sep2019 1950, Glenn Linderman wrote:

On 9/9/2019 2:48 AM, Steve Dower wrote:
User with administrative privileges are by implication better able to 
handle decisions such as this. If they are not, they should not be 
administrating a machine. 
Most home machines are administered by people that should not be 
"administrating" a machine.


Correct, and by defaulting to per-user installs we help them avoid 
having to do it.


I've trained the home "administrators" in my family to be skeptical of 
every UAC prompt they deal with, because they _shouldn't_ be 
administrators, as they're not prepared to make machine-wide decisions. 
It's worked out fine. (Unfortunately, Windows S and other ideas for 
further reducing their need to act as administrators have not taken off, 
probably because of the smugness of those who think that because *they* 
can handle full control of their machine, everyone else ought to be able 
to as well.)


Perhaps one day when the Store install overtakes the traditional 
installer in popularity, we can consider adapting the traditional 
installer to assume that the person using it is actually prepared to be 
an administrator. Until then, I'm going to assume the person installing 
Python is representative of our user base, which (on Windows) is *not* 
pre-existing experts.


If that means that pre-existing experts need to make an extra click or 
two, in my mind that's worth it to simplify the experience for those who 
are less experienced, and to avoid further ingraining poor decision 
making into our wide range of users.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/ACJYH7AGQEV5VZPG644VWMNP53UI4ENZ/

