[Python-ideas] Re: Python with braces formal proposal?
On Thu, 7 Jan 2021 at 12:18, Paul Sokolovsky wrote:
> On Thu, 7 Jan 2021 22:59:41 +1100 Chris Angelico wrote:
> > On Thu, Jan 7, 2021 at 9:12 PM Paul Sokolovsky wrote:
> > > Well, I don't write PEPs. I write "pseudoPEPs". Wrote 2 so far: "+=" operator for io.BytesIO/StringIO and "Strict Mode, part 1". More recently, I'm trying to rebrand those as "PycoEPs", for my Pycopy dialect.
> >
> > Can you discuss them on pycopy-ideas then?
>
> No, we first would need to discuss that idea on the python-ideas-ideas list.

Why? You've just explicitly said this is an idea for Pycopy, not for Python. Therefore posting here is essentially just wasting the time of everyone who is not interested in Pycopy (likely the vast majority of subscribers to this list).

Paul
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/V4OZGTINIRKGH27KR5Q5QX5BJN7P3YPF/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Support reversed(itertools.chain(x, y, z))
On Sat, 9 Jan 2021 at 13:29, Oscar Benjamin wrote:
> The argument to reversed either needs to be a sequence with __len__ and __getitem__, or an object with a __reversed__ method that returns an iterator. The arguments to chain have to be iterables. Every sequence is an iterable, so there is a significant intersection between the possible inputs to chain and reversed. Also, some non-sequences such as dict can work with reversed.
>
> You say it's hard to see how it could be made to work, but you've shown precisely how it can already be done above:
>
> reversed(chain(*args)) == chain(*map(reversed, reversed(args)))
>
> We can try that out and it certainly seems to work:
>
> >>> from itertools import chain
> >>> args = [[1, 2], [3, 4]]
> >>> list(chain(*args))
> [1, 2, 3, 4]
> >>> list(chain(*map(reversed, reversed(args))))
> [4, 3, 2, 1]
>
> This wouldn't work with chain.from_iterable without preconsuming the top-level iterable, but in the case of chain the iterables are already in a *args tuple, so flipping that order is always possible in a lazy way. That means the operation works fine if each arg in args is reversible. Otherwise, if any arg is not reversible, it should give a TypeError just like reversed(set()) does, except the error would potentially be delayed if some of the args are reversible and some are not.
>
> I haven't ever wanted to reverse a chain, but I have wanted to be able to reverse an enumerate many times:
>
> >>> reversed(enumerate([1, 2, 3]))
> ...
> TypeError
>
> The alternative zip(range(len(obj)-1, -1, -1), reversed(obj)) is fairly cryptic in comparison, as well as probably being less efficient. There could be a __reversed__ method for enumerate with the same caveat as for chain: if the underlying object is not reversible then you get a TypeError. Otherwise reversed(enumerate(seq)) works fine for any sequence seq.
> The thornier issue is how to handle reversed if the chain/enumerate iterator has already been partially consumed. If it's possible just to give an error in that case, then reversed could still be useful in the common case.

I think you're about right here - both chain and enumerate could reasonably be expected to be reversible. There are some fiddly edge cases, and some potentially weird situations (as soon as we assume no-one would ever expect to reverse a partially consumed iterator, I bet someone will...) which probably warrant no more than "don't do that then", but will end up being the subject of questions/confusion.

The question is whether the change is worth the cost. For me:

1. enumerate is probably more important than chain. I use enumerate a *lot* and I've very rarely used chain.
2. Consistency is a benefit - as we've already seen, people assume things work by analogy with other cases, and waste time when they don't.
3. How easy it is to write your own matters. If chain or enumerate objects exposed the iterables they were based on, you could write your own reverser more easily.
4. How problematic are the workarounds? reversed(list(some_iter)) works fine - is turning the iterator into a concrete list that much of an issue?

And of course, the key point - how often do people want to do this anyway?

If someone wants to do the work to implement this, I would say go for it - raise a bpo issue and create a PR, and see what the response is. Getting "community support" via this list is probably not crucial for something like this. It's more of a quality-of-life change than a big feature.

Paul
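The semantics discussed above can be prototyped today with small wrapper functions. The names `reversed_chain` and `reversed_enumerate` are invented for illustration - nothing like them exists in itertools:

```python
# Prototype of the behaviour discussed above. Like reversed(set()), these
# raise TypeError if an underlying iterable is not reversible.
from itertools import chain


def reversed_chain(*iterables):
    # Uses the identity from the thread:
    # reversed(chain(*args)) == chain(*map(reversed, reversed(args)))
    return chain(*map(reversed, reversed(iterables)))


def reversed_enumerate(seq, start=0):
    # Pair each item with its index, yielded in reverse order.
    # Needs a sized, reversible argument (any sequence works).
    n = len(seq)
    return zip(range(start + n - 1, start - 1, -1), reversed(seq))


print(list(reversed_chain([1, 2], [3, 4])))       # -> [4, 3, 2, 1]
print(list(reversed_enumerate(["a", "b", "c"])))  # -> [(2, 'c'), (1, 'b'), (0, 'a')]
```

Both helpers are lazy, matching the "flip the *args tuple" approach described in the message; only the TypeError for non-reversible arguments is eager in `reversed_enumerate` (via `len`).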
[Python-ideas] Re: timeit improvement idea: add an option to measure a script execution time
On 2021-01-10 at 18:38:12 +0100, Alex Prengère wrote:
> 3. Use timeit. The scripts have no side effects, so repeating their execution the way timeit does works for me. The only issue is that, as far as I know, timeit only allows statements as input parameters, not the whole script, like for example:
> $ python -m timeit --script script.py

There's always

py -m timeit -s "from pathlib import Path; data = Path('your_script.py').read_text()" "exec(data, globals())"

Paul
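The same trick can be done programmatically via `timeit.timeit`'s `globals` parameter. The script name and body below are placeholders for demonstration:

```python
# Programmatic version of the command-line trick above: read the script
# source once, then time repeated exec() runs of it. A fresh dict is used
# as the exec namespace so each run starts from a clean global scope.
import tempfile
import timeit
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    script = Path(tmp) / "your_script.py"          # placeholder demo script
    script.write_text("total = sum(range(1000))\n")

    data = script.read_text()
    seconds = timeit.timeit("exec(data, {})", number=100,
                            globals={"data": data})
    print(f"100 runs took {seconds:.4f}s")
```

Note the caveat that applies to both versions: module imports inside the script are only resolved from disk on the first run, so this measures warm-cache behaviour.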
[Python-ideas] Re: timeit improvement idea: add an option to measure a script execution time
On Sun, 10 Jan 2021 at 19:24, Chris Angelico wrote:
> On Mon, Jan 11, 2021 at 6:06 AM Paul Moore wrote:
> > On 2021-01-10 at 18:38:12 +0100, Alex Prengère wrote:
> > > 3. Use timeit. The scripts have no side effects, so repeating their execution the way timeit does works for me. The only issue is that, as far as I know, timeit only allows statements as input parameters, not the whole script, like for example:
> > > $ python -m timeit --script script.py
> >
> > There's always
> >
> > py -m timeit -s "from pathlib import Path; data = Path('your_script.py').read_text()" "exec(data, globals())"
>
> Depending on what's being measured, that might not be an accurate measurement. After the first execution, all imports will be resolved out of sys.modules.

Obviously, yes. The point here is mainly that there are a few ways of doing what the OP asked, and which is better depends on precisely what they want (which they didn't state in detail).

As regards the suggestion of adding this functionality to the timeit module, the fact that there *are* multiple options with different trade-offs is precisely why building one particular choice in as "the way to do it" is probably a mistake.

Paul
[Python-ideas] Re: Additional LRU cache introspection facilities
On Tue, 12 Jan 2021 at 17:04, Steven D'Aprano wrote:
> My use-case is debugging functions that are using an LRU cache, specifically complex recursive functions. I have some functions where:
>
> f(N)
>
> ends up calling itself many times, but not in any obvious pattern:
>
> f(N-1), f(N-2), f(N-5), f(N-7), f(N-12), f(N-15), f(N-22), ...
>
> for example. So each call to f() could make dozens of recursive calls, if N is big enough, and there are gaps in the calls.
>
> I was having trouble with the function, and couldn't tell if the right arguments were going into the cache. What I wanted to do was peek at the cache, see which keys were ending up in it, and compare that to what I expected.
>
> I did end up getting the function working, but I think it would have been much easier if I could have seen what was inside the cache and how the cache was changing from one call to the next.
>
> So this is why I don't care about performance (within reason). My use case is interactive debugging.

For debugging, could you not temporarily write your own brain-dead simple cache?

CACHE = {}

def f(n):
    if n not in CACHE:
        result = ...  # calculate the result
        CACHE[n] = result
    return CACHE[n]

(make it a decorator, if you feel like). You can then inspect and instrument as much as you like, and once you've got things working, replace the hand-written cache with the stdlib one.

Personally, I don't use lru_cache enough to really have an opinion on this. My first reaction was "that would be neat", but a quick look at the code (the Python version, I didn't go near the C version) and Serhiy's comments made me think that "it's neat" isn't enough justification for me, at least. In practice, I'd just do as I described above, if I needed to.
Paul
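The "make it a decorator" suggestion above can be sketched like this - purely a debugging aid, not an LRU (it never evicts anything), with the cache dict exposed as an attribute so it can be inspected between calls:

```python
# Decorator form of the hand-written cache sketched above. The cache is
# attached to the wrapper so it can be peeked at interactively; swap in
# functools.lru_cache once the function is working.
import functools


def inspectable_cache(func):
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]

    wrapper.cache = cache  # exposed for interactive inspection
    return wrapper


@inspectable_cache
def f(n):
    return n * n


f(3)
f(4)
print(sorted(f.cache))  # which argument tuples ended up in the cache?
```

Keys are the positional-argument tuples, so after the two calls above `f.cache` holds `{(3,): 9, (4,): 16}` - exactly the "peek at the cache between calls" workflow described in the message.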
[Python-ideas] Re: Additional LRU cache introspection facilities
On Tue, 12 Jan 2021 at 17:16, Christopher Barker wrote:
> Is the implementation of lru_cache too opaque to poke into it without an existing method? Or write a quick monkey-patch?
>
> Sorry for not checking myself, but the ability to do that kind of thing is one of the great things about a dynamic open source language.

I've only looked at the Python implementation, but the cache is a local variable in the wrapper, unavailable from outside. The cache information function is a closure that references that local variable, assigned as an attribute of the wrapped function. It's about as private as it's possible to get in Python (not particularly because it's *intended* to hide anything, as far as I can tell - more likely for performance or some other implementation reason).

Paul
[Python-ideas] Re: Additional LRU cache introspection facilities
On Tue, 12 Jan 2021 at 19:28, Sebastian Kreft wrote:
> Actually you can get the cache data from the Python version; for that you need to force the use of the Python version of functools

"as private as you can get" != "private" :-) I had a feeling there would be a way to do it, but wasn't bothered to work out exactly how. Thanks for the example.

Paul
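For the record, the kind of approach Sebastian described can be sketched roughly as below: block the `_functools` C accelerator so the pure-Python `lru_cache` is used, then dig the cache dict out of the `cache_info` closure. This leans entirely on CPython implementation details of functools.py (in particular, that `cache_info` closes over `cache_len = cache.__len__`) and could break in any release:

```python
# Fragile, debugging-only sketch: inspect the keys inside an lru_cache.
import sys

# Make "from _functools import _lru_cache_wrapper" fail inside functools,
# so the pure-Python wrapper is used instead of the C one.
sys.modules["_functools"] = None
sys.modules.pop("functools", None)  # force a fresh import of functools.py
import functools


@functools.lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)


fib(10)

# cache_info() is a closure over (among other things) cache_len, which is
# the bound method cache.__len__; its __self__ is the cache dict itself.
cells = fib.cache_info.__closure__
cache = next(
    c.cell_contents.__self__
    for c in cells
    if getattr(c.cell_contents, "__name__", "") == "__len__"
)
print(sorted(cache))  # the keys that actually ended up in the cache
```

With a single int argument the pure-Python `_make_key` stores the bare int, so after `fib(10)` the cache keys are 0 through 10 - which is exactly the "did the right arguments go into the cache?" question from Steven's original post.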
[Python-ideas] Re: Changing the default text encoding of pathlib
On Mon, 25 Jan 2021 at 20:02, Christopher Barker wrote:
> using a system setting as a default is a really bad idea in this day of interconnected computers.

I'd mildly dispute this. There are (significant) downsides with the default behaviour being system-dependent, yes, but there are *also* disadvantages in having Python not behave consistently with other tools/programs on the same system.

However, on POSIX, things are generally consistent, and *already* default to UTF-8. So the proposal is mostly going to affect Windows. And on Windows, there's not much consistency even on a single machine at the moment. Between OEM and ANSI codepages, and other tools that default to UTF-8 "because that's the future", there's not much platform consistency for Python to conform to anyway...

> But back to PEP 597, and how to get there:
>
> 1) We need to start with a consensus about where we want Python to be in N versions. That is not specifically laid out in the PEP, but it does imply that sometime-long-in-the-future:
>
> - TextIOWrapper will have utf-8 as the default, rather than `locale.getpreferredencoding(False)`; this behaviour will then be inherited by:
>   - `open()` without a binary flag in the mode
>   - `Path.read_text`
> - there will be a string that can be passed to encoding that will indicate that the system default should be used.
>
> (and any other utility functions that use TextIOWrapper)
>
> Forgive me if there is already a consensus on this - but this discussion has brought up some thoughts.

There's a fundamental assumption here that I think needs to be made explicit: we're assuming that, whatever N happens to be, `locale.getpreferredencoding(False)` will still be something other than UTF-8.
That's *already* false on most POSIX systems, and TBH I get the impression that Microsoft is pushing quite hard to move Windows 10 to a UTF-8-by-default position (although "fast" in Microsoft terms may still be slow to the rest of us ;-))

So I think that the real question here is: do we want to move Python to UTF-8-by-default faster than the OS vendors are going? And I think that the answer to that is much less obvious. It probably also depends heavily on your locale - I doubt it's an accident that Inada-san is proposing this, and he's from Japan :-) Personally, as an English speaker based in the UK, I'll be happy when UTF-8 is the default everywhere, but I can live with the status quo until that happens. But I'm not the main target for this change.

> 1) As TextIOWrapper is an "implementation detail" for most Python developers, maybe it shouldn't have a default encoding at all, and leave the default implementation(s) up to the helper functions, like open() and Path.read_text() - that would mean changes in more places, but would allow different utility functions to make different choices.

*shrug*. That sounds plausible, but it's a backward compatibility break that doesn't offer any significant benefits, so I suspect it's not worth doing in practice.

> 2) Inada proposed an open_text() function be introduced as a stepping stone, with the new behaviour. This led to one person asking if that would imply an open_binary() function as well. An answer to that was no - as no one is suggesting any changes to open()'s behavior for binary files. However, I kind of like the idea. We now have (at least) two different file objects potentially returned by open(): TextIOWrapper, and BufferedReader/Writer. And the TextIOWrapper has some pretty different behavior.
> I *think* that in virtually all cases, when the code is written, the author knows whether they want a binary or text file, so it may make sense to have two different open() functions, rather than having the type returned be a function of what mode flags are passed.
>
> This would make it easier for people (and tools) to reason about the code with static analysis, e.g.:
>
> open_text().read() would return a string
> open_binary().read() would return bytes

These are good arguments for having explicit open_text and open_binary functions. I don't *like* the idea, because they feel unnecessarily verbose to me, but I can accept that this might just be because I'm used to open(). I do think that having open_text, but *not* having open_binary, would be a bit confusing - particularly as pathlib has read_text and read_bytes, so it would be inconsistent as well.

> This would also make the path to a future with different defaults smoother - plain "open" gets deprecated; any new code uses one of the open_* functions, and that new code will never need to be changed again.
>
> Back in the day, a single open() function made more sense. After all, the only difference in the result for binary mode was that linefeed translation was turned off (and the C legacy of course). In fact, this did lead to errors, when folks accidentally left off the 'b', and tested only
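The open_text()/open_binary() split discussed above - hypothetical names from the thread, not real stdlib functions - could be sketched as thin wrappers over open(), with open_text defaulting to UTF-8 rather than the locale's preferred encoding:

```python
# Sketch of the hypothetical open_text()/open_binary() helpers discussed
# above. Each helper's read() has a single, statically known return type:
# open_text(...).read() -> str, open_binary(...).read() -> bytes.


def open_text(file, mode="r", *, encoding="utf-8", **kwargs):
    """Open in text mode; UTF-8 by default instead of the locale encoding."""
    if "b" in mode:
        raise ValueError("open_text() does not accept binary modes")
    return open(file, mode, encoding=encoding, **kwargs)


def open_binary(file, mode="rb", **kwargs):
    """Open in binary mode; no encoding or newline translation applies."""
    if "b" not in mode:
        mode += "b"
    return open(file, mode, **kwargs)
```

Because the return type no longer depends on a mode-string flag, a type checker could give the two functions distinct, precise signatures (`IO[str]` vs `IO[bytes]`), which is the static-analysis benefit Christopher describes.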
[Python-ideas] Re: Make UTF-8 mode more accessible for Windows users.
On Tue, 9 Feb 2021 at 17:32, Inada Naoki wrote:
> On Tue, Feb 9, 2021 at 7:42 PM M.-A. Lemburg wrote:
> > Here's a good blog post about setting env vars on Windows:
> >
> > https://www.dowdandassociates.com/blog/content/howto-set-an-environment-variable-in-windows-command-line-and-registry/
> >
> > It's not really much harder than on Unix platforms.
>
> But it affects all Python installs. Can teachers recommend setting the PYTHONUTF8 environment variable for students?

Why is that an issue? In the first instance, do the sorts of "beginner" we're discussing here have multiple Python installs? Would they need per-interpreter configuration of UTF-8 mode?

Honestly, I find it far harder to configure environment variables on Unix (I have to do it per *shell*, for a start). Windows users don't often set environment variables, because Windows-native applications often use other means to determine their configuration - but it's not because the user *can't* set environment variables, or because it's "too hard".

> > The only catch is that Windows users will often not know about such env vars or how to use them, because on Windows you typically set up your configuration via the application and using the registry.
> >
> > Perhaps we could have both: an env var to enable UTF-8 mode and a registry key set by the installer.
>
> I don't want to recommend env vars and registry for conda and portable Python users...

I'm not sure what you mean here. Why is this different from (say) PYTHONPATH? How would conda and portable Python users configure PYTHONPATH? Why is UTF-8 mode any different?

Paul
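Whichever mechanism ends up being recommended (env var, registry key, installer option), code can at least verify whether UTF-8 mode actually took effect - `sys.flags.utf8_mode` has existed since Python 3.7, when PEP 540 introduced the mode:

```python
# Quick check of the current UTF-8 mode state. sys.flags.utf8_mode is 1
# when the mode is enabled (via "-X utf8" or PYTHONUTF8=1), 0 otherwise.
import locale
import sys

print("UTF-8 mode enabled:", bool(sys.flags.utf8_mode))
print("locale-preferred encoding:", locale.getpreferredencoding(False))
```

Running this once with and once without `PYTHONUTF8=1` set shows exactly what the environment variable changes on a given machine.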
[Python-ideas] Re: Make UTF-8 mode more accessible for Windows users.
On Wed, 10 Feb 2021 at 07:14, Christopher Barker wrote:
> On Tue, Feb 9, 2021 at 1:04 PM Paul Moore wrote:
> > Why is that an issue? In the first instance, do the sorts of "beginner" we're discussing here have multiple Python installs? Would they need per-interpreter configuration of UTF-8 mode?
>
> Yes - many, many tutorials, particularly about web frameworks, start with "make a new virtual environment". To the point that many of my students have thought that was a requirement to use, e.g., Flask.

So get PYTHONUTF8 added to the environment activate script. That's a simple change to venv. And virtualenv, and conda - yes, it needs to happen in multiple places, but that's still easier IMO than proposing a change to Python's already complex (and slower than many of us would like) startup process.

> Personally, I do not start out with environments with my beginning students - they really only need one at the early stages. But other instructors do.
>
> Others have to work with a locked-down system provided by their employer that might be an older version of Python, or need some particular configuration that they don't want to override.
>
> And all the examples given here of how to set environment variables and shortcuts, etc. on Windows are EXACTLY the kind of information I don't want to have to provide for my students :-( - I'm teaching Python, not Windows administration.

So teach Python as it actually is, surely? If you teach people how to use "Python-with-UTF8-mode", won't they struggle when introduced to the real world where UTF-8 mode isn't set? Won't they assume the default encoding for open() is UTF-8, and be confused when they are wrong?

Yes, I know your job as an instructor is to omit confusing details, and UTF-8 mode would help with that. I get that. But that's just one case. And anyway, would you not have to explain how to set UTF-8 mode for the training environment one way or another anyway?
Sure, you may not have to explain how to set an environment variable, but you have to explain how to configure an ini file instead. Unless UTF-8 mode is the default, you have to explain how to configure the training environment one way or another - unless you provide a pre-packaged environment (in which case we're back to "why not just set an env variable?").

> > > I don't want to recommend env vars and registry for conda and portable Python users...
>
> and a lot of newbies learning Python for data science are starting out with conda as well...

So conda could set UTF-8 mode with "conda env --new --utf8". No changes to core Python interpreter startup needed.

> > I'm not sure what you mean here. Why is this different from (say) PYTHONPATH? How would conda and portable Python users configure PYTHONPATH? Why is UTF-8 mode any different?
>
> It's not - using PYTHONPATH is a "bad idea"; I never recommend it to anyone. It was a nightmare when folks had Python 2 and 3 on the same machine, but now, in the age of environments, it's still a really bad idea.

Sure, PYTHONPATH was just an example. Environment variables are how you configure Python in many ways. I'm asking why UTF-8 mode is so special that it needs a different configuration mechanism than every other setting for Python.

> It's really important to support configuration per environment these days. Ideally with any of the "environment" tools.

That's a completely different discussion, and as you stated it, it doesn't just apply to UTF-8 mode. It should be a different thread. And my immediate answer would be that you can do this by changing the activation scripts. Yes, that means each environment tool needs to be updated individually, but that would be a reasonable start. If the feature proves important, it could later be migrated into a core feature.
Paul
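The "change the activation scripts" idea above can be sketched as follows on POSIX ("demo-env" is a throwaway name; on Windows the equivalent would append `set PYTHONUTF8=1` to `demo-env\Scripts\activate.bat`):

```shell
# Create a venv and make its activation script enable UTF-8 mode.
# --without-pip just keeps the demo environment small and fast to create.
python3 -m venv --without-pip demo-env
echo 'export PYTHONUTF8=1' >> demo-env/bin/activate

# Any shell that activates this environment now runs Python in UTF-8 mode:
. demo-env/bin/activate
python -c 'import sys; print("utf8_mode =", sys.flags.utf8_mode)'
```

This gives per-environment UTF-8 configuration without touching the interpreter's startup sequence, which is exactly the trade-off argued for in the message.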
[Python-ideas] Re: Make UTF-8 mode more accessible for Windows users.
On Wed, 10 Feb 2021 at 11:01, Inada Naoki wrote:
> On Wed, Feb 10, 2021 at 5:33 PM Paul Moore wrote:
> > So get PYTHONUTF8 added to the environment activate script. That's a simple change to venv. And virtualenv, and conda - yes, it needs to happen in multiple places, but that's still easier IMO than proposing a change to Python's already complex (and slower than many of us would like) startup process.
>
> I am not sure this idea works fine. Is the activate script always called when venv is used on Windows?
>
> When I use venv on Unix, I often just execute .venv/bin/some-script without activating the venv.

So in your training course, tell users to activate the environment. Experienced users (like you) who can run scripts directly aren't the target of this change, are they? This is one of the frustrating points here: I'm not clear who the target is. When I say it wouldn't help me, I'm told I'm not the target. When I suggest an alternative, it apparently isn't useful because it wouldn't work for you...

> Students may need to learn about encoding at some point. But when they learn "how to read/write a file" for the first time, they don't need to know what an encoding is.

Agreed.

> VSCode, Notepad, and PyCharm use UTF-8 by default. Students don't need to learn how to use an encoding other than UTF-8 until they really need it.

If they only use ASCII files and a system codepage that is the same as ASCII for the first 127 characters, then it's irrelevant. If they read data from a legacy system, that is quite likely to be in the system codepage (most of the local files I use at work, for example, are not UTF-8). So I'd say that many students don't need to learn how to use *any* encoding until they need it. But I'm not a professional trainer, so my experience is limited.

> We can add an "Enable UTF-8 mode" checkbox to the installer. And we can have an "Enable UTF-8 mode" tool in the start menu. So students don't need to edit the ini file manually.
Those options could set the environment variable. After all, that's what "Add Python to PATH" does, and people seem OK with that. No need for an ini file (which adds an extra file read to the startup time, as has already been mentioned as a downside).

> The problem is: should we recommend enabling UTF-8 mode globally by setting an environment variable, or provide a per-site UTF-8 mode setting?

What precisely do you mean by "per site"? Do you mean "per Python interpreter"? Do you view separate virtual environments as "sites"? Again, I don't understand who the target audience is here.

> > > > > I don't want to recommend env vars and registry for conda and portable Python users...
> > > >
> > > > and a lot of newbies learning Python for data science are starting out with conda as well...
> > >
> > > So conda could set UTF-8 mode with "conda env --new --utf8". No changes to core Python interpreter startup needed.
>
> They may not want to promote UTF-8 mode until official Python promotes UTF-8 mode. So I think venv should support UTF-8 mode first.

That's fair enough, although I'd like to point out the parallel here: you're saying "environment tools might not want to make UTF-8 the default until Python does". I'm saying "Python might not want to make UTF-8 the default until the OS does". I'm not completely sure why your argument is stronger than mine :-)

> Because it solves many real-world problems that many Windows users suffer from.

OK. My experience differs, but that's fine. But why wasn't this a consideration when UTF-8 mode was first designed? At that point, an interpreter flag and an environment variable were considered sufficient. Why is that no longer true? Is it because the initial design of UTF-8 mode ignored Windows? Why, if this is such a Windows-specific problem?

Sigh. To be honest, I don't have the time (or the interest) to go back over all the history here.
I think I'm just going to have to drop this discussion and wait to comment when a concrete proposal is put forward. PEP 597 is the only actual PEP on the table at the moment; everything else is just speculation, and I really can't keep up with the volume of discussion in the various threads.

Paul
[Python-ideas] Re: Make UTF-8 mode more accessible for Windows users.
On Wed, 10 Feb 2021 at 13:31, Inada Naoki wrote:
> I'm sorry about it. I have not chosen an actual implementation yet, so I cannot write a concrete PEP yet.

It's not a problem. I appreciate all of the time you're putting into considering the responses and keeping the discussion going. (And please don't think I was criticising the decision over UTF-8 mode - I genuinely didn't know the background, and "it was targeted at server environments" answers that question for me.)

I'm dropping out of the discussion because I can't afford the time to make sure I'm not forcing you to go over things that have already been discussed, and I don't want to waste your time by doing that. But I await the results of the discussion with interest :-)

Paul
[Python-ideas] Re: Alternate lambda syntax
On Thu, 11 Feb 2021 at 15:09, J. Pic wrote:
> I think you also need return, and "two spaces, pound, space, noqa" to pass linters:
>
> def foo(): return 1  # noqa
>
> Instead of
>
> foo = lambda: 1
>
> And this proposal:
>
> foo(): 1
>
> The benefit is just to get more out of the 80 characters when we want to define a short callback.
>
> I understand this is not a life-changing proposal, but was wondering if it was "nice enough" to be worth proposing.

Honestly, I doubt it. If you look at the history on this list, you'll find a lot of proposals whose main justification was "convenience" or "it's shorter". They rarely get very far.

Paul
[Python-ideas] Re: Alternate lambda syntax
On Fri, 12 Feb 2021 at 09:26, Abdulla Al Kathiri wrote:
> I actually like "(x, y=7) => x + y" and "async (x, y) => asyncio.sleep(x + y)" for normal and async anonymous functions respectively.

I think it's a reasonable syntax, although it could be over-used. That's not an issue with the syntax, though - *anything* can be over-used :-) Experience with similar syntax in other languages suggests this would be useful.

> I have a natural aversion to the word lambda for some reason.

It does seem to cause people (including me!) a lot more problems than one would expect, on the face of it. Maybe because it's not a common term outside of computer science? But the endless debates over alternative keywords have never come up with anything else that people can agree on.

> The normal anon function has "return" implicitly, and the async anon function has "return" implicitly as well. Usage:
>
> f1 = (x, y=7) => x + y
> f1(3)  # outputs 10
>
> f2 = async (x, y) => asyncio.sleep(x + y)
>
> async def main():
>     coro = f2(3, 7)
>     await coro
>     return "waited 10 seconds"
>
> result = asyncio.run(main())
> print(result)  # outputs "waited 10 seconds"

I'm not sure what the use cases would be for an async lambda - the key is that it's not named, so the above isn't a good example, as it's just as easy to write

async def f2(x, y):
    return asyncio.sleep(x + y)

(Excuse any errors here, I'm not that familiar with asyncio.)

If there are good use cases, I think it makes sense to include async lambda in any proposal. I'm not saying that it isn't useful, just that it doesn't have all the history that (non-async) lambda does, so the argument in favour of async lambda isn't as strong. Use cases are key - adding it "just because we can" would just make the proposal more controversial, and then the whole thing could collapse over something secondary.

Ultimately, this would need someone to write a PEP.
Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/7WIDJHIDIQTPZNF2DCTF7D4C3UFC3XLY/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Alternate lambda syntax
Fair enough. Whoever writes a PEP for this will need to do the relevant research and present the arguments in detail. But until someone's ready to write that PEP, we can continue discussing on the assumption that if someone finds the async version useful, they'll speak up. Paul On Fri, 12 Feb 2021 at 11:24, Abdulla Al Kathiri wrote: > > > I am not that familiar with asyncio either. I only wrote a few utility > scripts that runs concurrent subprocesses mixed with some blocking functions > running in concurrent.ProcessPoolExecutor pool (using > asyncio.run_in_executor). That is all what I did with regard asyncio. Your > function f2 and my function f2 could be actually normal functions. > f2 = (x, y) => asyncio.sleep(x + y). f2(3, 7) will just return a coroutine > that can be awaited on just fine. To be honest, i don’t even know what the > purpose would be with async lambda unless someone with more experience can > give us a use case. My guess is that it may be useful to use it as an > argument to another async function. > > Since we can write shortened normal functions (lambda), shortened generator > function (lambda with generator expression), people might ask why Python > doesn’t have shortened async function? But maybe that is not a good question > to begin with? > Lambda reminds me of the half life time of isotopes. The other day, I was > struggling to teach this to my cousin in elementary school. I just told him > to imagine it like def _(args): return something. Assign it to a variable and > that variable replaces the underscore. He got it but he found it weird. > Sent from my iPhone > > > On 12 Feb 2021, at 1:41 PM, Paul Moore wrote: > > > > I'm not sure what the use cases would be for an async lambda - the key > > is that it's not named, so the above isn't a good example as it's just > > as easy to write > > > > async def f2: > >return asyncio.sleep(x+y) > > > > (Excuse any errors here, I'm not that familiar with asyncio). 
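The observation in the quoted message can be checked directly: an *ordinary* lambda can already return a coroutine object, which awaits fine, so an async lambda would only be needed if the body itself had to `await`. A minimal sketch:

```python
import asyncio

# A normal (non-async) lambda whose body returns a coroutine:
f2 = lambda x, y: asyncio.sleep(x + y)  # noqa: E731

async def main():
    await f2(0, 0)   # sleep for 0 seconds, just to show the await works
    return "done"

result = asyncio.run(main())
assert result == "done"
```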
___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/6VCPPUJQ6ZL2PAKIC3JMSEZE6VTDR6V5/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Alternate lambda syntax
On Tue, 16 Feb 2021 at 16:40, Ned Batchelder wrote: > > "lambda" is unnecessarily obscure. > > Beginner: "why is it called lambda?" > > Teacher: "Don't worry about it, just use it to define a function" > > I'm not taking a side on whether to change Python, but let's please not > lose sight of just how opaque the word "lambda" is. People who know the > background of lambda can easily understand using a different word. > People who don't know the background are presented with a "magic word" > with no meaning. That's not good UI. Agreed. When lambda was introduced, "anonymous functions" were not as common in programming, and the most obvious example of their usage was in lisp, where "lambda" was the accepted term. Since then, lisp has not gained much additional popularity, but anonymous functions have appeared in a number of mainstream languages. The syntax is typically some form of "a, b -> a+b" style, and *never* uses the term "lambda". So someone coming to Python with any familiarity with other languages will now find Python's form atypical and obscure. People coming with no experience of other languages will need to have a history lesson to understand why the term is "lambda" rather than "something more obvious". People can, and will, learn Python's syntax. This isn't a major disaster. But if this were a new feature, we'd not be having this discussion, and "lambda" wouldn't even be a consideration. I'm also not taking a side on whether a change is worth the disruption. Personally, I prefer the arrow syntax, but we're not making a decision in a vacuum here. Someone has to make a case (probably in a PEP) if this is going to change, and the trade-offs should be clarified there. 
Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/M25C6O55JCYLFOUW7DPPFGAETVOGN6R5/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Arrow functions polyfill
On Tue, 23 Feb 2021 at 09:25, Steven D'Aprano wrote: > > On Mon, Feb 15, 2021 at 09:29:45AM -0800, Guido van Rossum wrote: > > > Proposals like this always baffle me. Python already has both anonymous and > > named functions. They are spelled with 'lambda' and 'def', respectively. > > What good would it do us to create an alternate spelling for 'def'? > [...] > > > I think that the desire probably comes from thinking of short > mathematical functions where the body is a simple expression, like in > maths: [...] > So there is definitely some aesthetic advantage to the arrow if you're > used to maths notation, and if Python had it, I'd use it. Yes, this is precisely my instinct as well. In addition, for whatever reason (and it's purely subjective, I won't try to defend it) I find "lambda" as a keyword somewhat alien in Python, where most other keywords are short terms or abbreviations of terms (and the terms are mostly non-technical English words - "with", "for", "def(ine)", "class", ...). So every time I use a lambda expression, it feels mildly unnatural to me, and I look for a "better way" of expressing my intent. But having a mathematical background, functional and comprehension styles feel natural to me, which introduces a tension - so, for example, I recently wrote some code that included the following:

    squares = set(takewhile(lambda sq: sq <= 2 * end, map(lambda x: x * x, count(1))))

A procedural version would be

    squares = set()
    x = 1
    while True:
        sq = x * x
        if sq > 2 * end:
            break
        squares.add(sq)
        x += 1

and a *lot* of people would say this is easier to read, but IMO, both should have a comment explaining what "squares" is, and the itertools-based version took me less time to write, and ensure I'd got it correct.
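The two versions above can be checked against each other. A runnable sketch, with `end = 50` chosen arbitrarily for illustration:

```python
from itertools import count, takewhile

end = 50  # arbitrary value, purely for illustration

# itertools/lambda version, starting the count at 1 as in the
# procedural version:
squares_functional = set(
    takewhile(lambda sq: sq <= 2 * end, map(lambda x: x * x, count(1)))
)

# procedural version:
squares_procedural = set()
x = 1
while True:
    sq = x * x
    if sq > 2 * end:
        break
    squares_procedural.add(sq)
    x += 1

assert squares_functional == squares_procedural
assert max(squares_functional) <= 2 * end
```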
I'd find a version using "arrow notation"

    squares = set(takewhile(sq => (sq <= 2 * end), map(x => x * x, count(1))))

easier to read than the lambda version (personally I prefer => over ->, but that's a detail) even though I needed to add parentheses to (sq <= 2 * end) to make it clear what was the expression and what was the argument. Honestly, none of these scream to me "the set of squares less than end * 2", though, so it's difficult to say any one of them is the "obvious" choice. But I'd probably order them as =>, lambda, procedural (from best to worst). This is when writing that code in a Jupyter notebook as part of some exploratory work. If this were production-level code where others would be maintaining it, I'd write a standalone function that wrapped the procedural code, give it a name and a docstring, and add some tests to it. But not all code is like that (not even all "important" code) so sometimes the one-liners make sense. > But it doesn't scale up to multi-statement functions, and doesn't bring > any new functionality into the language, so I'm not convinced that > its worth adding as a mere synonym for def or lambda or both. Yep, it's a difficult call. I suspect that if Python had been developed now, it would have followed the trend of using arrow notation. But adding it as a synonym for the existing lambda is nowhere near as obvious a choice. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/F4RPU5KZPNP3A6JWMORL4VC5TBTZJQMZ/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Arrow functions polyfill
On Tue, 23 Feb 2021 at 14:10, M.-A. Lemburg wrote: > > The natural way in Python to write an anonymous function would > be to simply drop the name in a regular function definition: > > def (a): return a**2 > > The lambda notation is less verbose and closer to computer > science theory, though: > > lambda a: a**2 > > FWIW: I don't understand why people are so unhappy with lambdas. > There isn't all that much use for lambdas in Python anyway. Most > of the time, a named function will result in more readable code. Typically because they are simple expressions like the a**2 you used above.

    def a_squared(a): return a**2

is way over the top. Thinking about it, maybe the *real* solution here is to use one of the "placeholder variable" libraries on PyPI - there's "placeholder" which I found on a quick search:

    from placeholder import _  # single underscore

    _.age < 18     # lambda obj: obj.age < 18
    _[key] ** 2    # lambda obj: obj[key] ** 2

Some people will hate this sort of thing - probably the same people who can't see why anyone has a problem with lambda - but it doesn't need a language change, and it's available now. I guess I've convinced myself here - we already have shorter alternatives to lambda, so why add a new built-in one? Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/64IFHG42SG5Q6Q3AAQVOD7JWE7ZRDTOQ/ Code of Conduct: http://python.org/psf/codeofconduct/
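Without a third-party library, the standard-library operator module gives similar brevity for the attribute and item cases discussed above. A sketch (it does not cover the full placeholder DSL, and the names here are illustrative):

```python
from operator import attrgetter, itemgetter
from types import SimpleNamespace

get_age = attrgetter("age")     # roughly: lambda obj: obj.age
get_item = itemgetter("key")    # roughly: lambda obj: obj["key"]

person = SimpleNamespace(age=17)
assert get_age(person) < 18
assert get_item({"key": 3}) ** 2 == 9
```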
[Python-ideas] Re: Arrow functions polyfill
On Tue, 23 Feb 2021 at 15:52, M.-A. Lemburg wrote: > > On 23.02.2021 15:29, Paul Moore wrote: > > On Tue, 23 Feb 2021 at 14:10, M.-A. Lemburg wrote: > >> > >> The natural way in Python to write an anonymous function would > >> be to simply drop the name in a regular function definition: > >> > >> def (a): return a**2 > >> > >> The lambda notation is less verbose and closer to computer > >> science theory, though: > >> > >> lambda a: a**2 > >> > >> FWIW: I don't understand why people are so unhappy with lambdas. > >> There isn't all that much use for lambdas in Python anyway. Most > >> of the time, a named function will result in more readable code. > > > > Typically because they are simple expressions like the a**2 you used above. > > > > def a_squared(a): > > return a**2 > > > > is way over the top. > > Fair enough. Although as soon as you use the same such function > more than once in your application, giving it a name does make > sense :-) > > > Thinking about it, maybe the *real* solution here is to use one of the > > "placeholder variable" libraries on PyPI - there's "placeholder" which > > I found on a quick search: > > > > from placeholder import _ # single underscore > > > > _.age < 18 # lambda obj: obj.age < 18 > > _[key] ** 2# lambda obj: obj[key] ** 2 > > > > Some people will hate this sort of thing - probably the same people > > who can't see why anyone has a problem with lambda - but it doesn't > > need a language change, and it's available now. > > > > I guess I've convinced myself here - we already have shorter > > alternatives to lambda, so why add a new built-in one? > > People should have a look at the operator module. It's full of > short (and fast) functions for many things you often write > lambdas for: > > https://docs.python.org/3/library/operator.html Definitely. 
But in cases like this (where brevity and matching the form of an expression is considered important) `partial(add, 2)` doesn't really compare to `lambda x: x+2` (or `x -> x+2`, or `_+2` if you like placeholders). And even using functools.partial, the operator module won't be able to replace `lambda x: x*x` (you can't even transform it to `x**2` and use partial, because the constant is the right hand argument). It's all very subjective, of course. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/QHFAJGNSV5FMOAF52SDBQ4EN3MY6LJG2/ Code of Conduct: http://python.org/psf/codeofconduct/
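The limitation described above can be made concrete. A sketch: `partial(add, 2)` works because the constant sits in one argument slot, but squaring needs the *same* argument on both sides of the operator, which `partial` cannot express:

```python
from functools import partial
from operator import add, mul

add2 = partial(add, 2)   # behaves like lambda x: 2 + x
assert add2(5) == 7

# No partial(mul, ...) can square, because both operands must be the
# same (unknown) argument; a lambda (or def) is still required:
square = lambda x: mul(x, x)  # noqa: E731
assert square(5) == 25
```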
[Python-ideas] Re: [Python-Dev] Re: Have virtual environments led to neglect of the actual environment?
On Wed, 24 Feb 2021 at 10:55, Stéfane Fermigier wrote: > There is probably a clever way to reuse common packages (probably via clever > symlinking) and reduce the footprint of these installations. Ultimately the problem is that a general tool can't deal with conflicts (except by raising an error). If application A depends on lib==1.0 and application B depends on lib==2.0, you simply can't have a (consistent) environment that supports both A and B. But that's the rare case - 99% of the time, there are no conflicts. One env per app is a safe, but heavy handed, approach. Managing environments manually isn't exactly *hard*, but it's annoying manual work that pipx does an excellent job of automating, so it's a disk space vs admin time trade-off. As far as I know, no-one has tried to work on the more complex option of sharing things (pipx shares the copies of pip, setuptools and wheel that are needed to support pipx itself, but doesn't extend that to application dependencies). It would be a reasonable request for pipx to look at, or for a new tool, but I suspect the cost of implementing it simply outweighs the benefit ("disk space is cheap" :-)) Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/4EAGDI66MG4WPNCHCOVHT4VHBWQUNRNP/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: [Python-Dev] Re: Have virtual environments led to neglect of the actual environment?
On Wed, 24 Feb 2021 at 13:12, Antoine Pitrou wrote: > > On Wed, 24 Feb 2021 13:47:40 +0100 > Stéfane Fermigier wrote: > > The 3rd solution is probably the best of the 3, but the sharing mechanism > > still needs to be specified (and, if needed, implemented) properly. > > I wouldn't want to repeat myself too often, but conda and conda-based > distributions already have sharing through hardlinks (or, on Windows, > whatever is available) baked-in, assuming you install your software > from conda packages. > > That also applies to non-Python packages, and to python itself (which > is just a package like any other). I'm not sure conda solves the problem of *application* distribution, though, so I think it's addressing a different problem. Specifically, I don't think conda addresses the use case pipx is designed for. Although to be fair, this conversation has drifted *way* off the original topic. Going back to that, my view is that Python does not have a good solution to the "write your application in Python, and then distribute it" scenario. Shipping just the app to be run on an independently installed runtime results in the conflicting dependencies issue. Shipping the app with bundled dependencies is clumsy, mostly because no-one has developed tools to make it easier. It also misses opportunities for sharing libraries (reduced maintenance, less disk usage...). Shipping the app with a bundled interpreter and libraries is safest, but hard to do and even more expensive than the "bundled libraries" approach. I'd love to see better tools for this, but the community preferred approach seems to be "ship your app as a PyPI package with a console entry point" and that's the approach pipx supports. I don't use Linux much, and I'm definitely not familiar with Linux distribution tools, but from what I can gather Linux distributions have made the choices: 1. Write key operating system utilities in Python. 2. Share the Python interpreter and libraries. 3. 
Expose that Python interpreter as the *user's* default Python. IMO, the mistake is (3) - because the user wants to install Python packages, and not all packages are bundled by the distribution (or if they are, they aren't updated quickly enough for the user), users want to be able to install packages using Python tools. That risks introducing unexpected library versions and/or conflicts, which breaks the OS utilities, which expect their requirements to be respected (that's what the OS packaging tools do). Hindsight is way too easy here, but if distros had a "system Python" package that OS tools depend on, and which is reserved for *only* OS tools, and a "user Python" package that users could write their code against, we'd probably have had far fewer issues (and much less FUD about the "using sudo pip breaks your OS" advice). But it's likely way too late to argue for such a sweeping change. *Shrug* I'm not the person to ask here. My view is that I avoid using Python on Linux, because it's *way* too hard. I find it so much easier to work on Windows, where I can install Python easily for myself, and I don't have to fight with system package managers, or distribution-patched tools that don't work the way I expect. And honestly, on Windows, there's no "neglect of the system environment" to worry about - if you want to install Python, and use pip to install packages into that environment for shared use, it works fine. People (including me) use virtual environments for *convenience* on Windows, not because it's a requirement. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/J7U6525AZAW4P5ZYH5WLK5IDF6TCH73O/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: [Python-Dev] Re: Have virtual environments led to neglect of the actual environment?
On Thu, 25 Feb 2021 at 19:22, Mike Miller wrote: > Mr. Random had an interesting point to start this thread, that over-reliance > on > venvs may have slowed fixes and improvements on the standard tools and > distributions. I suspect there is some truth to the assertion. Arguably, your claim that using your main Python interpreter for everything "almost never" causes you problems would imply that there's no real *need* for fixes and improvements to handle that situation, so work on supporting virtual environments helps people who prefer them, and harms no-one. I suspect the truth is somewhere between the two. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/IHZOOMX3WNGJL74GQKOENQ6KGKWHU3V6/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Make list.reverse() more flexible
On Sat, 6 Mar 2021 at 07:52, Vincent Cheong wrote: > > So I thought, 'Why do we need to make a reversed copy to assign it to the > original part, when we can simply reverse the original part itself.' That's > the paradigm. A few points strike me here: 1. The question you asked ("why do we need to") has an obvious and trivial answer. "We don't need to". But so what? It's what we have, and the status quo tends to win. If you're arguing for change, you need to argue that the *cost* of that change is worth it, so you need to be looking at what we will *gain*. 2. To put this another way, as far as this list is concerned, you're phrasing the question backwards. Because backward compatibility and availability of people for implementation and maintenance are real costs for any proposal, the question that matters *here* is "Why do we need partial-reverse to be a built in operation?" 3. If you're interested in the idea, you can, of course, implement it and see how it works out. No-one is stopping you writing either an extension that implements this, or a patch to Python. That's basically the *whole point* of open source :-) And then, coming to this list saying "I made this patch that implements in-place partial reversal of lists, would it be worth submitting it as a PR?" would be a much easier place to start from (because you're offering something that's already eliminated some of those costs, as well as demonstrating that you've looked at the practical aspects of the proposal). Basically, even though this list is about "ideas", purely theoretical "wouldn't it be nice if..." discussions don't tend to get very far (or if they do, it's "far" in the sense of "way off-topic" :-)). A little bit of work or thinking on the practical aspects of a proposal tends to help a lot. 
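Anyone wanting to explore the idea can prototype it in pure Python first. A hypothetical sketch of what an in-place "partial reverse" might do (the function name and signature are illustrative, not a proposed API):

```python
def reverse_slice(lst, start, stop):
    """Reverse lst[start:stop] in place, swapping elements pairwise,
    without building a reversed copy."""
    i, j = start, stop - 1
    while i < j:
        lst[i], lst[j] = lst[j], lst[i]
        i += 1
        j -= 1

data = [1, 2, 3, 4, 5]
reverse_slice(data, 1, 4)   # reverse data[1:4] in place
assert data == [1, 4, 3, 2, 5]
```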
Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/7IWYFS2LVP5NAPSXW6UGQ4Z4DF5T23UL/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Make list.reverse() more flexible
On Sat, 6 Mar 2021 at 10:42, Vincent Cheong wrote: > > I see. > > You have coined the term exactly, partial-reverse. Nice. You have also put > forward a realistic question of 'why do we need'. Well, surely not everyone > needs it and definitely it's not urgently needed, but its just the > counterintuitive incompleteness such that 'it works for a whole, but not part > of it', you see. About the gain, of course it's unlike a monumental speed > boost, but its just a little spot that I saw lacking in power. > > What I had in mind was the algorithmic cost to the program itself, not the > cost in developing it. But now that you explained to me, I understand the > situation. > > Thanks for the information. To put this in context, I think that if you were to create a PR for Python, implementing this change, and post it as a feature request to bugs.python.org, it may well be accepted without (much) debate. It's a classic case of "actions speak louder than words", basically - it's fairly easy for a core dev to look at a PR, think "yes, this is a simple and logical enhancement" and focus on tidying up any technical details before simply merging it. Whereas coming to a discussion forum like this one, and opening with (in effect) "it would be nice if someone did X" tends to get everyone thinking about what it would need to persuade them to spend time writing that code, what they'd think about when writing it, etc, etc. And what you get is a long discussion, but little actual action. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/C3FOC2R2TMKOVCHWPQZ6XANCVUXM2L4Z/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: allow initial comma
On Fri, 12 Mar 2021 at 13:22, roland.puntaier--- via Python-ideas wrote: > > I had posted this as https://github.com/python/peps/issues/1867 > The discussion so far is below. > > Please make some arguments. > > The major point to me is, that the symmetry is broken, > which leads to extra editing actions, like removing the comma in the first > line. > I guess, this was the reason to allow the comma after the last line/entry: > `[1,2,]`. > ``[,1,2]`` should also be allowed, too. This layout style is not something I've ever seen used in "real life", and I don't think it's something that should be encouraged, much less added to the language. > But the comma is just a separator. Why did they allow to have the > comma before a closing bracket/parenthesis/brace? Because of symmetry > between lines, is my guess. More likely because there are two common schools of thought - lists have punctuation *separating* items, and lists have punctuation *terminating* items. I don't even know a commonly used term for the idea of having something *before* each item. So I think you need to find examples of other languages that support this style if you want to advocate for it, otherwise you'll need to demonstrate that it's important enough for Python to go against the norm here. > I personally also have a macro in the editor that evaluates a line in > the parameter list, but drops an initial comma before doing that. > Therefore this is my preferred formatting. But (1) "it's my preference" isn't sufficient to change the language, and (2) why not change your macro to remove a *trailing* comma instead? Overall, I don't think this is a good idea. -1 from me. 
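The current state of the syntax, as a quick check: trailing commas are already legal, while the proposed leading comma is a syntax error today.

```python
# Trailing commas are already legal:
assert [1, 2,] == [1, 2]

# A leading comma, as proposed, is currently a SyntaxError:
try:
    eval("[,1,2]")
except SyntaxError:
    leading_comma_allowed = False
else:
    leading_comma_allowed = True

assert leading_comma_allowed is False
```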
Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/5RENNY6WP5VSWKHAIVTWU4R3VA4BGWIC/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Implicit line continuation for method chaining
On Fri, 12 Mar 2021 at 15:53, Paul Bryan wrote: > > My inclination would be to cede code formatting to a tool like Black and > focus on function: > https://black.readthedocs.io/en/stable/ ... and if you try that out, what you'll find is that black adds parentheses:

    y = (
        x.rstrip("\n")
        .split(":")[0]
        .lower()
    )

Which is a reasonable solution even if you don't like black's formatting choices. Black only wraps when the line length is exceeded, but I can see arguments for wrapping sooner than that in some cases, and personally I prefer the following indentation scheme:

    y = (
        x.rstrip("\n")
         .split(":")[0]
         .lower()
    )

But the principle is the same - I think adding parentheses is an acceptable compromise, in the spirit of "Special cases aren't special enough to break the rules". I don't like needing to use backslashes to continue lines, but I find having to think about how to structure my code to avoid continuations that need backslashes tends to force me to come up with better/more readable ways of writing the expression. Chained method calls are right at the limit here. Sometimes (and only sometimes!) they are a nice way of expressing a computation, but there's no immediately natural way to line-wrap them. I can sympathise with the request here, but parentheses seem adequate to me. (And yes, I spotted the irony of suggesting paren-delimiters for extended expressions in a language that avoids using braces as delimiters for statement blocks :-)) Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/W4A7TKADGRMHGYOKTI2FYR7N2JLMBEWN/ Code of Conduct: http://python.org/psf/codeofconduct/
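The parenthesized chain discussed above parses and runs today without any language change. A sketch with a sample input string (chosen for illustration):

```python
x = "Hello:World\n"

# Implicit line continuation inside parentheses already covers the
# method-chaining case:
y = (
    x.rstrip("\n")
    .split(":")[0]
    .lower()
)

assert y == "hello"
```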
[Python-ideas] Re: allow initial comma
On Fri, 12 Mar 2021 at 16:06, Ned Batchelder wrote: > > I think the only reason anyone ever used leading commas to begin with was > because of languages that didn't allow a final trailing comma. In those > worlds, to keep the editing smooth, people moved the commas to the beginning > of the line, breaking with every comma-tradition. Yes, I've seen it in SQL. But even there, it isn't used before the *first* element of a list. > I don't see a reason to make that odd style easier. Agreed. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/DA7K5IB5XBCJFQKERQWZKARCIZLCPHQC/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Add an export keyword to better manage __all__
On Mon, 12 Apr 2021 at 11:41, M.-A. Lemburg wrote: > > Right, except that in practice: > > > > 1) Many useful libraries are not documented or properly documented. > > In those cases, I'd argue that such libraries then do not really care > for a specific public API either :-) > > > 2) People don't read the docs (at least not always, and/or not in details). > > Uhm, how can you write code against a package without knowing how it > is used ? Is the problem here that we're trying to apply a technical solution to what is, in practice, a people problem? I don't think I've ever seen (in any language) a system of declaring names as public/private/whatever that substituted well for writing (and reading!) good documentation... At best, hiding stuff makes people work a bit harder to write bad code :-) Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/NU23H5AOOCBGXJTEHLDXBXLYY6XOI6T6/ Code of Conduct: http://python.org/psf/codeofconduct/
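For context, the mechanism the proposed `export` keyword would feed already exists: `__all__` controls what `from mod import *` exposes. A sketch using a throwaway module written to a temp directory (module name and contents are illustrative):

```python
import pathlib
import sys
import tempfile
import textwrap

# A throwaway module demonstrating the existing __all__ behaviour:
src = textwrap.dedent("""\
    __all__ = ["public_func"]

    def public_func():
        return "public"

    def _private_func():
        return "private"
""")
mod_dir = tempfile.mkdtemp()
pathlib.Path(mod_dir, "demo_mod_all.py").write_text(src)
sys.path.insert(0, mod_dir)

ns = {}
exec("from demo_mod_all import *", ns)

assert "public_func" in ns        # listed in __all__, so exported
assert "_private_func" not in ns  # not listed, so hidden from import *
```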
[Python-ideas] Re: Revive: PEP 396 -- Module Version Numbers ?
On Mon, 12 Apr 2021 at 19:52, Christopher Barker wrote: > > Over the years, I've seen __version__ used very broadly but not *quite* in > all packages. I've always known it was a convention, not a requirement. But > it turns out it's not even a "official" convention. > > But it really would be nice if there was a consistent way that I could count > on to get the version of a package at run time. > > Turns out this was suggested in PEP 396 -- and deferred almost 13 years ago! > > https://www.python.org/dev/peps/pep-0396/ > > In the status note, it says: > > """ > Further exploration of the concepts covered in this PEP has been deferred for > lack of a current champion interested in promoting the goals of the PEP and > collecting and incorporating feedback, and with sufficient available time to > do so effectively. > """ > > Well, I may be willing to be that champion, if a core dev is willing to > sponsor, and I see some interest from this group. > > And, well, after 13 years, we've seen __version__ be very broadly, though > certainly not universally used. > > Honestly, I haven't looked to see to what extent setuptools supports it, but > will, of course, do so if folks think this is worth pursuing. > > Or maybe this is a settled issue in setuptools, and we just need to change > the status of the PEP. > > For my part, I FAR prefer the version info to be embedded in the code of the > package in a simple way, rather than hiding among the setuptools/egg/pip > metadata, and requiring setuptools.pkg_resources to get a version at runtime. > > I note that PEP8 uses __version__ as an example of a "module level dunder" -- > but only suggests where it should be put, not how it be used :-) > > Of course, there will be a need to update the PEP to match current practice > (and setuptools and pip) Having a __version__ attribute is fairly common these days, but definitely not universal even now. So the PEP still has a place, IMO. But a lot has changed in the last 13 years. 
Python packaging is built very much on packages having versions these days, so the *distribution* version (as covered in https://packaging.python.org/specifications/core-metadata/) is essential. And with importlib.metadata, that version is introspectable at runtime. But that's different from a *package* version as exposed via __version__. And I suspect there's still some debate over whether we need the two, so I wouldn't assume the PEP is self-evidently a good thing. There's a bunch of straightforward updating that needs to be done (all of the PEPs that this one references have since been superseded, and in the case of package metadata, it's no longer standardised via a PEP but at https://packaging.python.org/specifications/core-metadata/). Also, you should look at https://packaging.python.org/guides/single-sourcing-package-version/. The whole question of how to derive the distribution version from the package version is quite complex, with a number of common solutions, and also a number of people who will argue that you *shouldn't* single-source, but should just copy the value into the two places. The section of the PEP that covers this needs rewriting, and possibly even omitting, as there is no "standard" answer. As another data point, flit *requires* that the package has a __version__ attribute. FWIW, I'm personally ambivalent on the subject - having a __version__ feels like it's probably a good thing, but it's very rare that I actually care in practice as a package consumer, so as a package creator it feels a bit like boilerplate that I add "for the sake of it". I'd suggest keeping the scope of the PEP fairly limited - opinions vary in this area, and are held fairly strongly, so you'd stand more chance of getting something accepted if you keep things focused on the basics. > Or should this be brought up on The Distutils-SIG list? (That's the packaging category on Discourse these days). Honestly, I don't really know.
It *could* be a packaging interoperability standard, but the rules it includes about stdlib modules push it into core python territory. And packaging standards tend to be more about distribution-level metadata than package-level. I suspect the best thing to do would be to check with the SC on their view, and if they want to toss it in my direction, I'm happy to make the decision. Sorry, but I'd rather not be a sponsor for this, as I'm pretty busy with other things at the moment. But I hope this helps. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/W6RDBPPM77QQRNJ3YI3MFTXSMSCRD6PW/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Revive: PEP 396 -- Module Version Numbers ?
On Tue, 13 Apr 2021 at 18:14, Christopher Barker wrote: > > I agree here -- I think what needs to be official is what is in an installed > package/distribution -- not how it gets there. But I do think the standard > approach should be easy to do, even without special tools. That's already standardised. See https://packaging.python.org/specifications/recording-installed-packages/ It describes how all the package (technically "distribution"[1]) metadata gets stored, and where. What it doesn't do, is make any statements about what should go in the files that make up that distribution. That's where this PEP differs, as it is specifically looking at that. >> Honestly, I don't really know. It *could* be a packaging >> interoperability standard, but the rules it includes about stdlib >> modules push it into core python territory. > > indeed, and that's actually the point here. However, I suspect that the core > devs will strongly rely on PyPA's thoughts on the matter. As both a core dev and the packaging PEP-delegate, I'll try to avoid holding self-contradictory opinions on the matter ;-) >> I suspect the best thing to do would be to check with the SC on their >> view, and if they want to toss it in my direction, I'm happy to make >> the decision. > > How does one "check with the SC"? A post to python-dev? I think there's an SC issue tracker - check in the devguide about the lifecycle of a PEP (or ask your PEP sponsor :-)) Paul [1] Yes, terminology gets confusing. We *tried* to formalise it but people couldn't get used to the distinctions, so we gave up and went with the general mood, which is to work out what you mean from context. 
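As a sketch of what that standardised recording gives you at runtime (stdlib importlib.metadata only, no assumptions about which distributions are installed):

```python
import importlib.metadata

# Every installed distribution exposes the metadata recorded at
# install time (METADATA, RECORD, ...) per the spec linked above.
dists = list(importlib.metadata.distributions())

# The "Name" field comes from the distribution's core metadata;
# guard with a fallback in case of a damaged install.
names = sorted(d.metadata["Name"] or "UNKNOWN" for d in dists)
print(f"{len(names)} installed distributions")
```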
___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/SWC26SMMPXMCUL4OAYVZHPREAJW7BVSI/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Support more conversions in format string
On Wed, 21 Apr 2021 at 09:01, Serhiy Storchaka wrote: > > Currently format strings (and f-string expressions) support three > conversions: !s -- str, !r -- repr and !a for ascii. > > I propose to add support of additional conversions: for int, float and > operator.index. It will help to convert automatically printf-like format > strings to f-string expressions: %d, %i, %u -- use int, %f -- use float, > %o, %x -- use operator.index. I've never had any particular need for these, but I can see that they would be logical additions. > For float the conversion letter is obvious -- !f. But I am not sure for > what !i should be used, for int or operator.index. If make it > operator.index, then !d perhaps should be used for int. If make it int, > then !I perhaps should be used for operator.index. Or vice versa? I don't have a particularly strong opinion here, other than to say I'm not sure I like the upper case "I". It looks far too much like a lower case "L" in the font I'm using here, which makes me think of C's "long", so it's easy to confuse. So of the two options, I prefer !f, !d, !i over !f, !i, !I. > Also I propose to support applying multiple conversions for the same > item. It is common when you output a path or URL object as quoted string > with all escaping, because in general it can contain special or > non-printable characters. Currently I write f"path = {repr(str(path))}" > or f"path = {str(path)!r}", but want to write f"path = {path!s!r}". This, I would definitely use. I use f"path = {str(path)!r}" quite a lot, and being able to reduce that to f"{path=!s!r}" would be really convenient for debugging (even if it does look a bit like a string of magic characters at first glance). > Do we need support of more standard conversions? Do we want to support > custom conversions (registered globally as encodings and error > handlers). re.escape, html.escape and shlex.quote could be very useful > in some applications. 
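To make the chaining point concrete - the two current spellings are equivalent, and {path!s!r} would just be a shorter way of writing that nesting (the path value here is purely illustrative):

```python
from pathlib import Path

path = Path("C:/Users/paul")  # illustrative value

# Today's two equivalent spellings of "apply str, then repr":
a = f"path = {str(path)!r}"
b = f"path = {repr(str(path))}"
assert a == b

# The proposal would allow this to be written f"path = {path!s!r}".
```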
That appeals to me just because I like generic features in general, but I'm not sure there are sufficient benefits to justify the complexity for what would basically be a small convenience over calling the function directly. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/PWVCP2IHS22DK6FWEMBTHIGQPLAJBXAX/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: String comprehension
On Mon, 3 May 2021 at 04:00, David Álvarez Lombardi wrote: > This is the mindset that I had. I understand there are other ways to do what > I am asking. (I provided one in my initial post.) I am saying it relies on > what I believe to be a notoriously unintuitive method (str.join) and an even > more unintuitive way of calling it ("".join). I think this is something of an exaggeration. It's "notoriously difficult" (;-)) for an expert to appreciate what looks difficult to a newcomer, but I'd argue that while ''.join() is non-obvious at first, it's something you learn once and then remember. If it's really awkward for you, you can write `concat = ''.join` and use that (but I'd recommend against it, as it makes getting used to the idiom *other* people use that much harder). > Here are 73 of them that I found by grepping through Lib. Thank you. I only spot-checked one or two, but I assume from this list that your argument is simply that *all* occurrences of ''.join(something) can be replaced by c"something". Which suggests a couple of points: * If it doesn't add anything *more* than an alternative spelling for ''.join, is it worth it? * Is the fact that it's a quoted string construct going to add problematic edge cases? You can't use " inside c"..." without backslash-quoting it. That seems like it could be a problem, although I'll admit I can't come up with an example that doesn't feel contrived at the moment. In particular, is the fact that within c"..." you're writing a comprehension but you're not allowed to use unescaped " symbols, more awkward than using ''.join was originally? > > To me, the chosen syntax is problematic. The idea of introducing > > structural logic by using “” seems likely to cause confusion. Across all > > languages I use, quotes are generally and almost always used to introduce > > constant values. 
Sometimes, maybe, there are macro related things that may > > use quoting, but as a developer, if I see quotes, I’m thinking: the runtime > > will treat this as a constant. > > I think this is an over-simplification of the quotations syntax. > Python has several prefix characters that you have to look out for when you > see quotes, namely the following: r, u, f, fr, rf, b, br, rb. > Not only can these change the construction syntax, but they can even > construct an object of a completely different type (bytes). On the contrary, I think you're missing the point here. When I, as a programmer, see "..." (with any form of prefix) I think "that's a constant". That's common for all quoting. I'd argue that even f-strings are very careful to avoid disrupting this intuition any more than necessary - yes, {...} within an f-string is executable code, but the non-constant part is delimited and it's conventionally limited to simple expressions. Conversely, it's basically impossible to view your c-strings as "mostly a constant value". Also, how would c-strings be handled in conjunction with other string forms? Existing string types can be concatenated by putting them adjacent to each other: >>> a="hello" >>> f"{a}, " r"world" 'hello, world' >>> How would c-strings work? As code, I might want to format a generator over multiple lines. How would c-strings work with that? ( c"val.strip().upper() " c"for val in file " c"if val != '' " # Skip empty lines c"and not val.startswith(chr(34))" # And lines commented with " - chr(34) is ", but we can't use " directly without a backslash ) That doesn't feel readable to me. I could use a triple-quoted c-string, but then I have an indentation problem. Also, with triple quoting I couldn't include those comments (or could I??? You haven't said whether comments are valid *within* c-strings. But I assume not - the syntax would be a nightmare otherwise). 
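For comparison, the existing idiom the proposal abbreviates is fairly compact already (current Python, nothing proposed):

```python
words = ["spam", "eggs", "ham"]

# The str.join idiom: the separator string is the receiver.
assert "".join(words) == "spameggsham"
assert ", ".join(words) == "spam, eggs, ham"

# With a generator expression -- the case c"..." would target:
assert "".join(w.upper() for w in words if w != "eggs") == "SPAMHAM"

# The aliasing workaround mentioned earlier (not recommended):
concat = "".join
assert concat(words) == "spameggsham"
```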
> > Second, f-strings do not restrict the normal usage of strings for freeform > > text content (apart from making the curly brace characters special). > > Not to nit-pick too much, but the following is a valid string but not a valid > f-string. > > >>> s = f"This is a valid string but invalid f-string {}" > File "", line 1 > s = f"This is a valid string but invalid f-string {}" > ^ > SyntaxError: f-string: empty expression not allowed > >>> That comes under the heading of making curly braces special... > > Your proposal is focusing on strings as iterables and drawing a parallel > > with other kinds of iterables for which we have comprehensions. But strings > > aren't like other iterables because they're primarily vessels for freeform > > text content, not structured data. > > I view this as the strongest opposition to the idea in the whole thread, but > I think that seal was broken with f-strings and the {}-syntax. The proposed > syntax is different from those features only in *degree* (of deviation from > strict char-arrays) not in *type*. But I also recognize that the delimiters > {} go a long way in helpi
[Python-ideas] Re: Namespaces!
On Wed, 5 May 2021 at 11:33, Matt del Valle wrote: >> To give an example: >> >> def spam(): >> return "spam spam spam!" >> >> def eggs(): >> return spam() >> >> namespace Shop: >> def spam(): >> return "There's not much call for spam here." >> def eggs(): >> return spam() >> >> print(eggs()) >> # should print "spam spam spam!" >> print(Shop.eggs()) >> # should print "There's not much call for spam here." > > > I'm guessing this was a typo and you meant to type: > > print(spam()) > # should print "spam spam spam!" > print(Shop.spam()) > # should print "There's not much call for spam here." > > Because if you did, then this is precisely how it would work under this > proposal. :) I'm not the OP, but I read their question precisely as it was written. The global eggs() returns the value from calling spam() and should use the *global* spam. The eggs in namespace Shop calls spam and returns its value, and I'd expect that call to resolve to Shop.spam, using the namespace eggs is defined in. If that's not how you imagine namespaces working, I think they are going to be quite non-intuitive for at least a certain set of users (including me...) Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/SCBZDQQT3A3OISOTSOB7WSQNI5HFHSBL/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: division of integers should result in fractions not floats
On Fri, 14 May 2021 at 16:29, David Mertz wrote: > > The memory simply blows up too fast for this to be practical (at least as a > default) a float is always 64 bits, a fraction is unboundedly large if > numerator and denominator are coprime. > > A toy example with a half dozen operations won't make huge fractions. A loop > over a million operations will often be a gigantic memory hog. +1 on this. My experience has been that fraction classes are a lot less useful in (general) practical situations than people instinctively assume. > That said, Chris's idea for a literal spelling of "Fraction" is very > appealing. One extra letter or symbol could indicate that you want to work in > the Fraction domain rather than floating point. That's a perfectly reasonable > decision for a user to make. Agreed, it is appealing. The problem (and this is not a new suggestion, it's come up a number of times) is that of having language syntax depend on non-builtin classes. So either you have to *also* propose making the fractions module a builtin, or you very quickly get sucked into "why not make this mechanism more general, so that libraries can define their own literals?" Scope creep is nearly always what kills these proposals. Or the base proposal is too specialised to gain enough support. Personally, I can see value in fraction, decimal and regex literals. But I can live without them. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/WYPIPADOCZ724W3CLP6X4ASMWWH7SWDX/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: division of integers should result in fractions not floats
On Fri, 14 May 2021 at 16:54, Martin Teichmann wrote: > That is absolutely what I would like to have. The fractions module is very > small, it can easily be made a builtin. This would also speed it up > significantly, I hope. Probably close to the speed of floats, given that most > of the time spent for floats is within the interpreter anyways. Builtins have to be in C, with very few exceptions. That makes it harder for alternative implementations, who now have to write their own implementation rather than just grabbing the pure Python stdlib version. It also makes maintenance harder, and means that bugs take longer to get fixed, as fewer people know how to maintain the code. > This is why I proposed not to make a new literal. With my proposal, I already > served two of your proposed use cases: fractions and decimal. Decimal(1/3) > would be perfectly fine. Plus symbolic math. But I don't want Decimal(1/3), I want Decimal(0.01). > Effectively what I am proposing is lazy evaluation of the division operator, > using fractions as a mathematical tool. Well, you don't need fractions (as in the fractions *module* and the Fraction *type*) for lazy evaluation. You can just use a tuple of numerator and denominator. But that's just details. What's more critical is when you do the actual conversion to float. In other words, how lazy would this be in reality? What about isinstance(1/3, float) Decimal(1/3) If you convert to float too quickly, there's no benefit. If you delay, there's a period where the difference is detectable. And that's a breaking change. I'm very impressed that you made the actual interpreter change and checked both the test suite and real-world code like numpy, but that doesn't mean nothing changed. At a minimum, I'd suggest that you need to specify *exactly* when the division is actually executed. But actually, I suspect that you really *do* want 1/3 to be a `fractions.Fraction` object. So see above. 
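To spell out why the timing of the conversion is observable, here is the *current* behaviour that any lazy-division scheme would have to either preserve or visibly break:

```python
from decimal import Decimal

# 1/3 is a float right now; a scheme that makes it anything else
# (even temporarily) is detectable, and therefore a breaking change:
assert isinstance(1 / 3, float)

# Decimal-from-float captures the float's exact binary value, not
# the decimal the programmer wrote -- which is why "Decimal(1/3)
# would be fine" doesn't address the Decimal(0.01) case:
assert Decimal(0.01) != Decimal("0.01")
```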
And in addition, you still have the question about when, and how, the conversion to float happens. Because Fraction doesn't magically convert to a float - so the behaviour of the object returned from the expression 1/3 very definitely *isn't* the behaviour of Fraction(1,3). I guess I'm confused. I'm -1 on a proposal that I don't understand, so I'll have to wait for a clearer explanation of what you're suggesting. From other emails: > actually have a new idea about how a fraction literal could look like: just > write 2/3 as opposed to 2 / 3, and you get a fraction. So: no spaces: > fraction, with spaces: float. That would break code that currently writes 2/3 and expects a float. I'm fine with speculation, but I think you need to be a *lot* more conscious of what counts as backward incompatibility here. I'm not saying we can't break backward compatibility, but we need to do so consciously, and having assessed the consequences, not just because we assumed "no-one will do that". Also consider the section in the PEP format "How would we teach this?" How would you explain to someone with no programming background, maybe a high school student, that 3/4 and 3 / 4 mean different things in Python? Your audience might not even know that there is a difference between "fraction", "decimal" and "float" at this stage. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/QCZZN3N2Z6QAHC3IV2BZOMGFMJPGUOOU/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: division of integers should result in fractions not floats
On Fri, 14 May 2021 at 20:06, Martin Teichmann wrote: > > Also consider the section in the PEP format "How would we teach this?" > > How would you explain to someone with no programming background, maybe > > a high school student, that 3/4 and 3 / 4 mean different things in > > Python? Your audience might not even know that there is a difference > > between "fraction", "decimal" and "float" at this stage. > > Well, I think a high school student would be the one with the least problems: > s/he would just realize "wow, that thing can do fractions! I can do my math > homework with that!" And I can tell you, kids will be the first ones to > figure out that if you type spaces you get decimals, if you do not type > spaces you get fractions. They are used to this kind of stuff from their math > class, from calculators (or their phones, I guess). OK, maybe I chose a bad example. Maybe I should have said "PL/SQL programmers who don't really understand much of the theory behind programming, but are just interested in getting the job done, who have picked up some Python code written by someone else and have to fix it because the automation script broke and the normal guy isn't in the office today". I can give you names, if you want :-) > So, if you show them (the following is fake) > > >>> 1/2 + 1/3 > 5/6 > 1 / 2 + 1 / 3 > 0.83 > > They will immediately spot what's going on. In effect you're saying "we don't need to teach it because it's obvious". I disagree. There's certainly some audiences who won't find it intuitive. How do we teach it to them? But I'm not really interested in going into ever more detail on this point to be honest. All I'm trying to say is "I think that having 1/2 mean something different than 1 / 2 is unacceptable, because it's too easy for people to misunderstand". You may choose to disagree with me, which is fine. But at some point, the responsibility has to be on you to persuade people that your idea is good, so you can't do that indefinitely. 
I'd still like to see a more precise explanation of your proposal. I'm finding that every time I try to guess details of what you mean, I'm guessing wrong, and that's not productive (for me or for you). Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/D2EPU33EPKEWQLZPN4EJXMPC3A4PH3H2/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Fractions vs. floats - let's have the cake and eat it
On Tue, 18 May 2021 at 15:16, Martin Teichmann wrote: > > Because reality. People would like to write 1/2 * m * v**2 to mean the > obvious thing, without having to think about the details. And there are many > people like this, this is why it shows up on this mailing list regularly. I > have never felt the urge to write two/three * m * v**two. I'd actually prefer to write (m*v**2)/2. Or (m/2)*v**2. But those wouldn't work, the way you describe your proposal. And I'd be very concerned if they behaved differently than 1/2 * m * v**2... Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/YNBWIHH5JB5HT7GILIRJXCBPK432RI3Z/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Fractions vs. floats - let's have the cake and eat it
On Tue, 18 May 2021 at 16:55, Martin Teichmann wrote: > > Hi Paul, > > > > I'd actually prefer to write (m*v**2)/2. Or (m/2)*v**2. But those > > wouldn't work, the way you describe your proposal. And I'd be very > > concerned if they behaved differently than 1/2 * m * v**2... > > Sure they do work, and they work exactly the same way. That is actually the > point: currently 1/2 * m * v**2 is not the same as (m/2) * v**2 (in sympy, > that is), with my proposal it would be exactly the same (again, from my > prototype, not fake): > > >>> m, v, r = symbols("m v r") > >>> 1/2 * m * v**2 > m*v**2/2 > >>> (m/2) * v**2 > m*v**2/2 > >>> (m * v**2) / 2 > m*v**2/2 > >>> 4/3 * pi * r**3 > 4*pi*r**3/3 But *not* in sympy, in normal Python, if m == 1 and v == 1, then 1/2 * m * v**2 is 0.5 (a float) currently, as is (m/2) * v**2. But in your proposal, the former will be a float/fraction hybrid, whereas the latter will be a float. And what about x = 1 a = 1/3 b = x/3 a == Fraction(1,3) b == Fraction(1,3) a == b Currently these are False, False, True. You'll change that to True, False, True and you've now broken the idea that things that are equal should compare the same to a 3rd value. Never mind. At the end of the day, I simply think your proposal is not viable. We can argue details all day, but I'm not going to be persuaded otherwise. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/PGIFQT6CYIRPJJ3KVDFDN7PRYJSVJKVA/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: symbolic math in Python
On Wed, 19 May 2021 at 07:41, Martin Teichmann wrote: > that worked well, but actually we would like to write the last line simply as > > >>> solve(x**2 == 1/2) > > as you might notice, this is fully legal Python syntax. Unfortunately the > semantics is such that sympy has no way to determine what is actually going > on, this is why they invented all those helper functions shown above. It's almost possible to do this right now. Using the compiler and ast module, you can write a version of solve that works like >>> solve("x**2 == 1/2") You have to quote the argument, and yes that probably means your editor/IDE won't help you as much with the expression syntax, but otherwise this could be made to work exactly as you want with no language changes. People don't often do this, so presumably there's a reason people don't like having to quote "stuff that's basically Python code". But it *is* technically possible. > My idea is now to start at the line above, "x = symbols('x')". I propose a > new syntax, "symbolic x", which tells the parser that x is a symbolic > variable, and expressions should not be executed, but handed over as such to > the calling functions. To stay with the example, the code would look like > this (this is fake, I did not prototype this yet): > > >>> from sympy import solve > >>> symbolic x > >>> solve(x**2 == 1/2) > [-sqrt(2)/2, sqrt(2)/2] > > Now to the details. Before you get to details of implementation, you should explain: 1. Why sympy doesn't have a solve("x**2 == 1/2") function taking a string like I described above? Why isn't that a good solution here? 2. Why new syntax is any more likely to be useful for sympy than a string-based function. The barrier for language changes, and in particular new syntax, is high (as you've discovered). So proposals need to be pretty thorough in examining ways to solve a problem within the existing language before proposing a new language feature. 
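For example, a string-based front end needs no new syntax at all. A rough stdlib-only sketch (parse_equation is a name I made up for illustration, not sympy's API):

```python
import ast

def parse_equation(src):
    """Turn a quoted equation like "x**2 == 1/2" into (lhs, rhs)
    AST nodes that a symbolic library could walk -- showing that the
    string form preserves everything the proposed 'symbolic' syntax
    would provide."""
    node = ast.parse(src, mode="eval").body
    if not (isinstance(node, ast.Compare)
            and len(node.ops) == 1
            and isinstance(node.ops[0], ast.Eq)):
        raise ValueError("expected a single '==' equation")
    return node.left, node.comparators[0]

lhs, rhs = parse_equation("x**2 == 1/2")
# lhs is x**2 (a Pow), rhs is 1/2 (a Div). Crucially, the division
# is still unevaluated at this point, so a symbolic library is free
# to treat 1/2 as an exact rational rather than a float.
```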
Particularly if the feature is focused on improving things just for a subset of the Python user base. You really should look at the history of the matrix multiplication operator in Python - start with PEP 465, https://www.python.org/dev/peps/pep-0465/, but look back at the history of all the proposals that *didn't* work leading up to that - to get a feel for how much work it needs to get a feature focused on the needs of a specialist community into the core language. And as others have pointed out, symbolic maths and sympy is nowhere near as big an audience as the numeric community. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/VNEVTZ5NSLTFNMUMWDQMVIOG5CX5PYML/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Introduce constants in Python (constant name binding)
On Wed, 26 May 2021 at 12:55, Shreyan Avigyan wrote: > > > What's a const *ptr and a const *ptr const? > > In C, a const pointer means a pointer that can only point to one value while > const pointer const means a pointer that can only point to one constant value. Python has names that bind to values. They are *not* equivalent to pointers that point to variables. So I suspect that the communication problem here is that you aren't thinking of what you're trying to propose in Python terms, hence everyone else (who is!) is misunderstanding you. You need to make sure that you properly understand Python's name binding mechanisms before proposing to change them... (Maybe you do, in which case can you please express your ideas in terms of those mechanisms, not in terms of C). Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/XMQR3BXRB5VBZIJZ5IKX2CFYJNTVD2A6/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Introduce constants in Python (constant name binding)
On Wed, 26 May 2021 at 12:59, Shreyan Avigyan wrote:
> 4. constant pi = 3.14
> # later
> pi = 3.1415 # Error

Steven's already asked what error, and is it compile time or runtime. I'll add

foo.py:
    constant x = 12

bar.py:
    import foo
    foo.x = 19

You can only detect this at runtime.

baz.py:
    import foo
    name = 'x'
    setattr(foo, name, 22)

You can only detect this at runtime. Also,

    if some_condition:
        constant a = 1
    else:
        a = 2
    a = 3

Is this allowed? Is 'a' constant? What about

    for i in range(10):
        constant a = []

Is this allowed, or is it 10 rebindings of a?

    for i in range(10):
        constant a = [i]

What about this? What is a "literal" anyway? You've used lists in examples (when discussing mutability) but list displays aren't actually literals. Lots of questions. I'm sure it's possible to answer them. But as the proposer, *you* need to give the answers (or at least give a complete and precise enough specification that people can deduce the answers). At the moment, everyone is just guessing. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/GJ6XAJMGXLWUYORFCFSADT2EBRORXFOI/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Introduce constants in Python (constant name binding)
On Wed, 26 May 2021 at 13:13, Shreyan Avigyan wrote: > > Reply to Paul Moore: > > In Python terms, a constant is a name that binds itself to a value in memory > and that name cannot bind itself to a different value now (unlike variables). > The value can be mutated (if mutable) but the name cannot bind to a different > value once it has bound itself to a value. "value in memory" isn't Python terminology. Nor is "variable". Please point me to the exact sections in the language reference if you feel I'm wrong here. In particular, Python refers to "identifiers" and "names" - see https://docs.python.org/3/reference/lexical_analysis.html#identifiers ("Identifiers (also referred to as names) are described by the following lexical definitions.") On Wed, 26 May 2021 at 13:31, Shreyan Avigyan wrote: > > Reply to Paul Moore: > > if some_condition: > constant a = 1 > else: > a = 2 > a = 3 > > Yes this is allowed. This is runtime. OK, so a=3 may raise an exception at runtime (depending on the value of some_condition). We need to decide exactly what exception would be used but I'm OK with that. It's specifically *not* a syntax exception, though, as that's a compile-time exception. > > for i in range(10): > constant a = [] > > Not sure. Though it's preferable to be runtime. Preferable is "not allowed". I don't know what you mean here. Are you saying that you'd prefer it not to be allowed? It's your proposal, the decision is up to you. > And lists are also literals. Any Python Object that is not assigned to a > variable is a literal. Python claims that itself. A preview - Read the language reference here: https://docs.python.org/3/reference/lexical_analysis.html#literals > [10] = [2] > SyntaxError: Can't assign to literal here. As Chris pointed out, this is flagging the assignment to the literal 10. Have you properly researched the existing semantics to make sure you understand what you're proposing to change, or are you just guessing? 
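To illustrate the name-binding model in question (this is current behaviour, nothing to do with the proposal):

```python
# Names are bindings in a namespace, not C-style variables.
a = [1, 2, 3]
b = a            # a second name bound to the *same* object
b.append(4)      # mutating the object is visible through both names
assert a == [1, 2, 3, 4]

b = "rebound"    # rebinding b changes only the binding;
                 # the object a is bound to is untouched
assert a == [1, 2, 3, 4]
assert b == "rebound"
```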
> Constants should have a similar error - > > constant x = 10 > x = [2] > SomeErrorType: Can't assign to constant here. But you just said it was runtime, so it definitely *isn't* similar to the syntax error "Can't assign to literal here". You're making inconsistent statements again :-( On Wed, 26 May 2021 at 13:53, Shreyan Avigyan wrote: > > I've already given one. Since Python is dynamically typed changing a critical > variable can cause huge instability. Want a demonstration? Here we go, > > import sys > sys.stdout = None However, there's a *huge* amount of extremely useful code out there that does assign to sys.stdout - pytest's mechanisms for capturing test output use this, for example. Making sys.stdout constant would break this. I don't think you've thought this proposal through at all, to be honest. You seem to be making up answers as the questions arise, which is *not* what people are asking for here. We are asking that you *explain* your proposal, assuming that you already know the answers and are simply struggling to communicate the details. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/DJSIL362DVSYXOLGZEM24NGAXJTOHGL3/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Introduce constants in Python (constant name binding)
On Wed, 26 May 2021 at 14:33, Shreyan Avigyan wrote: > > Reply to Paul Moore: > > > But you just said it was runtime, so it definitely *isn't* similar to > > the syntax error "Can't assign to literal here". You're making > > inconsistent statements again :-( > > That's exactly why I wrote SomeErrorType instead of SyntaxError. They are > never similar. I just said the behavior would seem similar. Your precise comment was > Constants should have a similar error - > > constant x = 10 > x = [2] > SomeErrorType: Can't assign to constant here. See above? You said "Constants should have a similar error" and then in your response to my reply, you said "That's exactly why I wrote SomeErrorType instead of SyntaxError. They are never similar. I just said the behavior would seem similar." How am I to interpret that? Your statements are directly contradictory ("similar error" vs "never similar"). And for information, I personally don't find compile time and runtime exceptions to "seem similar" at all, especially in the context of this discussion where we've repeatedly tried to get you to decide whether you're talking about compile time or runtime. > I'm trying my best. But sometimes there are few questions coming up that > confuses and I'm not sure of the answer and I'm ending up making inconsistent > statements. So stop replying instantly, read the feedback, think your proposal through, and when you have an updated proposal that actually addresses all of the flaws people have pointed out with your current proposal, post that (with a clear explanation of what you changed and why). Knee-jerk replies from you aren't respectful of the time people are putting into trying to follow what you're proposing :-( Paul PS I know there's a tendency for people to reply very quickly on this list, and I know that can leave people feeling under pressure to reply quickly themselves. 
But it's a false pressure - some people simply need time to digest feedback, others are having to handle a language they are not a native speaker of, still others may have other reasons for needing extra time. If you're in a situation like that, please *do* take that time. We won't complain if you take time to compose your answers. ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/DI6S7HTMNDAB7MK4BRWJ33UL3QWSHKTL/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Add static variable storage in functions
On Thu, 27 May 2021 at 14:22, Chris Angelico wrote:
> Note that the statics *must* be defined on the function, NOT on the
> code object. Just like function defaults, they need to be associated
> with individual instances of a function.
>
> >>> f = []
> >>> for n in range(10):
> ...     def spam(n=n):
> ...         # static n=n  # Same semantics
> ...         print(n)
> ...     f.append(spam)
> ...
>
> Each spam() should print out its particular number, even though they
> all share the same code object.

This reminds me: if we ignore the performance aspect, function attributes provide this functionality, but there's a significant problem with using them, because you can't access them other than by referencing the *name* of the function being defined.

>>> def f():
...     print(f.i)
...
>>> f.i = 1
>>> g = f
>>> del f
>>> g()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in f
NameError: name 'f' is not defined

OK, you can, if you're willing to mess around with sys._getframe and make some dodgy assumptions:

>>> import sys
>>> def me():
...     parent = sys._getframe(1)
...     for obj in parent.f_globals.values():
...         if getattr(obj, "__code__", None) == parent.f_code:
...             return obj
...
>>> def f():
...     print(me().i)
...
>>> f.i = 1
>>> g = f
>>> del f
>>> g()
1

It would be nice to have a better way to reference function attributes from within a function. (This would also help write recursive functions that could be safely renamed, but I'm not sure many people would necessarily think that's a good thing ;-))

Paul
___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/I4I2UEWB3CYPD3HNNTNX5A4GJMIVBTKC/ Code of Conduct: http://python.org/psf/codeofconduct/
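[One more variation on the theme discussed above: a sketch of a decorator - my own toy, not an existing API - that passes the function object itself into the body, so attribute access survives renaming and `del` without any _getframe tricks:]

```python
import functools

def with_me(func):
    """Pass the undecorated function as a hidden first argument 'me'."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(func, *args, **kwargs)
    return wrapper

@with_me
def f(me):
    return me.i  # no reference to the global name 'f'

f.__wrapped__.i = 1  # attributes live on the original function object
g = f
del f
print(g())  # 1 -- still works after the original name is gone
```

The obvious cost is the extra parameter in the signature, which is exactly the sort of wart compiler support would remove.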
[Python-ideas] Re: Add static variable storage in functions
On Thu, 27 May 2021 at 15:04, Chris Angelico wrote:
> Hmm.
>
> def static(**kw):
>     def deco(func):
>         statics = types.SimpleNamespace(**kw)
>         @functools.wraps(func)
>         def f(*a, **kw):
>             func(*a, **kw, _statics=statics)
>         return f
>     return deco
>
> @statics(n=0)
> def count(*, _statics):
>     _statics.n += 1
>     return _statics.n
>
> Add it to the pile of clunky options, but it's semantically viable.
> Unfortunately, it's as introspectable as a closure (that is: not at
> all).

Still arguably clunky, still doesn't have any performance benefits, but possibly a better interface to function attributes than just using them in their raw form:

def static(**statics):
    def deco(func):
        for name, value in statics.items():
            setattr(func, name, value)
        func.__globals__["__me__"] = func
        return func
    return deco

@static(a=1)
def f():
    print(__me__.a)

Paul
___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/SILEUKPYQSJN6ISHRQA2GBSZ2JDR7WEH/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Add static variable storage in functions
On Thu, 27 May 2021 at 15:49, Chris Angelico wrote:
>
> On Fri, May 28, 2021 at 12:25 AM Paul Moore wrote:
> >
> > On Thu, 27 May 2021 at 15:04, Chris Angelico wrote:
> > def static(**statics):
> >     def deco(func):
> >         for name, value in statics.items():
> >             setattr(func, name, value)
> >         func.__globals__["__me__"] = func
> >         return func
> >     return deco
> >
> > @static(a=1)
> > def f():
> >     print(__me__.a)
>
> Can't use globals like that, since there's only one globals() dict per
> module. It'd require some compiler magic to make __me__ work the way
> you want. But on the plus side, this doesn't require a run-time
> trampoline - all the work is done on the original function object.

Rats, you're right. Hacking globals felt like a code smell. I considered trying to get really abusive by injecting some sort of local, but that's not going to work because the code's already compiled by the time the decorator runs:

>>> import dis
>>> def f():
...     print(__me__.a)
...
>>> dis.dis(f)
  2           0 LOAD_GLOBAL              0 (print)
              2 LOAD_GLOBAL              1 (__me__)
              4 LOAD_ATTR                2 (a)
              6 CALL_FUNCTION            1
              8 POP_TOP
             10 LOAD_CONST               0 (None)
             12 RETURN_VALUE

So yes, without compiler support you can only go so far (but you could always use the _getframe approach instead).

> So, yeah, add it to the pile.

Yep. It's an interesting exercise, is all, and honestly, I don't think I'd use static much anyway, so something "good enough" that works now is probably more than enough for me personally.

I do think that having a compiler-supported way of referring efficiently to the current function (without relying on looking it up by name) would be an interesting alternative proposal, if we *are* looking at actual language changes - it allows for something functionally equivalent to statics, without the performance advantage, but in compensation it has additional uses (recursion, and more general access to function attributes). I'm not going to push for it though; as I say, I don't have enough use for it to want to invest the time in it.
Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/TFAU3JX3RH2RAQ3E2KMVD43HZ34454DH/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Add static variable storage in functions
On Fri, 28 May 2021 at 13:11, Steven D'Aprano wrote: > We might not even need new syntax if we could do that transformation > using a decorator. > > > @static(var=initial) > def func(): > body The problem here is injecting the "nonlocal var" statement and adjusting all of the references to the variable in body. I don't think that can be done short of bytecode manipulation. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/OSOQYJN5I5Z7II2OAQDO3FQZZCA3QJ54/ Code of Conduct: http://python.org/psf/codeofconduct/
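[For comparison, the effect being discussed can be had today without any bytecode manipulation by writing the closure out explicitly - a factory function rather than a decorator. A sketch:]

```python
def make_counter():
    """Static-style storage via an explicit closure."""
    n = 0  # survives between calls, invisible from outside

    def count():
        nonlocal n
        n += 1
        return n

    return count

count = make_counter()
print(count(), count(), count())  # 1 2 3
```

The decorator proposal is essentially asking the compiler to generate this wrapping automatically.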
[Python-ideas] Re: A __decoration_call__ method for Callable objects (WAS: Decorators on variables)
On Tue, 1 Jun 2021 at 13:16, Steven D'Aprano wrote: > We can distinguish the two contexts by using different signatures. The > signature used depends entirely on the call site, not the decorator, so > it is easy for the interpreter to deal with. > > If the decorator is called on a function or class statement, a single > argument is always passed, no exceptions: > > # always calls decorate with one argument > @decorate > def func(): # or class > pass > > # --> func = decorate(func) > > If called on a variable, the number of arguments depends on whether it > is a bare name, or a value and annotation are provided. There are > exactly four cases: > > # bare name > @decorate > var > # --> var = decorate('var') > > # name with annotation > @decorate > var: annot > # --> var = decorate('var', annotation=annot) > > # name bound to value > @decorate > var = x > # --> var = decorate('var', value=x) > > # name with annotation and value > @decorate > var: annot = x > # --> var = decorate('var', annotation=annot, value=x) > > > Keyword arguments are used because one or both of the value and the > annotation may be completely missing. The decorator can either provide > default values or collect keyword arguments with `**kwargs`. I've yet to be convinced that variable annotations are sufficiently useful to be worth all of this complexity (and by "this" I mean any of the proposals being made I'm not singling out Steven's suggestion here). But if we do need this, I quite like the idea of making the distinction based on signature. > The only slightly awkward case is the bare variable case. Most of the > time there will be no overlap between the function/class decorators and > the bare variable decorator, but in the rare case that we need to use a > single function in both cases, we can easily distinguish the two cases: > > def mydecorator(arg, **kwargs): > if isinstance(arg, str): > # must be decorating a variable > ... 
> else: > # decorating a function or class > assert kwarg == {} > > So it is easy to handle both uses in a single function, but I emphasise > that this would be rare. Normally a single decorator would be used in > the function/class case, or the variable case, but not both. You don't need to do this. Just add another keyword argument "name": # bare name @decorate var # --> var = decorate(name='var') # name with annotation @decorate var: annot # --> var = decorate(name='var', annotation=annot) # name bound to value @decorate var = x # --> var = decorate(name='var', value=x) # name with annotation and value @decorate var: annot = x # --> var = decorate(name='var', annotation=annot, value=x) The single positional argument is reserved for function/class annotations, and will always be None for variable annotations. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/NJTZRP64SBWP5SJMKAGPNAYJ7MK6IKQO/ Code of Conduct: http://python.org/psf/codeofconduct/
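[To make the calling convention above concrete, here's a sketch of a decorator written against that signature. It is hypothetical - the variable-decorator syntax doesn't exist, so the expansions the compiler would emit are written out by hand:]

```python
_MISSING = object()  # sentinel: distinguishes "argument absent" from None

def describe(arg=None, *, name=None, annotation=_MISSING, value=_MISSING):
    """Toy decorator usable in both the function/class and variable forms."""
    if name is not None:  # variable form: keyword arguments only
        return {
            "name": name,
            "annotation": None if annotation is _MISSING else annotation,
            "value": None if value is _MISSING else value,
        }
    return arg  # function/class form: single positional argument

# What "@describe" over "var: int = 3" would expand to:
var = describe(name="var", annotation=int, value=3)
print(var["name"], var["annotation"], var["value"])
```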
[Python-ideas] Re: An alternate idea for escaping in string interpolation
On Sun, 27 Jun 2021 at 08:11, Paul Bryan wrote: > > It looks like you're suggesting hard-coding specific language escape > conventions into f-strings? That's how I understood the proposal too. Hard coding specific conventions shouldn't be part of a language construct IMO. > What if instead you were to allow delegation to some filter function? Then, > it's generic and extensible. > def html(value: Any): > filtered = ... # filter here > return filtered > > f'{!!html}...' Well, there's already a way of handling that: f'...' So all you're saving is a bit of typing. Yes, I'm aware that there are nuances here that I'm dismissing, but this feels like what Nick was talking about in his post, where he pointed out that this is a variation on PEP 501, and the reasons for deferring that PEP still apply. It's not that the idea isn't attractive, it's just that once you've considered all the ways you can "nearly" do this already with existing tools, the benefits that remain are so small that they don't warrant a language change. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/IS2GKFL57CUCSEMG7NHRVPQLPQEJKRNM/ Code of Conduct: http://python.org/psf/codeofconduct/
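[Concretely, for the HTML case the existing spelling is just an explicit call in the replacement field, e.g. with the stdlib's html.escape:]

```python
import html

user_input = '<script>alert("x")</script> & friends'
# Explicit escaping inside the f-string -- no new syntax needed.
page = f"<p>{html.escape(user_input)}</p>"
print(page)
```

So the proposed `!!html` spelling mostly saves typing the function name.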
[Python-ideas] Re: writelines2?
On Tue, 13 Jul 2021 at 15:56, Jonathan Fine wrote:
>
> The interactive help message for writelines gives no help. I've made an
> enhancement request to b.p.o.
>
> help(open('/dev/zero').writelines) gives no help
> https://bugs.python.org/issue44623
>
> Please take a look.

Works for me on Python 3.9:

>>> help(open("nul").writelines)
Help on built-in function writelines:

writelines(lines, /) method of _io.TextIOWrapper instance
    Write a list of lines to stream.

    Line separators are not added, so it is usual for each of the lines
    provided to have a line separator at the end.

___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/EPRIT27NWJBDX2LGDJO2ZLP7FSN6V6KN/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Nested Dictionary Pointers and Logic - "flatten / nested iteration" Proposal
You've given some examples of what flatten() does, but I still don't really understand how it's expected to work. Could you provide a specification, rather than just examples? Also, I've never needed a function like this in anything I've ever done - that may mean nothing, but it does suggest that you need to demonstrate that it's a generally useful tool (maybe by pointing to examples of "real world" code in various contexts that would be improved by using it). There's a lot of pressure to keep the stdlib for things that are widely applicable, and the burden of demonstrating that lies with the proposer. As the proposal stands here, I honestly don't see anything that suggests this wouldn't be better as a standalone function published on PyPI. Paul On Wed, 21 Jul 2021 at 13:43, Sven Voigt wrote: > > Hello Python-ideas, > > I am working with lots of JSON objects in my work and need to obtain JSON > pointers to particular components of a JSON object. These pointers are > defined as part of the JSON-LD specifications as relative IRIs and as part of > the JSON-Schema specifications. However, I am not aware of these pointers > being part of the JSON specification. The pointers are really straightforward > and follow the same syntax as other IRIs (URLs) or file paths. For example, > the following flatten function illustrates the mapping of nested dictionaries > and arrays to a single flat dictionary, where each nested key is mapped to a > unique path. > > d = {"a": 1, "b": {"ba": 2, "bb": [{"bba": 3, "bbb": None}]}} > flatten(d) > >> {'a': 1, 'b/ba': 2, 'b/bb/0/bba': 3, 'b/bb/0/bbb': None} > > Not only does this conversion help in generating JSON pointers, but it helps > with logic for nested data structures. Specifically, assume there is a > dictionary, which includes nested dictionaries, lists, and tuples, and any of > these elements are contained within another dictionary or that other > dictionaries elements. 
Then it is not sufficient to simply compare the items > of the two dictionaries to check whether the first is a subset of the other. > However, the flat structures can be compared directly. > > 1. A nested dictionary is a subset of another nested dictionary. The flatten > function ensures the nested dictionaries are not checked for equivalence but > subset of. > # Current > {"a": 1, "c": {"d": 4}}.items() <= {"a": 1, "b": 2, "c": {"d": 4, "e": > 5}}.items() > >> False > # Proposed > flatten({"a": 1, "c": {"d": 4}}).items() <= flatten({"a": 1, "b": 2, "c": > {"d": 4, "e": 5}}).items() > >> True > > 2. A nested list or tuple is a subset of a dictionary. > # Current > {"a": 1, "c": [3]}.items() <= {"a": 1, "b": 2, "c": [3,3]}.items() > >> False > # Proposed > flatten({"a": 1, "c": [3]}).items() <= flatten({"a": 1, "b": 2, "c": > [3,3]}).items() > >> True > > Note that these examples only have one level of nesting, but the flatten > function must handle any level of nesting. For the current version of the > flatten function, I essentially borrowed the JSON "dumps" source code. > However, the dumps code doesn't utilize any other functions and I believe > there may be a missing "nestedIter" function in python. This function could > be used for JSON dumps or flatten or anytime someone wants to visit all > nested elements of a dictionary and perform some operation. Therefore, I > think this function could be included in the JSON library if python-ideas > thinks that the pointers are very specific to JSON or the itertools library, > where the nested iteration function could be generally used for all nested > data types. > > Let me know what you think of this proposal and I am looking forward to your > responses. 
> > Best, > Sven > ___ > Python-ideas mailing list -- python-ideas@python.org > To unsubscribe send an email to python-ideas-le...@python.org > https://mail.python.org/mailman3/lists/python-ideas.python.org/ > Message archived at > https://mail.python.org/archives/list/python-ideas@python.org/message/V2DNTLXII2QNYSE3FJXRO45ZFTCMBZ5G/ > Code of Conduct: http://python.org/psf/codeofconduct/ ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/4LMSOSMOIYN6263FWCFFQQ22C6A7PC5X/ Code of Conduct: http://python.org/psf/codeofconduct/
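[For reference, the behaviour the quoted examples imply can be reconstructed in a few lines - my sketch, not the proposer's code - which is itself a point in favour of publishing this as a standalone PyPI package:]

```python
def flatten(obj, prefix=""):
    """Flatten nested dicts/lists/tuples into a single dict keyed by
    JSON-pointer-style paths such as 'b/bb/0/bba'."""
    out = {}
    items = obj.items() if isinstance(obj, dict) else enumerate(obj)
    for key, value in items:
        path = f"{prefix}/{key}" if prefix else str(key)
        if isinstance(value, (dict, list, tuple)):
            out.update(flatten(value, path))
        else:
            out[path] = value
    return out

d = {"a": 1, "b": {"ba": 2, "bb": [{"bba": 3, "bbb": None}]}}
print(flatten(d))  # {'a': 1, 'b/ba': 2, 'b/bb/0/bba': 3, 'b/bb/0/bbb': None}
```

Note that this sketch silently drops empty containers and doesn't escape "/" in keys - exactly the sort of corner cases a real specification would need to pin down.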
[Python-ideas] Re: Nested Dictionary Pointers and Logic - "flatten / nested iteration" Proposal
Maybe your examples aren't making it clear what you're suggesting, then. I'm in agreement with Chris, your "nested iteration tool" sounds like something that would have so many configuration options and parameters that it would be harder to use than traversing "by hand". But maybe you imagine something simpler than I do. Again, can I suggest you post what you imagine the user documentation for the function would look like, and then we'll be able to stop misunderstanding each other? Paul On Wed, 21 Jul 2021 at 16:29, Sven Voigt wrote: > > I could say the same thing about JSON dumps then and most itertools > functions. The proposed functionality is in line with existing functionality > and extends it to the case of nested data structures. > > On Wed, Jul 21, 2021 at 11:25 AM Chris Angelico wrote: >> >> On Thu, Jul 22, 2021 at 1:14 AM Sven Voigt wrote: >> > >> > Paul, >> > >> > I agree that the mapping to a flat data structure might have limited use >> > cases. However, like you said it is only one example. What I am proposing >> > is a nested iteration tool that can map both keys and values to new keys >> > and new values. It only considers nesting in dictionaries, lists, and >> > tuples by default, but allows users to pass another function to continue >> > searching for pointers in other types. >> > >> >> This sounds like the sort of thing that's best coded up specifically >> to your purposes. There'll be myriad small variants of this kind of >> traversal, so it's easiest to just write the thing you want, rather >> than try to get the standard library to support every variant. 
>> >> ChrisA >> ___ >> Python-ideas mailing list -- python-ideas@python.org >> To unsubscribe send an email to python-ideas-le...@python.org >> https://mail.python.org/mailman3/lists/python-ideas.python.org/ >> Message archived at >> https://mail.python.org/archives/list/python-ideas@python.org/message/2IOA23Z33434YWPTQWDPMA5RGJBCPCRZ/ >> Code of Conduct: http://python.org/psf/codeofconduct/ > > ___ > Python-ideas mailing list -- python-ideas@python.org > To unsubscribe send an email to python-ideas-le...@python.org > https://mail.python.org/mailman3/lists/python-ideas.python.org/ > Message archived at > https://mail.python.org/archives/list/python-ideas@python.org/message/BRWPYXYXOFJAY7FKEGTFZO5T7XKBSCZ3/ > Code of Conduct: http://python.org/psf/codeofconduct/ ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/UR52XKA2I6AWKCELIDODF5IVBHMHBBW7/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Create a @deprecated decorator (annotation)
On Thu, 29 Jul 2021 at 15:39, Leonardo Freua wrote: > > Would it be interesting to create a @deprecated decorator to avoid adding > warnings.warn("deprecation message", DeprecationWarning, stacklevel=2) in > methods body? I don't see the value personally. > Using the decorator approach to indicate depreciation would make the methods > cleaner (leaving only their responsibilities in the body) and would be easier > to identify, as the cue would be close to the method signature and not mixed > with the logic inside the body. First line of the body vs line before the declaration doesn't feel like it makes much difference to me. > in some cases it will still be necessary to put warnings.warn (..) inside the > body of functions/methods because of some message display condition, or we > could also express the message display condition in the decorator in > @deprecated itself. But it would be interesting to have the possibility of > not putting this inside the method body. Even the decorator can come from the > notices module. Why would it be "interesting"? I don't see any practical advantage, and as soon as you need any form of logic you have to rewrite, so why bother? If you want this to get support, I think you need to argue the benefits far more than you have so far... 
> Example: > > (Before) > > def check_metadata(self): > """Deprecated API.""" > warn("distutils.command.register.check_metadata is deprecated, \ > use the check command instead", PendingDeprecationWarning) > check = self.distribution.get_command_obj('check') > check.ensure_finalized() > check.strict = self.strict > check.restructuredtext = 1 > check.run() > > (after) > > @deprecated("distutils.command.register.check_metadata is deprecated, \ > use the check command instead") > def check_metadata(self): > """Deprecated API.""" > check = self.distribution.get_command_obj('check') > check.ensure_finalized() > check.strict = self.strict > check.restructuredtext = 1 > check.run() TBH, the decorator version makes it harder to see the function signature. -1 from me, I'm afraid. Disclaimer: I've never actually deprecated an API in my own code, so my objections are mainly theoretical. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/6O7WJ3MLF3WEQ6XR7HEZPM6OBUZVY4PU/ Code of Conduct: http://python.org/psf/codeofconduct/
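[For completeness, anyone who wants the decorator spelling can build it today in a few lines, which is arguably itself an argument against a stdlib addition. A sketch - the name and behaviour are my guesses at the proposal's intent:]

```python
import functools
import warnings

def deprecated(message, category=DeprecationWarning):
    """Warn on every call to the decorated function."""
    def deco(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(message, category, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return deco

@deprecated("old_api() is deprecated, use new_api() instead")
def old_api():
    return 42

# Demonstrate that calling still works and the warning is emitted.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api()
print(result, "-", caught[0].message)
```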
[Python-ideas] Re: Pattern matching in python function headers
I assume that such a feature would simply behave exactly the same as a match statement:

def fib(arg):
    match arg:
        case 0:
            return 0
        case 1:
            return 1
        case n:
            return fib(n-1) + fib(n-2)

# I know this is only a trivial example,
# but you should probably also handle
# n < 0. Does your proposal cover using
# guards, or would they need a conditional
# in the final case?

def allow_entry(arg):
    match arg:
        case {"name": "Bob"}:
            return "Bob is not allowed in ever!"
        case {"name": "Charles", "day": "Tuesday"}:
            return "It's a Tuesday, so Charles is allowed in."
        case {"name": "Charles", "day": _}:
            return "Charles is only allowed in on a Tuesday."
        case {"name": name}:
            return f"Come in {name}, make yourself at home!"

# I only skimmed the PEP, and I don't have a copy of Python 3.10 to check,
# but I believe that if arg doesn't match any of the given clauses, the match
# silently does nothing. I don't know if that was your expected behaviour for
# the definition form, but you should probably be explicit about what you intend
# in any case.

Note that I created these by taking your examples and applying a purely mechanical translation - I didn't think about it at all, so this transformation could easily be applied mechanically. What's the benefit that would justify having this additional syntax?

In addition, you'd need to consider the implications of possibilities you didn't cover in your examples. I don't know Elixir, so I can't say whether that language has similar scenarios that we could follow, but these are all valid in Python, so we need to consider them. Python's function definitions are executed at runtime, so you need to decide what behaviour you want from a function definition that's had *some* of its parts executed, but not others. So how would the following behave?

def example(1): return "One"
print(example(2))
def example(2): return "Two"
print(example(2))

Worse, what if the example(2) definition were in a separate module? What about decorators?

def example(1): return "One"

@some_decorator
def example(2): return "Two"

That's just a very brief list of "things to think about". In functional languages, I like this style of function definition, but it would need some fairly careful design to be able to translate it to a language like Python. And honestly, I'm not sure I see the advantage (in Python). I'm not against the suggestion as such, but I think it will need a fair amount of work to flesh it out from a series of basic examples to a full-fledged proposal (which even then might end up getting rejected). Is that something you're considering doing yourself, or are you just offering the idea in case someone else is interested in picking it up?

Paul

On Thu, 5 Aug 2021 at 09:52, Sam Frances wrote:
>
> Following on from PEP 634, it would be good to be able to have Erlang /
> Elixir-style pattern matching in function headers.
>
> (I don't know what the technical term is for this language feature, but
> hopefully the examples will clarify.)
>
> Here's the obligatory fibonacci example:
>
> ```
> def fib(0):
>     return 0
>
> def fib(1):
>     return 1
>
> def fib(n):
>     return fib(n-1) + fib(n-2)
> ```
>
> Or, another example:
>
> ```
> def allow_entry({"name": "Bob"}):
>     return "Bob is not allowed in ever!"
>
> def allow_entry({"name": "Charles", "day": "Tuesday"}):
>     return "It's a Tuesday, so Charles is allowed in."
>
> def allow_entry({"name": "Charles", "day": _}):
>     return "Charles is only allowed in on a Tuesday."
>
> def allow_entry({"name": name}):
>     return f"Come in {name}, make yourself at home!"
> ```
>
> Thanks for considering my idea.
> > Kind regards > > Sam Frances > ___ > Python-ideas mailing list -- python-ideas@python.org > To unsubscribe send an email to python-ideas-le...@python.org > https://mail.python.org/mailman3/lists/python-ideas.python.org/ > Message archived at > https://mail.python.org/archives/list/python-ideas@python.org/message/OPLBLWJPCN66QRUQNHBQSQOBNBFZRRBF/ > Code of Conduct: http://python.org/psf/codeofconduct/ ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/FYPEKJSXND4NU4XXBTF5I7JOV7MKOTHZ/ Code of Conduct: http://python.org/psf/codeofconduct/
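[To give a feel for how much design space there is, here's a toy decorator-based emulation - entirely my own sketch, not a real library - that gets the multi-clause flavour by trying registered clauses in order, using plain equality patterns with "_" as a wildcard rather than full match patterns:]

```python
class Clauses:
    """Toy Erlang-style multi-clause function: first matching clause wins."""

    def __init__(self):
        self.clauses = []  # (patterns, func) pairs, tried in order

    def clause(self, *patterns):
        def register(func):
            self.clauses.append((patterns, func))
            return self  # rebinds the decorated name back to the dispatcher
        return register

    def __call__(self, *args):
        for patterns, func in self.clauses:
            if len(patterns) == len(args) and all(
                p == "_" or p == a for p, a in zip(patterns, args)
            ):
                return func(*args)
        raise TypeError(f"no clause matches {args!r}")

fib = Clauses()

@fib.clause(0)
def fib(n):
    return 0

@fib.clause(1)
def fib(n):
    return 1

@fib.clause("_")
def fib(n):
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

Because each decorator returns the dispatcher, the repeated `def fib(...)` style of the original proposal is preserved; but note how much semantics (clause ordering, redefinition, decorators on individual clauses) has to be invented along the way.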
[Python-ideas] Re: We should have an explicit concept of emptiness for collections
On Tue, 24 Aug 2021 at 12:07, Tim Hoffmann via Python-ideas wrote:
> Just like length is. It's a basic concept and like __bool__ and __len__ it
> should be upon the objects to specify what empty means.

It feels like these arguments in the abstract are mostly going round in circles. It's possible something has been mentioned earlier in this thread, but I don't recall if so - but is there any actual real-world code that would be substantially improved if we had built into the language a protocol that users could override in their classes to explicitly define what "is empty" meant for that class?

Some things to consider:

1. It would have to be a case where neither len(x) == 0 nor bool(x) did the right thing.
2. We can discount classes that maliciously have bizarre behaviour; I'm asking for *real world* use cases.
3. It would need to have demonstrable benefits over a user-defined "isempty" function (see below).
4. Use cases that *don't* involve numpy/pandas would be ideal - the scientific/data science community have deliberately chosen to use container objects that are incompatible in many ways with "standard" containers. Those incompatibilities are deeply rooted in the needs and practices of that ecosystem, and frankly, anyone working with those objects should be both well aware of, and comfortable with, the amount of special-casing they need.

To illustrate the third point, we can right now do the following:

from functools import singledispatch

@singledispatch
def isempty(container):
    return len(container) == 0

# If you are particularly wedded to special methods, you could even do
#
# @singledispatch
# def isempty(container):
#     if hasattr(container, "__isempty__"):
#         return container.__isempty__()
#     return len(container) == 0
#
# But frankly I think this is unnecessary. I may be in a minority here, though.

import numpy

@isempty.register
def _(arr: numpy.ndarray):
    return len(arr.ravel()) == 0

So any protocol built into the language needs to be somehow better than that.
If someone wanted to propose that the above (default) definition of isempty were added to the stdlib somewhere, so that people could register specialisations for their own code, then that might be more plausible - at least it wouldn't have to achieve the extremely high bar of usefulness to warrant being part of the language itself. I still don't think it's sufficiently useful to be worth having in the stdlib, but you're welcome to have a go at making the case... Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/JY37SEXSA5MJV2FLYCVQ23AX4TE5TEVH/ Code of Conduct: http://python.org/psf/codeofconduct/
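[To show the numpy part of the example really is optional, the same registration works for any home-grown container - here a toy type, invented for illustration, whose len() deliberately doesn't track emptiness:]

```python
from functools import singledispatch

@singledispatch
def isempty(container):
    return len(container) == 0  # sensible default for normal collections

class Tree:
    """Toy container: len() counts edges, so len() == 0 doesn't mean empty."""
    def __init__(self, nodes):
        self.nodes = nodes
    def __len__(self):
        return max(len(self.nodes) - 1, 0)

@isempty.register
def _(tree: Tree):
    return not tree.nodes  # emptiness means "no nodes", not "no edges"

print(isempty([]), isempty(Tree(["root"])), isempty(Tree([])))  # True False True
```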
[Python-ideas] Re: We should have an explicit concept of emptiness for collections
On Tue, 24 Aug 2021 at 23:06, Tim Hoffmann via Python-ideas wrote: > > I also have the feeling that this is going round in circles. So let me get > back to the core question: > > **How do you check if a container is empty?** > > IMHO the answer should not depend on the container. While emptiness may mean > different things for different types, the check syntax can and should still > be uniform. I will note that if we take things to extremes, that constraint simply cannot be adhered to - types can define bool, len, and any other special method we care to invent, however they like. With that in mind, I'd argue that if a collection defines bool and len in such a way that "not bool(c)" or "len(c) == 0" doesn't mean "c is empty", then it is a special case, and has deliberately chosen to be a special case. Yes, I know that makes numpy arrays and Pandas dataframes special cases. As I said, they have deliberately chosen to not follow normal conventions. Take it up with them if you care to. (IMO there's no point - they have reasonable justifications for their choices, and it's too late to change anyway). Based on the "obvious" intent of the classes in collections.abc, I'd say that if you test "len(c) == 0" then you can reasonably say that you cover all collections. If you want to support the weird multi-dimensional zero-sized numpy arrays that have len != 0, then special case them. But frankly, I'd wait until a user comes up with a reason why you need to support them, who can tell you what they expect your code to *do* with them in the first place... Again, practical use cases rather than abstract questions. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/LSYLPDJ5VNPZR2W3T7PPP43CY7GZAXLL/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: We should have an explicit concept of emptiness for collections
On Wed, 25 Aug 2021 at 14:00, Tim Hoffmann via Python-ideas wrote: > > I can agree that "len(c) == 0" should mean "c is empty" for all practical > purposes (and numpy and pandas conform with that). OK > But "bool(c)" should mean "c is empty" is an arbitrary additional constraint. > For some types (e.g. numpy, pandas) that cannot be fulfilled in a logically > consistent way. By requiring that additional constraint, the Python language > forces these containers to be a special case. It's another way of expressing the intent, which is potentially useful if you don't care about those special cases. We're all consenting adults here, there's no "constraint" involved. > To sum things up, I've again written the > > Premise: > It is valuable for the language and ecosystem to not have this special > casing. I.e. I don't want to write `if not seq` on the one hand and `if > len(array) == 0`. The syntax for checking if a list or an array is empty > should be the same. The *language* (by which I assume you mean "Python") doesn't have any special casing. It just has behaviour. It's valuable for educating beginners, maybe, to have an easily expressed way of checking for emptiness, but for "the language and ecosystem"? I don't think so - it's valuable to allow people to express their intent in the way that is most natural to them, IMO. > Conclusion: > If so, PEP-8 has to be changed because "For sequences, (strings, lists, > tuples), use the fact that empty sequences are false:" is not a universal > solution. "Has to be" is extremely strong here. PEP 8 is a set of *guidelines* that people should use with judgement and thought, not a set of rules to be slavishly followed. And in fact, I'd argue that describing a numpy array or a Pandas dataframe as a "sequence" is pretty inaccurate anyway, so assuming that the statement "use the fact that empty sequences are false" applies is fairly naive.
But if someone wants to alter PEP 8 to suggest using len() instead, I'm not going to argue. I *would* get cross, though, if the various PEP 8 inspired linters started complaining when I used "if seq" to test sequences for emptiness. > The question then is, what other syntax to use for an emptiness check. > > Possible solutions: > 1) The length check is a possible solution: > "For sequences, (strings, lists, > tuples), test emptiness by `if len(seq) == > 0`. > N.b. I think this is more readable than the variant `if not len(seq)` (but > that could be discussed as well). If someone is reading PEP 8 and can't make their own choice between "if len(seq) == 0" and "if not len(seq)" then they should go and read a Python tutorial, not expect the style guide to tell them what to do :-( > 2) The further question is: If we change the PEP 8 recommendation anyway, can > we do better than the length check? > IMHO a length check is semantically on a lower level than an empty-check. > Counting elements is a more detailed operation than only checking if we have > any element. That detail is not needed and distracting if we are only > interested in is_empty. This is vaguely similar to iterating over indices > (`for i in range(len(users))`) vs. iterating over elements (`for user in > users`). We don't iterate over indices because that's usually a detail we > don't need. So one can argue we shouldn't count elements if we only want to > know if there are any. Not without either changing the language/stdlib or recommending a user-defined function or 3rd party library. To put that another way, the Python language and stdlib don't currently have (or need, IMO) a better way of checking for emptiness. If you want to argue that it needs one, you need an argument far better than "to have something to put in PEP 8". > If we come to the conclusion that an explicit empty check is better than a > length=0 check, there are again different ways how that could be implemented > again.
My favorite solution now would be adding `is_empty()` methods to all > standard containers and encourage numpy and pandas to add these methods as > well. (Alternatively an empty protocol would be a more formal solution to an > explicit check). Well, that's "if false then ..." in my opinion. I.e., I don't think you've successfully argued that an explicit empty check is better, and I don't think you have a hope of doing so if your only justification is "so we can recommend it in PEP 8". Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/WGJ5COO3WSVXF5RORT6KRQQHL5GILQXI/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: We should have an explicit concept of emptiness for collections
On Wed, 25 Aug 2021 at 14:13, Tim Hoffmann via Python-ideas wrote: > > Guido van Rossum wrote: > > So then the next question is, what's the use case? What code are people > > writing that may receive either a stdlib container or a numpy array, and > > which needs to do something special if there are no elements? Maybe > > computing the average? AFAICT Tim Hoffman (the OP) never said. > > There are two parts to the answer: > > 1) There are functions e.g. in scipy and matplotlib that accept both numpy arrays > and lists of floats. Speaking from matplotlib experience: While eventually we > coerce that data to a uniform internal format, there are cases in which we > need to keep the original data and only convert on a lower internal level. We > often can return early in a function if there is no data, which is where the > emptiness check comes in. We have to take extra care to not do the PEP-8 > recommended emptiness check using `if not data`. You don't. You can write a local isempty() function in matplotlib, and add a requirement *in your own style guide* that all emptiness checks use this function. Why do people think that they can't write project-specific style guides, and everything must be in PEP 8? That baffles me. > 2) Even for cases that cannot have different types in the same code, it is > unsatisfactory that I have to write `if not seq` but `if len(array) == 0` > depending on the expected data. IMHO whatever the recommended syntax for > emptiness checking is, it should be the same for lists and arrays and > dataframes. You don't have to do any such thing. You can just write "if len(x) == 0" everywhere. No-one is *making* you write "if not seq". All I can see here is people making problems for themselves that they don't need to. Sorry if that's not how it appears to you, but I'm genuinely struggling to see why this is an issue that can't be solved by individual projects/users. The only exception I can see is the question "what's the best way to suggest to newcomers?"
but that's more of a tutorial/documentation question, than one of standardisation, style guides or language features. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/QIHC4SZKOJ3JR6SHZMJCMXR2QXXEVKUV/ Code of Conduct: http://python.org/psf/codeofconduct/
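A project-local helper of the sort suggested in the message above might look like the following. This is a hedged sketch: the `isempty` name and the `.size` check are my illustration, not matplotlib's actual code (numpy arrays do expose a `.size` element count, which a toy class fakes here so the example needs no third-party imports):

```python
def isempty(data):
    # Array-likes (e.g. numpy arrays) expose .size, the total element
    # count, which is the right emptiness test for them.
    size = getattr(data, "size", None)
    if size is not None:
        return size == 0
    return len(data) == 0

class FakeArray:
    # Toy stand-in for a zero-sized 2-D array whose len() is not 0
    size = 0
    def __len__(self):
        return 3

print(isempty([]))           # True
print(isempty([1.0, 2.0]))   # False
print(isempty(FakeArray()))  # True, even though len() == 3
```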
[Python-ideas] Re: PEP8 mandatory-is rule
On Mon, 30 Aug 2021 at 23:27, Nick Parlante wrote: > I don't know Chris, doesn't this just show that if you construct a class with > a, shall we say, "oddball" definition of ==, then using == on that class gets > oddball results. And it goes without saying that we all understand that None > and False and True (the dominant cases where PEP8 is forcing "is") will > always have sensible definitions of ==. If you consider numpy arrays as "oddball" and unworthy of consideration, then I guess you're correct. But PEP 8 isn't going to change because you dismiss a major Python library as irrelevant... And as everyone has already said, PEP 8 isn't *forcing* anyone to do anything. If you don't like it, ignore it. But don't try to insist everyone else has to agree with you. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/2DEK4G4FUZQI6YPB5M6XCXGN5PYQLU3J/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: itertools.compress default selectors
You can already do this using filter(None, a) >>> list(filter(None, [None, "", "-filove-python", "CFLAGS=-O3"])) ['-filove-python', 'CFLAGS=-O3'] There's arguably a minor readability improvement (compress(a) suggests "remove the unneeded elements") but I'm not sure that's enough to justify the change. On the other hand, it's not like there's any obviously better default value for the second argument... I guess overall I'm fairly indifferent to the change. Paul On Mon, 13 Sept 2021 at 13:07, wrote: > > Hi! > > I propose to enhance "itertools.compress" in such a way that if you don't > provide selectors, then "data" itself is used as the selectors. > So "compress(a)" would be equivalent to "compress(a, a)" > > For example: > > >>> from itertools import compress > > >>> [*compress([0, 1, 2, 3])] > [1, 2, 3] > > >>> [*compress(["", "CFLAGS=-O3"])] > ["CFLAGS=-O3"] > > >>> opts = compress([None, "", "-filove-python", "CFLAGS=-O3"]) > >>> " ".join(opts) > '-filove-python CFLAGS=-O3' > > What do you think guys about this? Perhaps it was proposed by someone else? > > Thanks! > Stepan Dyatkovskiy > ___ > Python-ideas mailing list -- python-ideas@python.org > To unsubscribe send an email to python-ideas-le...@python.org > https://mail.python.org/mailman3/lists/python-ideas.python.org/ > Message archived at > https://mail.python.org/archives/list/python-ideas@python.org/message/LVD63OEWRTTU542NKFODLERXCM7LEQ5D/ > Code of Conduct: http://python.org/psf/codeofconduct/ ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/GKJWM6K26OO2NMO6VUK4P3CDUEC6MDAJ/ Code of Conduct: http://python.org/psf/codeofconduct/
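For a concrete (list, not iterator) argument, the existing spelling and the proposed default do agree; a quick check:

```python
from itertools import compress

a = [None, "", "-filove-python", "CFLAGS=-O3"]

# The existing spelling:
print(list(filter(None, a)))  # ['-filove-python', 'CFLAGS=-O3']

# The proposal's compress(a) would mean compress(a, a), which is
# fine for a list, since a list can be iterated over twice:
print(list(compress(a, a)))   # ['-filove-python', 'CFLAGS=-O3']
```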
[Python-ideas] Re: itertools.compress default selectors
On Tue, 14 Sept 2021 at 08:38, m...@dyatkovskiy.com wrote: > Main motivation was a use case where we gather command line options, and some > of them are… optional. [...] > And yet I’m solid that we need some compact and nice way for rendering > strings with command options. That would be a thing. Frankly, I'd just use something like your "jin" function def make_option_string(options): return " ".join(opt for opt in options if opt) Note that I gave it a more readable name that reflects the actual use case. This is deliberate, as I think the main advantage here is readability, and using a name that reflects the use case, rather than a "generic" name, helps readability. That's also why I don't see this as being a useful candidate for the stdlib - it would have to have a "generic" name in that case, which defeats the (for me) main benefit. I find the "sep.join(x for x in it if x)" construction short and readable enough that I'd be unlikely to use a dedicated function for it, even if one existed. And while "x for x in it if x" is annoyingly repetitive, it's not so bad that I'd go hunting for a function to replace it. So for me, I *don't* think we need a dedicated function for this. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/KQNQIJFNPPOKQ7OJ7KMFOSNJDKJIHICV/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Addition of a "plus-minus" binary numeric operator
I doubt it, it seems way too specialised to be worth making into a language feature. If you want to, you can write a function: def limits(a, b): return a+b, a-b Paul On Tue, 14 Sept 2021 at 14:55, wrote: > > Hi all, > > I was wondering on whether there is any interest in introducing a > "plus-minus" operator: > > Conceptually very simple; instead of: > > upper, lower = a + b, a - b > > use instead: > > upper, lower = a +- b > > In recent projects I've been working on, I've been having to do the above > "plus minus" a lot, and so it would simplify/clean-up/reduce error potential > cases where I'm writing the results explicitly. > > It isn't a big thing, but seems like a clean solution, that also takes > advantage of python's inherent ability to return and assign tuples. > ___ > Python-ideas mailing list -- python-ideas@python.org > To unsubscribe send an email to python-ideas-le...@python.org > https://mail.python.org/mailman3/lists/python-ideas.python.org/ > Message archived at > https://mail.python.org/archives/list/python-ideas@python.org/message/MCAS5B63Q6ND74GEBP2N3OF3HLISSQMA/ > Code of Conduct: http://python.org/psf/codeofconduct/ ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/XYRRZPCPWF6VJTBX3MAWCIQEWXJC5F3X/ Code of Conduct: http://python.org/psf/codeofconduct/
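A quick check of the suggested helper, exploiting tuple assignment exactly as the proposal describes:

```python
def limits(a, b):
    # "plus-minus": return (a + b, a - b) as a tuple
    return a + b, a - b

upper, lower = limits(10, 3)
print(upper, lower)  # 13 7
```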
[Python-ideas] Re: itertools.compress default selectors
On Tue, 14 Sept 2021 at 19:58, Tim Peters wrote: > Except it's not that simple: Apologies, Tim, it took me a couple of reads to work out what you were saying here. I hope you won't mind if I restate the point for the benefit of anyone else who might have got confused by it like I did... > def gen(hi): > i = 0 > while i < hi: > yield i > i += 1 > > from itertools import compress > g = gen(12) > print(list(filter(None, g))) > g = gen(12) > print(list(compress(g, g))) > > Which displays: > > [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] > [0, 2, 4, 6, 8, 10] > > The first is obviously intended, but the latter is what you get merely > by giving the same argument twice to `compress()`. The key point here is that the proposal that compress(g) should mean the same as compress(g, g) doesn't actually do what you were suggesting it would (and what you want), if g is an iterator - and after all, that's what itertools is supposed to be about, operations on iterators in general. To actually get the same behaviour as filter(None, g) in general needs a much more complicated change than simply saying "the default is to use the first argument twice". > `compress()` can't materialize its argument(s) into a list (or tuple) > first, because it's intended to work fine with infinite sequences. It > could worm around that like so, under the covers: > > from itertools import tee > g = gen(12) > print(list(compress(*tee(g)))) > > but that's just bizarre ;-) And inefficient. > > Or perhaps the `compress()` implementation could grow internal > conditionals to use a different algorithm if the second argument is > omitted. But that would be a major change to support something that's > already easily done in more than one more-than-less obvious way. At this point, it's no longer a simple change adding a fairly obvious default value for an argument that's currently mandatory; it's actually an important and significant (but subtle) change in behaviour. Honestly, I think this pretty much kills the proposal.
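Tim's example, restated as a runnable demonstration of why the two spellings differ on a one-shot iterator:

```python
from itertools import compress

def gen(hi):
    i = 0
    while i < hi:
        yield i
        i += 1

# filter drops only the falsy values (here, just 0):
print(list(filter(None, gen(12))))
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]

# compress(g, g) pulls data and selectors from the SAME iterator,
# so the values interleave: the (data, selector) pairs are
# (0, 1), (2, 3), (4, 5), ... and every selector happens to be
# truthy, so the even numbers are kept:
g = gen(12)
print(list(compress(g, g)))
# [0, 2, 4, 6, 8, 10]
```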
Thanks for pointing this flaw out Tim, and sorry if I laboured the point you were making :-) Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/4N2UXKQBLAFM3OWVBTQ572SJZOU5PM5G/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: os.workdir() context manager
On Tue, 14 Sept 2021 at 20:44, Marc-Andre Lemburg wrote: > > I sometimes write Python scripts which need to work in specific > work directories. > > When putting such code into functions, the outer function typically > does not expect the current work dir (CWD) to be changed, so I wrap the > code which needs the (possibly) modified CWD using a simple context > manager along the lines of: > > class workdir: > def __init__(self, dir): > self.dir = dir > def __enter__(self): > self.curdir = os.getcwd() > os.chdir(self.dir) > def __exit__(self, *exc): > os.chdir(self.curdir) > return False > > Would there be interest in adding something like this to the os module > as os.workdir() ? I've needed (and implemented my own version of) this often enough that I'd be in favour of it. Of course, as you show, it's not hard to write it yourself, so I can live with it not getting implemented, too :-) Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/I6YFWYNUAV7G6DNPZTBJ6T3SGJDPGBJL/ Code of Conduct: http://python.org/psf/codeofconduct/
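For reference, the same thing can be written with `contextlib` - a sketch equivalent to the class above, with the restore step in a `finally` clause so the original directory comes back even if the body raises:

```python
import os
from contextlib import contextmanager

@contextmanager
def workdir(path):
    # Remember where we were, switch, and always switch back
    prev = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(prev)
```

Usage is the same: `with workdir("/some/dir"): ...` and the previous CWD is restored on exit.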
[Python-ideas] Re: os.workdir() context manager
On Wed, 15 Sept 2021 at 08:02, Christian Heimes wrote: > The "better way" to handle current working directory is to use the > modern *at() variants of syscalls, e.g. openat() instead open(). The > variants take an additional file descriptor dirfd that is used as the > current working directory for the syscall. Just a somewhat off-topic note, but dir_fd arguments are only supported on Unix, and the functionality only appears to be present at the NT Kernel level on Windows, not in the Windows API. So this is only a "better" approach if you are writing Unix-only code. Conversely, the most significant cases where I've wanted a "set the current directory" context manager are to run some code with a temporary directory as CWD, but then make sure I don't leave that as the CWD, because Windows won't let me delete a directory that's the CWD of a running process, so my cleanup will fail if I don't reset the CWD. So the proposed context manager is probably more beneficial for Windows users. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/7DWT26TNJ3HPZ7XAQG3HP74QSQX2ISNC/ Code of Conduct: http://python.org/psf/codeofconduct/
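A sketch of the dir_fd style being discussed, for POSIX systems (the `dir_fd` keyword is accepted by `os.open` and several other `os` functions on Unix; the directory and file names here are purely illustrative):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "data.txt"), "w") as f:
        f.write("hello")

    # Open the directory once, then resolve names relative to that
    # fd instead of changing the process-wide CWD:
    dirfd = os.open(d, os.O_RDONLY)
    try:
        fd = os.open("data.txt", os.O_RDONLY, dir_fd=dirfd)
        with os.fdopen(fd) as f:
            print(f.read())  # hello
    finally:
        os.close(dirfd)
```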
[Python-ideas] Re: PEP Draft: Power Assertion
On Fri, 24 Sept 2021 at 12:05, Noam Tenne wrote: > Caveats > --- > > It is important to note that expressions with side effects are affected by > this feature. This is because in order to display this information, we must > store references to the instances and not just the values. One immediate thought. You should give examples of the sort of expressions that are affected, and precisely what the effect is. It's impossible to judge the importance of this point without details. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/JJ2CT4FYFHJOOSABQOD6KRKCQ22FJ2BI/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: PEP Draft: Power Assertion
On Sat, 25 Sept 2021 at 06:09, Stephen J. Turnbull wrote: > > Guido van Rossum writes: > > > I think this is by far the best option. Pytest can evolve much faster than > > the stdlib. > > Is there no room for making it easier to do this with less invasive > changes to the stdlib, or are Steven d'A's "heroic measures in an > import hook" the right way to go? +1 on this. There are a number of "better exceptions" packages out there (for example, `better_exceptions` :-)) which would benefit from an improved mechanism to introspect failed assertions. A language change to make life easier for all those packages seems like an "obvious" feature to me. From there, it's a relatively small step to having a better *default* assertion reporting mechanism, but the PEP should focus on the machinery first, and the front end second IMO. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/J3SQIJ4USRQ3XSUVOSUR5C6NKA3GLQZ4/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Reusing more builtins for type-hinting
On Fri, 1 Oct 2021 at 15:50, Christopher Barker wrote: > > The fact that the built in “any” is not a type is not an implementation > detail. any() and typing.Any are completely different things/concepts. They > just happen to be spelled the same. Agreed. > I don’t think it’s a bad thing that objects that are specifically about Type > Hinting can be found in the typing module. > > Overloading names in builtins to save an import is a really bad idea. Having to take the runtime cost of an import to do a development-time static analysis of the code, sort of is, though... Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/43ML2YGBXUR5OHKQKPNJNFUB3X6J34LE/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: pathlib update idea: iterable Path that returns subpaths
On Sat, 2 Oct 2021 at 13:22, Aaron Stacy wrote: > > I’d like to propose an enhancement to the wonderful pathlib module: make Path > instances iterable, where the iterator returns subpaths. > > This would be functionally very similar to Path(some_directory).rglob(‘*’). ... which is of course the major argument against this - it's already very easy to do this, so adding *yet another* way of iterating over the contents of a directory (we have Path.iterdir, Path.[r]glob, os.walk, os.scandir, os.listdir, ...) is just making things even more confusing. The counter-argument is "there should be one obvious way" - we definitely don't only have *one* way, at the moment, but none of them are "obvious". My big problem is that I don't think that making Path instances iterable is "obvious", either. What if the path is a file, not a directory? Why are we doing a recursive traversal, not just doing iterdir? If you want an "obvious" (IMO - this is all very subjective, and I'm not Dutch ;-)) approach, I'd argue that Path.iterdir(recursive=True) would be a more reasonable place for this functionality. But I know that Guido doesn't like functions that behave differently based on an argument that is typically always supplied as a literal value, so maybe my intuition isn't correct. There's also a lot of design decisions around things like whether to follow symlinks, how permission problems should be handled, etc. Clearly we could just say that we do what rglob("*") does, but then we're back to why we need something else that just does what rglob("*") does... I have some sympathy with the idea that rglob("*") isn't very discoverable, and os.walk is over-complex, but I'm not convinced this proposal is the right solution, either. 
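For comparison, the existing spellings on a throwaway tree (the directory and file names are illustrative):

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "sub").mkdir()
    (root / "a.txt").write_text("x")
    (root / "sub" / "b.txt").write_text("y")

    # Recursive traversal, directories included:
    print(sorted(p.name for p in root.rglob("*")))
    # ['a.txt', 'b.txt', 'sub']

    # Non-recursive, one level only:
    print(sorted(p.name for p in root.iterdir()))
    # ['a.txt', 'sub']
```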
Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/O65XB4ZJ6K6DA7SBLARE3Q45WUQUINR2/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Adding a Version type
On Sun, 10 Oct 2021 at 03:20, Finn Mason wrote: > > Hello all, > > I was thinking about the following idioms: > > __version__ = "1.0" > > import sys > if sys.version_info < (3, 6): > # Yell at the user > > > I feel like there could be a better way to define versions. There's no real > standard way to specify versions in Python, other than "use semantic > versioning." This can cause problems such as: There is - it's defined in PEP 440. Python's packaging tools follow this specification, so if you don't conform to it your package is potentially not installable. It doesn't mandate semantic versioning - PEP 440 is also compatible with CalVer, for example. > * Uncertainty of the type of the version (commence the pattern matching! Or > `if... elif` statements if that's your thing) > * No real way to specify an alpha, beta, pre, or post version number in a > tuple of ints, e.g. `(2, 1, 0, "post0")` doesn't really work. > > I propose a new `Version` type, a standard way to define versions. The > signature would look something like `Version(major: int, minor: int = 0, > patch: Optional[int] = None, alpha: Optional[int] = None, beta: Optional[int] > = None, pre: Optional[int] = None, post: Optional[int] = None)`. You should look at the `packaging` library on PyPI, which is a complete implementation of many of the packaging standards, including PEP 440 versions. > To maintain backwards compatibility, comparisons such as `Version(1, 2) == > (1, 2)` and `Version(1, 2) == "1.2"` will return `True`. `str(Version(1, 2))` > will return `"1.2"` (to clarify, if `patch` were 0 instead of None, it would > return `"1.2.0"`). There will be an `as_tuple()` method to give the version > number as a tuple (maybe named tuple?), e.g. `Version(1, 2).as_tuple()` will > return `(1, 2)`. PEP 440 also includes standards for saying things like "== 3.6.*" or ">= 3.6". 
These are typically better for handling version constraints than simple tuple or string comparisons (they have well-defined semantics for pre- and post-release versions, for example). > A problem is that not all versioning systems are covered by this proposal. > For example, I've done some programming in C#, and many C# programs use a > four number system instead of the three number system usually used in Python > code, i.e. major.minor.patch.bugfix instead of major.minor.patch. (This may > not seem very different, but it often is.) PEP 440 covers all of these cases (and probably a lot more, as well). > Where to place this type is an open question. I honestly have no good ideas. > This could just be implemented as a PyPI package, but there may not be a > point in adding a whole dependency to use a semantic type that other people > probably aren't using (which wouldn't be the case if it were part of core > Python). It is on PyPI, and is commonly used by packaging tools. Whether it's something that should be in the stdlib, I'll pass on for now - any proposal to move it into the stdlib would need to be supported by the owners of the `packaging` project. > This would be implemented in Python 3.11 or 3.12, but I'd recommend a > backport on PyPI if implemented. As I say, it already exists on PyPI, so the proposal should instead be framed in terms of moving that project into the stdlib, if that's what you feel is necessary (and the project maintainers are willing). Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/NPCM4VN3ALBTUR7RR7RLG3NPEFOPNGG3/ Code of Conduct: http://python.org/psf/codeofconduct/
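A stdlib-only illustration of why the naive comparisons mentioned above fall short, which is exactly what PEP 440-aware parsing (as in the `packaging` library) exists to fix:

```python
# Lexicographic string comparison misorders versions:
print("1.10" < "1.9")   # True - wrong for version semantics!

# Tuples of ints order correctly...
print((1, 10) > (1, 9))  # True

# ...but mixing in pre/post markers breaks tuple comparison outright,
# because ints and strs don't compare:
try:
    (2, 1, 0, "post0") < (2, 1, 0, 1)
except TypeError:
    print("int/str comparison raises TypeError")
```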
[Python-ideas] Re: dict_items.__getitem__?
On Sun, 10 Oct 2021 at 05:06, Finn Mason wrote: > > Let's get back to the original topic. Should `dict.items()` be indexable now > that dicts are ordered? I say yes. Why shouldn't it? I say no. "Why shouldn't it?" isn't sufficient justification for a change. Because it costs someone time and effort to implement it, and that time and effort is wasted unless people *actually use it*. Because no convincing use cases have been presented demonstrating that it would improve real-world code. Because dictionaries (mappings) and lists (sequences) are intended for different purposes. Because no-one is willing to implement this idea. Consider: "Should lists be indexable by arbitrary values, not just by integers? I say yes. Why shouldn't they?" "Should tuples be mutable? I say yes. Why shouldn't they?" "Should integers be allowed to have complex parts? I say yes. Why shouldn't they?" It's up to the person proposing a change to explain why the change *should* happen - not to everyone else to have to explain why it shouldn't. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/DXHJHW2WWXUILG3YVZPRNEEHNNG3OFWY/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Adding a Version type
On Mon, 11 Oct 2021 at 13:05, Chris Angelico wrote: > In any case, there's not a lot of need to support Python 2 any more, > so most of this sort of check doesn't exist in my code any more. ... and in particular, "useful to help with code that needs to support Python 2" won't work as an argument for having a version type in the (Python 3.11+) stdlib :-) Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/F6QGA6AOM2NSZI5R3YXAYXPH2NIS52GJ/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Implementing additional string operators
On Wed, 13 Oct 2021 at 19:02, <2qdxy4rzwzuui...@potatochowder.com> wrote: > So aside from filename extensions, what are the real use cases for > suffix removal? Plurals? No, too locale-dependent and too many > exceptions. Whitespace left over from external data? No, there's > already other functions for that (and regexen and actual parsers if > they're not good enough). Directory traversal? No, that's what path > instances and the os module are for. I think this is a good point. Is removesuffix really useful enough to warrant having an operator *as well as* a string method? It was only added in 3.9, so we've been managing without it at all for years, after all... Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/WFS5LM2ZMBUZS7VWELVIFHRNPYXYZN5I/ Code of Conduct: http://python.org/psf/codeofconduct/
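For reference, the method in question (added in 3.9) and the pre-3.9 idiom that was "managed without" look like this (a sketch; the filenames are made up):

```python
s = "report.txt"

# str.removesuffix (Python 3.9+): strips the suffix if present,
# otherwise returns the string unchanged.
assert s.removesuffix(".txt") == "report"
assert s.removesuffix(".csv") == "report.txt"

# The pre-3.9 spelling of the same operation:
def remove_suffix(s, suffix):
    return s[:-len(suffix)] if suffix and s.endswith(suffix) else s

assert remove_suffix(s, ".txt") == "report"
```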
[Python-ideas] Re: Structure Pattern for annotation
On Thu, 14 Oct 2021 at 13:04, Ricky Teachey wrote:
>
> I think all of this additional syntax is just a mistake.
>
> The reason is it will encourage people to not properly annotate their input
> types for duck typing. Some of these shortcuts might be nice for output
> types. But the more general typing.Mapping, typing.Sequence and friends
> should be preferred for input types. If terse shortcuts are available for the
> concrete data structure types, but not for the generic types, a lot of people
> are going to feel nudged to type hint their python improperly.

+1. I'm not sure how much of my reservations about this whole discussion are ultimately reservations about typing in general, but I feel that the more we make it easier to express "exact" types, the more we encourage people to constrain their APIs to take precise types rather than to work with duck types. (I saw an example recently where even Mapping was over-specified - all that was needed was __getitem__, not even __len__ or __iter__.)

I know protocols allow duck typing in a static type checking context - maybe the energy focused on "making basic types easier to write" should be focused on making protocols easier to write, instead.

Paul
___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/KALBNJN2T4HAZFK4PZUV5RGAKXAVWCMD/ Code of Conduct: http://python.org/psf/codeofconduct/
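To illustrate the point about protocols: `typing.Protocol` (3.8+) already lets you say "all I need is `__getitem__`", it's just more verbose than naming a concrete type. A minimal sketch (the names are invented for the example):

```python
from typing import Protocol

class SupportsGetItem(Protocol):
    """Structural type: anything indexable by str yielding int."""
    def __getitem__(self, key: str) -> int: ...

def lookup(data: SupportsGetItem, key: str) -> int:
    # Only __getitem__ is required - no __len__, no __iter__.
    return data[key]

print(lookup({"answer": 42}, "answer"))  # 42
```

A plain dict satisfies the protocol structurally; so would any custom class with a compatible `__getitem__`, with no inheritance needed.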
[Python-ideas] Re: Type-hinting dictionaries for an arbitrary number of arbitrary key/value pairs? Counterpart to PEP 589?
Mypy correctly rejects this:

❯ type .\t.py
from numbers import Number
from typing import Dict

Data = Dict[str, Number]

def foo(bar: Data):
    print(bar)
    bar[1.0] = b'hello'

PS 17:48 00:00.008 C:\Work\Scratch\foo
❯ mypy .\t.py
t.py:9: error: Invalid index type "float" for "Dict[str, Number]"; expected type "str"
t.py:9: error: Incompatible types in assignment (expression has type "bytes", target has type "Number")
Found 2 errors in 1 file (checked 1 source file)

If typeguard doesn't, maybe you need to raise that as a bug against that project?

Paul

On Fri, 15 Oct 2021 at 17:20, Sebastian M. Ernst wrote:
>
> Hi all,
>
> disclaimer: I have no idea on potential syntax or if it is applicable to
> a wide audience or if there is already a good solution to this. It is
> more like a "gap" in the type hint spec that I ran across in a project.
>
> In function/method signatures, I can hint at dictionaries for example as
> follows:
>
> ```python
> from numbers import Number
> from typing import Dict
>
> from typeguard import typechecked
>
> Data = Dict[str, Number]
>
> @typechecked
> def foo(bar: Data):
>     print(bar)
> ```
>
> Yes, this is using run-time checks (typeguard), which works just fine.
> Only strings as keys and Number objects as values are going through. (I
> was told that MyPy does not get this at the moment.)
>
> The issue is that `bar` itself still allows "everything" to go in (and out):
>
> ```python
> @typechecked
> def foo2(bar: Data):
>     bar[1.0] = b'should not be allowed'
> ```
>
> PEP 589 introduces typed dictionaries, but for a fixed set of predefined
> keys (similar to struct-like constructs in other languages). In
> contrast, I am looking for an arbitrary number of typed keys/value pairs.
> > For reference, related question on SO: > https://stackoverflow.com/q/69555006/1672565 > > Best regards, > Sebastian > ___ > Python-ideas mailing list -- python-ideas@python.org > To unsubscribe send an email to python-ideas-le...@python.org > https://mail.python.org/mailman3/lists/python-ideas.python.org/ > Message archived at > https://mail.python.org/archives/list/python-ideas@python.org/message/DY5DPGWXLOOP3NQYSD73S53EMTRNKD74/ > Code of Conduct: http://python.org/psf/codeofconduct/ ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/FOJST3EZKQMK2R342OWN5M3VPEBNYXSC/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Type-hinting dictionaries for an arbitrary number of arbitrary key/value pairs? Counterpart to PEP 589?
On Fri, 15 Oct 2021 at 18:07, Sebastian M. Ernst wrote:
> Ignoring typeguard, my suggestion still stands, although slightly
> changed: Annotating a dictionary as described earlier in such a way that
> type inference is not required OR in such a way that run-time checkers
> have a chance to work more easily - if this makes any sense at all?

I'm not sure what your suggestion actually is, though. "I am looking for an arbitrary number of typed keys/value pairs." - isn't that Mapping[str, float] (or something similar)?

I see no value in making the programmer do more work so that type checkers can do less type inference. Far from it, I'd like to have as few annotations as possible, and have the type checkers do *more* work for me. Ideally, I feel that I should only have to annotate the bare minimum, and let the checker do the rest.

Paul
___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/LWYFV4QTCFFSE7C6JBIBREUXKAUB35FJ/ Code of Conduct: http://python.org/psf/codeofconduct/
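In other words, a generic mapping annotation already expresses "arbitrarily many typed key/value pairs" - only the key and value types are constrained, not the size. A minimal sketch (the function and data are invented for illustration):

```python
from numbers import Number
from typing import Mapping

def total(data: Mapping[str, Number]) -> Number:
    # Any number of str keys, each mapped to a Number.
    return sum(data.values())

print(total({"a": 1, "b": 2.5}))  # 3.5
```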
[Python-ideas] Re: Type-hinting dictionaries for an arbitrary number of arbitrary key/value pairs? Counterpart to PEP 589?
On Sat, 16 Oct 2021 at 14:07, Alex Waygood wrote: > > Indeed — we essentially lie to mypy about the method resolution order for > list, dict, etc (mypy thinks that list directly inherits from > collections.abc.MutableSequence — see the typeshed stub here: > https://github.com/python/typeshed/blob/32bc2161a107db20c2ebc85aad31c29730db3e38/stdlib/builtins.pyi#L746), > but numeric ABCs are not special-cased by typeshed in the same way as > collections ABCs. Fundamentally, mypy has no knowledge about virtual > subclassing. IMO, that's a bug in mypy. Maybe the fix is "too hard", maybe it isn't. But if we added runtime isinstance checks, they would pass, so arguing that the type is wrong or that the variable "needs" to be typed differently is incorrect. I guess my argument here is that flagging an error when you're not 100% sure the code is wrong is a problem - "in the case of ambiguity, refuse the temptation to guess" seems relevant here. Personally, I'd be unwilling to mess around with the typing in a situation like this - I'd be more likely to just remove the types (the type checking equivalent of `#noqa` when you don't agree with what your style checker says). I'd much prefer mypy to miss the odd problem rather than flagging usages that aren't actually errors, precisely because you don't (as far as I know) have a mechanism for telling it you know what you're doing... But this is probably off-topic. I'm not 100% sure what the OP's proposal was, but as far as I can tell it seems to me that "dict[str, Number] is the correct usage for static typing" is the answer, regardless of mypy's ability to process it. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/WNR3A6YSC4PPLMZOTAYRFO5DYZ6PXQYX/ Code of Conduct: http://python.org/psf/codeofconduct/
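The runtime behaviour referred to above is easy to demonstrate: isinstance/issubclass checks pass via ABC virtual subclassing, yet nothing in the class's MRO records the relationship, which is roughly all a checker without virtual-subclass support can see. A minimal sketch:

```python
from numbers import Number

# int is registered as a virtual subclass of Number, so runtime
# checks pass...
assert isinstance(1, Number)
assert issubclass(int, Number)

# ...yet Number appears nowhere in int's actual MRO.
print(int.__mro__)  # (<class 'int'>, <class 'object'>)
assert Number not in int.__mro__
```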
[Python-ideas] Re: Type-hinting dictionaries for an arbitrary number of arbitrary key/value pairs? Counterpart to PEP 589?
On Sat, 16 Oct 2021 at 18:01, Steven D'Aprano wrote: > > On Sat, Oct 16, 2021 at 02:49:42PM +0100, Paul Moore wrote: > > > I'd be more likely to just remove the types (the type checking > > equivalent of `#noqa` when you don't agree with what your style > > checker says). > > Type annotations are still useful to the human reader, even if the type > checker is absent or wrong. > > I presume that mypy does support some "skip this" directive? If not, it > should. My impression (from the projects I've worked on using typing) was that it didn't. But I checked and I was wrong: `# type: ignore` does exactly that. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/YLGIGKYKNMCKWFHGLONT7L4R67DDH7MX/ Code of Conduct: http://python.org/psf/codeofconduct/
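For completeness, the directive is a per-line comment; the flagged statement remains perfectly legal at runtime, mypy is simply told to skip it. A sketch reusing the earlier example:

```python
from numbers import Number
from typing import Dict

Data = Dict[str, Number]

def foo(bar: Data) -> None:
    # mypy would flag both the key type (float) and the value type
    # (bytes) here, but the directive suppresses the errors:
    bar[1.0] = b'hello'  # type: ignore

d: Data = {}
foo(d)
print(d)  # {1.0: b'hello'}
```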
[Python-ideas] Re: Real Positional Arguments or OO Currying
On Mon, 18 Oct 2021 at 11:20, Mathew Elman wrote:
>
> I don't know if this has been suggested before, or if this is outlandishly
> impossible (though I would be surprised if it was), so apologies in advance
> if so.
>
> I have on occasion come across a situation where I use/write a signature like
> this:
>
> def insert_x_into_y(x, y):
>     ...
>
> or worse
>
> def insert_into(item, container):
>     ...
>
> where, despite a driving idea of python syntax being readability in english,
> the function signature is distinctly not english.
> "I'll just go and insert into this item that container", is not only never
> said but is actually ambiguous in english.
>
> What would be really cool, is if python let you write function signatures
> like this:
>
> def insert_(item)_into_(container):
>     ...
>
> where the arguments dispersed between the function name are positional only
> argument, and any key word arguments would have to go at the end.

If you care enough, you could create an API that looked like this:

    insert(1).into(my_list)

The `insert` function would create an object that had an `into` method that did the actual work. Personally, I think that sort of API is taking things too far, and I wouldn't use it. Apart from anything else, it "steals" extremely common words like "insert" for your specific API. But if you want to do it, you can - without needing any change to Python.

> It would create a function that could be called as:
>
> insert_(1)_into_(my_list)
>
> or
>
> insert__into_(1, my_list)
>
> The purpose of allowing both should be obvious - so that the function can be
> referenced and called in other places.

If you want `insert__into_` as well, just do

    def insert__into(x, y):
        return insert(x).into(y)

But why would you? It's ugly if spelled like that, and your whole argument is that the "interspersed arguments" form is better. If you just want to pass the function to something that expects "normal" argument conventions, lambda x,y: insert(x).into(y) does what you want.
> (Rather than just skipping the brackets the function call with only the end > parentheses could have a special stand in character e.g. ., ?, !, _ or other > if that was more preferred.) > > This sort of signature is particularly annoying for boolean checks like > `isinstance` (N.B. I am _not_ suggesting changing any builtins), which one > could wrap with: > > def is_(obj)_an_instance_of_(type): > return isinstance(obj, type) I've never heard anyone else suggest anything like this, so you might want to consider that the annoyance you feel is not a common reaction... > For precedence in other languages, this is similar to curried functions in > functional languages e.g Haskell, especially if each part of a function were > to be callable, which would be up for debate. > > Allowing each part to be called would make sense if each "next" partial > function were an attribute on the previous and what it returned, making it a > sort of object oriented currying. > Then the syntax could be with a `.`: > > def is_(obj)._an_instance_of_(type): > ... > > is_(1)._an_instance_of_(int) > is_._an_instance_of_(1,int) Yes, that's something like what I'm suggesting. Given that this can be done already in Python, I don't think there's anything like enough justification for special language support for it. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/NDZENHII3TOSTYW2U5MCZ6FP7WQUVNMJ/ Code of Conduct: http://python.org/psf/codeofconduct/
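The fluent `insert(1).into(my_list)` style discussed in this exchange can be sketched in a few lines of today's Python (hypothetical names; as noted, probably not an API worth shipping):

```python
class _Inserter:
    """Intermediate object returned by insert(); holds the item."""
    def __init__(self, item):
        self.item = item

    def into(self, container):
        # .into() does the actual work.
        container.append(self.item)
        return container

def insert(item):
    # insert(x) just captures x and defers to .into(y).
    return _Inserter(item)

my_list = [2, 3]
insert(1).into(my_list)
print(my_list)  # [2, 3, 1]
```

No language change is required; the cost is one small helper class per "interspersed" function.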
[Python-ideas] Re: Real Positional Arguments or OO Currying
On Mon, 18 Oct 2021 at 12:49, Mathew Elman wrote:
>
> The point is that at the moment to set this sort of api up requires a lot of
> work, defeating 50% of the value i.e. to define a new function with the
> attribute access behaviour requires defining each individual function to
> return a middle step object that has the attribute of the next function, so
> you can't define a function in plain english.

Adding a new feature to a language is *even more* work, though. The reason (one of the reasons) we'd add a feature to the language is because it would get used so often that the repeated saving outweighs the initial (and ongoing) cost of the language feature.

> e.g.
>
> def insert_into(x, y):
>     ...
>
> def insert(x):
>     class Return:
>         def into(self, y):
>             return insert_into(x, y)
>     return Return()
>
> insert.into = insert_into
>
> is a very long way to say:
>
> def insert(x)._into(y):
>     ...
>
> and that is without the actual logic and for only 2 positional args.
>
> > But why would you? It's ugly if spelled like that, and your whole argument
> > is that the "interspersed arguments" form is better. If you just want to
> > pass the function to something that expects "normal" argument conventions,
> > lambda x,y: insert(x).into(y) does what you want.
>
> The point is so that in code that expects dynamically called functions or to
> be able to reference the function by name it needs to have a single name that
> follows backward compatible naming conventions. I would be happy with it
> being on the onus of the developer in question to add a wrapping function,
> less happy than if it was added by default but it would still be a saving
> (and could maybe be in a decorator or something).

So the automatic definition of the extra name is purely because there's no other way to pass these types of function around as first-class objects? What about other aspects of first class functions - would you be able to introspect these new types of function?
Extract the places in the name where the arguments can be interposed? If not, why not? How would a call like foo_(x)_bar(y) be parsed into the AST? Would the original form be recoverable (black, for example, would want this)? Also, are the underscores part of the syntax? Is foo(x)bar(y) a single function call using your new syntax? If not, why not? You would be making underscores into special syntax otherwise.

> > I've never heard anyone else suggest anything like this, so you might want
> > to consider that the annoyance you feel is not a common reaction...
>
> I know lots of people that have had this reaction but just shrugged it off as
> "the way things are", which would seem like a good way to stagnate a
> language, so I thought I would ask.

There seem to be a lot of open design questions. Do you have any examples of prior art - languages that implement this type of syntax, and otherwise have the sort of capabilities that Python does? (As David Mertz noted, Cobol had this sort of "English like" syntax, but it didn't have first class function objects.)

> I see this argument used for python in this list (and in the wild) a lot i.e.
> that it should be readable in English

That's an over-simplification, and TBH I suspect that most people using the argument know that. Python should be readable in the sense that it should have a generally natural looking syntax, use keywords that read naturally in English, and generally be accessible to people familiar with English. It does *not* mean that English word order must be adhered to, or that technical or abbreviated terms cannot be used (we use "def" rather than "define", and "class" means something very different from the non-computing meaning).
Taking "readable in English" to its "logical" conclusion results in "ADD 7 TO X" rather than "x = x + 7" and if you think that a proposal to add that syntax to Python would be well-received, you've badly misjudged both the design principles of Python and the attitude of this mailing list... Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/Y435OHRF5E4LGLMZX4GX7HVOB65YAROY/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: Syntax for late-bound arguments
On Sat, 23 Oct 2021 at 17:09, Chris Angelico wrote: > > Proposal: Proper syntax and support for late-bound argument defaults. > > def spaminate(thing, count=:thing.getdefault()): > ... > > def bisect(a, x, lo=0, hi=:len(a)): > if lo < 0: > raise ValueError('lo must be non-negative') > +1 from me. I agree that getting a good syntax will be tricky, but I like the functionality. I do quite like Guido's "hi=>len(a)" syntax, but I admit I'm not seeing the potential issues he alludes to, so maybe I'm missing something :-) Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/7Y2O3XTWCSBPVISSXJKCXKK7VDTVKH6H/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: PEP 671: Syntax for late-bound function argument defaults
This should probably reference PEP 661 (Sentinel Values) which is being discussed on Discourse: https://discuss.python.org/t/pep-661-sentinel-values/9126

It's a different proposal, but one of the major motivating use cases (if not the only one) for sentinels is handling function default values that can't be expressed at definition time. So how the two proposals interact should be discussed *somewhere*, IMO.

Personally I'd choose to support this proposal, and take the view that it weakens the need for PEP 661 to the point where I'd prefer not to bother with that proposal.

Paul

On Sun, 24 Oct 2021 at 01:15, Chris Angelico wrote:
>
> Incorporates comments from the thread we just had.
>
> Is anyone interested in coauthoring this with me? Anyone who has
> strong interest in seeing this happen - whether you've been around the
> Python lists for years, or you're new and interested in getting
> involved for the first time, or anywhere in between!
>
> https://www.python.org/dev/peps/pep-0671/
>
> PEP: 671
> Title: Syntax for late-bound function argument defaults
> Author: Chris Angelico
> Status: Draft
> Type: Standards Track
> Content-Type: text/x-rst
> Created: 24-Oct-2021
> Python-Version: 3.11
> Post-History: 24-Oct-2021
>
>
> Abstract
> ========
>
> Function parameters can have default values which are calculated during
> function definition and saved. This proposal introduces a new form of
> argument default, defined by an expression to be evaluated at function
> call time.
>
>
> Motivation
> ==========
>
> Optional function arguments, if omitted, often have some sort of logical
> default value. When this value depends on other arguments, or needs to be
> reevaluated each function call, there is currently no clean way to state
> this in the function header.
>
> Currently-legal idioms for this include::
>
>     # Very common: Use None and replace it in the function
>     def bisect_right(a, x, lo=0, hi=None, *, key=None):
>         if hi is None:
>             hi = len(a)
>
>     # Also well known: Use a unique custom sentinel object
>     _USE_GLOBAL_DEFAULT = object()
>     def connect(timeout=_USE_GLOBAL_DEFAULT):
>         if timeout is _USE_GLOBAL_DEFAULT:
>             timeout = default_timeout
>
>     # Unusual: Accept star-args and then validate
>     def add_item(item, *optional_target):
>         if not optional_target:
>             target = []
>         else:
>             target = optional_target[0]
>
> In each form, ``help(function)`` fails to show the true default value. Each
> one has additional problems, too; using ``None`` is only valid if None is not
> itself a plausible function parameter, the custom sentinel requires a global
> constant; and use of star-args implies that more than one argument could be
> given.
>
> Specification
> =============
>
> Function default arguments can be defined using the new ``=>`` notation::
>
>     def bisect_right(a, x, lo=0, hi=>len(a), *, key=None):
>     def connect(timeout=>default_timeout):
>     def add_item(item, target=>[]):
>
> The expression is saved in its source code form for the purpose of inspection,
> and bytecode to evaluate it is prepended to the function's body.
>
> Notably, the expression is evaluated in the function's run-time scope, NOT the
> scope in which the function was defined (as are early-bound defaults). This
> allows the expression to refer to other arguments.
>
> Self-referential expressions will result in UnboundLocalError::
>
>     def spam(eggs=>eggs): # Nope
>
> Multiple late-bound arguments are evaluated from left to right, and can refer
> to previously-calculated values. Order is defined by the function, regardless
> of the order in which keyword arguments may be passed.
>
>
> Choice of spelling
> ------------------
>
> Our chief syntax proposal is ``name=>expression`` -- our two syntax proposals
> ... ahem.
> Amongst our potential syntaxes are::
>
>     def bisect(a, hi=>len(a)):
>     def bisect(a, hi=:len(a)):
>     def bisect(a, hi?=len(a)):
>     def bisect(a, hi!=len(a)):
>     def bisect(a, hi=\len(a)):
>     def bisect(a, hi=`len(a)`):
>     def bisect(a, hi=@len(a)):
>
> Since default arguments behave largely the same whether they're early or late
> bound, the preferred syntax is very similar to the existing early-bind syntax.
> The alternatives offer little advantage over the preferred one.
>
> How to Teach This
> =================
>
> Early-bound default arguments should always be taught first, as they are the
> simpler and more efficient way to evaluate arguments. Building on them, late
> bound arguments are broadly equivalent to code at the top of the function::
>
>     def add_item(item, target=>[]):
>
>     # Equivalent pseudocode:
>     def add_item(item, target=):
>         if target was omitted: target = []
>
>
> Open Issues
> ===========
>
> - yield/await? Will they cause problems? Might end up being a non-issue.
>
> - annotations? They go before the default, so is there any way an anno could
> w
[Python-ideas] Re: PEP 671: Syntax for late-bound function argument defaults
On Tue, 26 Oct 2021 at 16:48, Eric V. Smith wrote:
>
> And also the "No Loss of Abilities If Removed" section sort of applies
> to late-bound function arguments: there's nothing proposed that can't
> currently be done in existing Python. I'll grant you that they might
> (might!) be more newbie-friendly, but I think the bar is high for
> proposals that make existing things doable in a different way, as
> opposed to proposals that add new expressiveness to the language.

One issue with not having an introspection capability, which has been bothering me but I've not yet had the time to come up with a complete example, is the fact that with this new feature, you have functions where there's no way to express "just use the default" without knowing what the default actually *is*.

Take for example

    def f(a, b=None):
        if b is None:
            b = len(a)
        ...

    def g(a, b=>len(a)):
        ...

Suppose you want to call f as follows:

    args = [
        ([1,2,3], 2),
        ([4,5,6], None),
        ([7,8,9], 4),
    ]

    for a, b in args:
        f(a, b)

That works fine. But you cannot replace f by g, because None doesn't mean "use the default", and in fact by design there's *nothing* that means "use the default" other than "know what the default is and supply it explicitly". So if you want to do something similar with g (allowing the use of None in the list of tuples to mean "use the default"), you need to be able to introspect g to know what the default is. You may also need to manipulate first-class "deferred expression" objects as well, just to have something you can return as the default value (you could return a string and require the user to eval it, I guess, but that doesn't seem particularly user-friendly...)

I don't have a good solution for this, unfortunately. And maybe it's something where a "good enough" solution would be sufficient. But definitely, it should be discussed in the PEP so what's being proposed is clear.
Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/YY5NAIKYXMNS2645SBSTYT3UXQYN2WBL/ Code of Conduct: http://python.org/psf/codeofconduct/
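The contrast described in the message above can be run directly with today's None-sentinel idiom, where the caller *can* spell "use the default" in data; the values are taken from the example in the thread:

```python
def f(a, b=None):
    # None acts as an explicit, spellable "use the default" marker.
    if b is None:
        b = len(a)
    return b

args = [
    ([1, 2, 3], 2),
    ([4, 5, 6], None),   # "use the default" expressed as data
    ([7, 8, 9], 4),
]

print([f(a, b) for a, b in args])  # [2, 3, 4]
```

With the proposed `b=>len(a)` form there is, by design, no value that plays the role None plays here.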
[Python-ideas] Re: PEP 671: Syntax for late-bound function argument defaults
On Tue, 26 Oct 2021 at 19:25, Chris Angelico wrote:
>
> On Wed, Oct 27, 2021 at 5:05 AM Paul Moore wrote:
> >
> > On Tue, 26 Oct 2021 at 16:48, Eric V. Smith wrote:
> > >
> > > And also the "No Loss of Abilities If Removed" section sort of applies
> > > to late-bound function arguments: there's nothing proposed that can't
> > > currently be done in existing Python. I'll grant you that they might
> > > (might!) be more newbie-friendly, but I think the bar is high for
> > > proposals that make existing things doable in a different way, as
> > > opposed to proposals that add new expressiveness to the language.
> >
> > One issue with not having an introspection capability, which has been
> > bothering me but I've not yet had the time to come up with a complete
> > example, is the fact that with this new feature, you have functions
> > where there's no way to express "just use the default" without knowing
> > what the default actually *is*.
> >
> > Take for example
> >
> >     def f(a, b=None):
> >         if b is None:
> >             b = len(a)
> >         ...
> >
> >     def g(a, b=>len(a)):
> >         ...
> >
> > Suppose you want to call f as follows:
> >
> >     args = [
> >         ([1,2,3], 2),
> >         ([4,5,6], None),
> >         ([7,8,9], 4),
> >     ]
> >
> >     for a, b in args:
> >         f(a, b)
> >
> > That works fine. But you cannot replace f by g, because None doesn't
> > mean "use the default", and in fact by design there's *nothing* that
> > means "use the default" other than "know what the default is and
> > supply it explicitly". So if you want to do something similar with g
> > (allowing the use of None in the list of tuples to mean "use the
> > default"), you need to be able to introspect g to know what the
> > default is. You may also need to manipulate first-class "deferred
> > expression" objects as well, just to have something you can return as
> > the default value (you could return a string and require the user to
> > eval it, I guess, but that doesn't seem particularly user-friendly...)
> > > > I don't have a good solution for this, unfortunately. And maybe it's > > something where a "good enough" solution would be sufficient. But > > definitely, it should be discussed in the PEP so what's being proposed > > is clear. > > > > Wouldn't cases like this be most likely to use *args and/or **kwargs? > Simply omitting the argument from those would mean "use the default". > Or am I misunderstanding your example here? Maybe. I don't want to make more out of this issue than it warrants. But I will say that I genuinely would write code like I included there. The reason comes from the fact that I do a lot of ad-hoc scripting, and "copy this text file of data, edit it to look like a Python list, paste it into my code" is a very common approach. I wouldn't bother writing anything generic like f(*args), because it's just a "quick hack". (I'd rewrite it to use f(*args) when I'd done the same copy/paste exercise 20 times, and I was fed up enough to insist that I was allowed the time to "write it properly this time" ;-)) Having the code stop working just because I changed the way the called function handles its defaults would be a nuisance (again, I'm not saying this would happen often, just that I could easily imagine it happening if this feature was available). The function f might well be from a library, or a script, that I wrote for something else, and it might be perfectly reasonable to use the new feature for that other use case. There's no doubt that this is an artificial example - I'm not trying to pretend otherwise. But it's a *plausible* one, in the sort of coding I do regularly in my day job. And it's the sort of awkward edge case where you'd expect there to be support for it, by analogy with similar features elsewhere, so it would feel like a "wart" when you found out you couldn't do it. As I said, I'm not demanding a solution, but I would like to see it acknowledged and discussed in the PEP, just so the trade-offs are clear to people. 
Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/A4IVU75OU5D4KTQP3OOR2V7DKA5E22EN/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: PEP 671: Syntax for late-bound function argument defaults
On Sat, 30 Oct 2021 at 23:13, Brendan Barnwell wrote: > > On 2021-10-30 15:07, David Mertz, Ph.D. wrote: > > I'm -100 now on "deferred evaluation, but contorted to be useless > > outside of argument declarations." > > > > At first I thought it might be harmless, but nothing I really care > > about. After the discussion, I think the PEP would be actively harmful > > to future Python features. > > I'm not sure I'm -100, but still a hard -1, maybe -10. > > I agree it seems totally absurd to add a type of deferred expression > but restrict it to only work inside function definitions. That doesn't > make any sense. If we have a way to create deferred expressions we > should try to make them more generally usable. I was in favour of the idea, but having seen the implications I'm now -0.5, moving towards -1. I'm uncomfortable with *not* having a "proper" mechanism for building signature objects and other introspection (I don't consider having the expression as a string and requiring consumers to eval it, to be "proper"). And so, I think the implication is that this feature would need some sort of real deferred expression to work properly - and I'd rather deferred expressions were defined as a standalone mechanism, where the full range of use cases (including, but not limited to, late-bound defaults!) can be considered. Paul ___ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-le...@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/3UNMX3BEFWA2HJMKQSPVGC574QCWP5UV/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: PEP 671: Syntax for late-bound function argument defaults
On Sun, 31 Oct 2021 at 12:45, Eric V. Smith wrote: > > I think it's safe to say people are opposed to the PEP as it currently > stands, not in its final, as yet unseen, shape. But I'm willing to use > other words than "I'm -1 on PEP 671". You can read my opposition as "as > it currently stands, I'm -1 on PEP 671". Same for me. > > As for what seems like one major issue: > > > > Yes, this is a kind of "deferred" evaluation, but it is not a general > > purpose one, and that, I think, is the strength of the proposal, it's > > small and specific, and, most importantly, the scope in which the > > expression will be evaluated is clear and simple. > > And to me and others, what you see as a strength, and seem opposed to > changing, we see as a fatal flaw. > > What if the walrus operator could only be used in "for" loops? What if > f-strings were only available in function parameters? What if decorators > could only be used on free-standing functions, but not on object methods? > > In all of these cases, what could be a general-purpose tool would have > been restricted to one specific context. That would make the language > more confusing to learn. I feel you're proposing the same sort of thing > with late-bound function argument defaults. And I think it's a mistake. I agree with Eric. I can see the value in a small and specific proposal, if there's a small and specific issue involved. But as the discussions have progressed, it seems to me that the small and specific issue that we *thought* was involved, has wider and more general implications: 1. There's a broader question of being able to tell if an argument was left unspecified in the call. Defaulting to None, and using a sentinel value, both try to address this in specific limited ways. PEP 671 is addressing the same issue, but from a different angle, because the code to supply the late bound default has to say, in effect, "if this argument wasn't supplied, evaluate the late-bound expression and use that as a default". 
That suggests to me that a better mechanism than "use (*args, **kwargs) and check if the argument was supplied" would be generally useful, and PEP 671 might just be another workaround that could be better fixed by addressing that need directly. 2. The question of when the late-bound default gets evaluated, and in what context, leads straight into the deferred expression debate that's been ongoing for years, in many different contexts. Maybe PEP 671 is just another example of something that wouldn't be an issue if we had deferred expressions. Sure, "now is better than never" might apply here - endlessly sticking with the status quo because we can't work out the details of the grand solution to everything is a classic "perfect is the enemy of the good" situation. But equally, maybe what we have already is *good enough*, and there's no real rush to solve just this one piece of the puzzle. It's tempting to solve the bit that we can see clearly right now, but that shouldn't blind us to the possibility of a more flexible solution that addresses the issue as part of a more general problem. > > In contrast, a general deferred object would, to me, be really > > confusing about what scope it would get evaluated in -- I can't even > > imagine how I would do that -- how the heck am I supposed to know what > > names will be available in some function scope I pass this thing > > into??? Also, this would only allow a single expression, not an > > arbitrary amount of code -- if we're going to have some sort of > > "deferred object" -- folks will very soon want more than that, and > > want full deferred function evaluation. So that really is a whole > > other kettle of fish, and should be considered entirely separately. > > And again, this is where we disagree. I think it should be considered in > the full context of places it might be useful. I (and I think others) > are concerned that we'd be painting ourselves into a corner with this > proposal. 
For example, if the delayed evaluation were available as text > via inspect.Signature, we'd be stuck with supporting that forever, even > if we later added delayed evaluation objects to the language. This is for me the main issue where I think "constraining the design of the broader feature later" really hurts. There's absolutely no doubt in my mind that *if* we already had deferred expressions, then we'd expose late-bound defaults as delayed expressions in the function's signature. If we implement them as strings just because we don't yet have delayed expressions, then when we do look at designing delayed expressions, we're stuck with the backward compatibility problem of how we fit them into function signatures without breaking all the code that's been written to expect a string. Yes, that's all 15 obscure utilities with a total user base of about 5 people ;-), I completely accept that this is a very niche issue - but backward compatibility often ends up concerned with such details, and they can be the hardest to handle.
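In today's Python, the "if this argument wasn't supplied" check described above is usually written with a module-level sentinel. A minimal sketch of that status-quo workaround (the `_MISSING` name and `append_to` function are invented for illustration):

```python
# Opaque sentinel: callers are never meant to pass this explicitly.
_MISSING = object()

def append_to(item, target=_MISSING):
    # "If this argument wasn't supplied, evaluate the late-bound
    # expression and use that as a default."
    if target is _MISSING:
        target = []  # the "late-bound" default, evaluated on each call
    target.append(item)
    return target

assert append_to(1) == [1]
assert append_to(2) == [2]          # a fresh list per call, not shared
assert append_to(3, [0]) == [0, 3]  # explicit argument wins
```

PEP 671 would move the `if target is _MISSING` logic out of the body and into the signature.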
[Python-ideas] Re: Adding pep8-casing-compliant aliases for the entire stdlib
Let's just say up front that I'm a strong -1 on this proposal, as I think it is needless churn, and while it may be *technically* backward compatible, in reality it will be immensely disruptive. There's one particular point I want to pick up on, though. On Thu, 11 Nov 2021 at 15:25, Matt del Valle wrote: > If we went to a disconnected alternate universe where python had never been > invented and introduced it today, in 2021, would we introduce it with a > uniform naming convention, or the historical backwards-supporting mishmash of > casing we've ended up with? Since I think the answer is pretty clear, I'm > strongly in favor of making this minimally-invasive change that at least > works towards uniform casing, even if that dizzying utopia is far beyond the > horizon. Our grandchildren might thank us :p Certainly, there's a lot of inconsistency that can only be justified as historical baggage. There have been a number of proposals to "tidy up" such cases, but even when focused on specific instances, the general conclusion has been that this would be too disruptive. But let's put that aside and address the broader question here. Yes, if we'd been designing everything now, we quite probably would have adopted a more consistent approach. But I think it's foolish to assume that whatever convention we defined for names would be strictly based on the type of the value. After all, even if you adopt a no-compromises stance on PEP 8 (a stance that the PEP itself rejects, by the way!) the first part of the "Naming Conventions" section says """ Names that are visible to the user as public parts of the API should follow conventions that reflect usage rather than implementation. """ To examine some specific cases, lists are a type, but list(...) is a function for constructing lists. The function-style usage is far more common than the use of list as a type name (possibly depending on how much of a static typing advocate you are...). 
So "list" should be lower case by that logic, and therefore according to PEP 8. And str() is a function for getting the string representation of an object as well as being a type - so should it be "str" or "Str"? That's at best a judgement call (usage is probably more evenly divided in this case), but PEP 8 supports both choices. Or to put it another way, "uniform" casing is a myth, if you read PEP 8 properly. What you actually seem to be arguing for is a renaming based on a hypothetical version of PEP 8 that is far stricter than the actual document, and which doesn't take into account the messiness of real-world APIs and applications. That's a very common, and in my opinion misguided, stance. For me, one of the best things about PEP 8 is its repeated assertions that the "rules" it defines are only guidelines and that they should not be imposed blindly, but programmer judgement should always take precedence. I find it very frustrating that people making a fuss about "following PEP 8" seem completely blind to the whole of the section "A Foolish Consistency is the Hobgoblin of Little Minds" (one part of which explicitly says that the PEP does not justify adding or changing code just to follow guidelines that were created after the code was written), and the *many* places where the PEP offers two (or sometimes even more) alternatives, without preferring one over the other. As I said, I'm -1 on this proposal. Paul PS If you are really committed to an alternative naming convention, you can always write a module that adds all of the aliases you might want. That way, you can follow your own preferences without imposing them on everyone else... 
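The PS above can be sketched as a tiny personal-aliases module: a few lines that impose whatever convention you prefer, without touching the stdlib. All alias names here are invented for illustration:

```python
# aliases.py - a hedged sketch of a personal naming-convention shim.
from collections import OrderedDict as ordered_dict  # snake_case alias

List = list  # CapWords aliases for built-in types, if that's your taste
Str = str

# The aliases behave identically to the originals.
assert List("ab") == ["a", "b"]
assert Str(3) == "3"
assert ordered_dict(a=1) == {"a": 1}
```

Anyone preferring the other convention can do the same in reverse, which is rather the point: neither spelling needs to be blessed by the stdlib.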
[Python-ideas] Re: Adding pep8-casing-compliant aliases for the entire stdlib
On Thu, 11 Nov 2021 at 17:13, Carl Meyer wrote: > > I'm also -1 on churning the stdlib in search of a global consistency > that PEP 8 itself disavows, but this particular argument against it > doesn't make sense: > > On Thu, Nov 11, 2021 at 9:14 AM Paul Moore wrote: > > To examine some specific cases, lists are a type, but list(...) is a > > function for constructing lists. The function-style usage is far more > > common than the use of list as a type name (possibly depending on how > > much of a static typing advocate you are...). So "list" should be > > lower case by that logic, and therefore according to PEP 8. And str() > > is a function for getting the string representation of an object as > > well as being a type - so should it be "str" or "Str"? That's at best > > a judgement call (usage is probably more evenly divided in this case), > > but PEP 8 supports both choices. Or to put it another way, "uniform" > > casing is a myth, if you read PEP 8 properly. > > Any type can be called to construct an instance of that type. If I > define a class Foo, I create an instance of Foo by calling `Foo(...)`. > `list` and `str` are no different; I can create an instance of the > type by calling it. This doesn't mean they are "both a type and a > function" in some unusual way, it just means that we always call types > in order to construct instances of them. I understand that. However, PEP 8 states "Names that are visible to the user as public parts of the API should follow conventions that reflect *usage* rather than *implementation*." (My emphasis) I quoted this, but you cut that part of my post. My point here is that how you interpret "usage" is far from clear - I'm sure that a lot of people would teach str(...) as a function that creates a string representation of an object, deferring the detail that it's actually a type, and you can call a type to create objects of that type until later. 
So would a newcomer necessarily know (or even need to know) that str is a type, not a function? There's also the case of changing implementation between a class and a factory function - surely that should not require a compatibility-breaking name change? The key is that it's fairly easy to argue "reasonable doubt" here - PEP 8 is intended to be applied with a certain level of judgement, not as a set of absolute rules. But yes, I didn't make my point particularly clearly, I apologise. Paul
[Python-ideas] Re: Adding pep8-casing-compliant aliases for the entire stdlib
On Thu, 11 Nov 2021 at 22:22, Brendan Barnwell wrote: > > On 2021-11-11 09:33, Paul Moore wrote: > > I understand that. However, PEP 8 states "Names that are visible to > > the user as public parts of the API should > > follow conventions that reflect *usage* rather than *implementation*." > > (My emphasis) I quoted this, but you cut that part of my post. > > I'm not the one who previously replied to your earlier post, but I > still don't really understand what the relevance of this is. EVERY > class can be used like a function (barring perhaps a few oddities like > None). So the fact that you see a name used like `str(this)` or > `list(that)` or `some_name(a, b, c)` doesn't tell you anything about > "usage". That syntax is completely consistent with usage as a class and > as a function. Chris Angelico made the point far better than I've managed to, and in any case the thread is basically finished at this point, so I won't say anything more other than to quote Chris and say "this is what I was trying to say": > The distinction between "this is a type" and "this is a function" is > often relatively insignificant. The crux of your proposal is that it > should be more significant, and that the fundamental APIs of various > core Python callables should reflect this distinction. This is a lot > of churn and only a philosophical advantage, not a practical one. Paul
[Python-ideas] Re: Adding pep8-casing-compliant aliases for the entire stdlib
On Fri, 12 Nov 2021 at 03:46, Steven D'Aprano wrote: > > On Thu, Nov 11, 2021 at 10:06:45PM -0500, Ricky Teachey wrote: > > > Is there a standard idiom-- perhaps using a type-hint-- to signal to the > > IDE/linter that my user-defined class is intended to be used as a > > function/factory, and not as a type (even though it is in fact a type)? > > Not really. I don't think there is even a standard idiom for the human > reader to decide whether something is used as a "function" or a "class". > It is subjective, based on usage and convention. Precisely. And that's why automated tools like flake8 can't reliably enforce rules like this, because they can't determine intent. On Fri, 12 Nov 2021 at 00:30, Brendan Barnwell wrote: > > I think this is a big part of the problem. There are various tools > out > there (flake8 being one of them) that purport to "improve" or "fix" your > code (or warn you to do it yourself), and various companies and > organizations that adopt policies tied to those tools (e.g., "your pull > request must pass this PEP 8 linter to be accepted"). It's a big problem. Very much this. Tools like flake8 aren't bad in themselves (they catch when I make dumb typos in my code, and I'm grateful for that) but treating them as if they had the final say on what is acceptable code is very bad (there's a reason linters support "# fmt: off"). Unfortunately, this usually (in my experience) comes about through a "slippery slope" of people saying that mandating a linter will stop endless debates over style preferences, as we'll just be able to say "did the linter pass?" and move on. 
This of course ignores the fact that (again, in my experience) far *more* time is wasted complaining about linter rules than was ever lost over arguments about style :-( Paul PS Thanks to Ethan for clarifying my posting much better than I managed to :-)
[Python-ideas] Re: Adding pep-8-casing-compliant aliases for unittest and logging
On Sat, 13 Nov 2021 at 08:47, Stephen J. Turnbull wrote: > > A final comment: I wonder if you're being too conservative. It's true > that generally we prefer small targeted modules in the stdlib, but > everybody needs tests (even if they don't know it ;-), and there might > be quite large audiences for the great majority of PyTest. So maybe > we could get all of PyTest in (perhaps in a slimmed-down form without > some of the redundant/deprecated functionality). > > As I mentioned, I'm not a PyTest user (yet! :-), so I wouldn't be a > good proponent for this. But I find the proposal exciting. I love pytest, and I'm a happy user of it. But I've never wanted it in the stdlib. Because it's a developer tool, basically. As a developer, I'm perfectly fine having my tools installed in a per-project virtualenv, or set up as standalone commands via pipx, or whatever works best for my project. I don't need the full "developer experience" in the stdlib, because pip install works fine. And yes, I know that for some developers, access to PyPI isn't as easy as that (I've been in that position myself, many times!) but there are workarounds and hacks, which are fine if it's setting up stuff once on your dev machine. And having pytest able to change and innovate is important - if it became part of the stdlib, it would (of necessity) stagnate, and the role of innovator in testing tools would pass to someone else. **However**, the situation is completely different for packages that are used in applications that get shipped out to end users. And "applications" is a very broad term, that covers full standalone executables, web services, one-file scripts, Jupyter notebooks, etc. Python's story on building and deploying such applications is still pretty bad (and I say this as a packaging specialist!) 
We've focused on libraries at the cost of the final result, and as a consequence it's extremely easy to use packages from PyPI when developing your new application, but when you get round to deploying it, things get hard and you start wishing that the stuff you used was in the stdlib, because that would make things so much easier. So there are regular discussions about adding functionality to the stdlib, but it's *that* sort of package (requests, toml, bits of numpy, data structure libraries, etc), and not tools like pytest (or tox, hypothesis, nox, black, or flake8, ...) So basically, I don't think it's likely that a proposal to add pytest to the stdlib would get very far (it's fine as a PyPI package) but that's specific to pytest, and as a general principle, "it's on PyPI so it doesn't need to be in the stdlib" *doesn't* apply, and won't until we have a better deployment story for Python tools. Paul
[Python-ideas] Re: Enhancing iterator objects with map, filter, reduce methods
On Fri, 26 Nov 2021 at 14:39, Raimi bin Karim wrote: > So this is more of a heartfelt note rather than an objective one — I would > love > my fellow Python programmers to be exposed to this mental model, and that > could only be done by implementing it in the standard library. I'm somewhat ambivalent about this pattern. Sometimes I find it readable and natural, other times it doesn't fit my intuition for the problem domain. I do agree that helping people gain familiarity with different approaches and ways of expressing a computation, is a good thing. I get your point that putting this functionality in a 3rd party library might not "expose" it as much as you want. In fact, I'd be pretty certain that something like this probably already exists on PyPI, but I wouldn't know how to find it. However, just because that doesn't provide the exposure you're suggesting, doesn't mean that it "could only be done by implementing it in the standard library". This isn't a technical problem, it's much more of a teaching and evangelisation issue. Building a library and promoting it via blogs, social media, demonstrations, etc, is a much better way of getting people interested. Showcasing the approach in an application that lots of people use is another (Pandas, for example, shows off the "fluent" style of chained method calls, which some people love and some hate, that's very similar to your proposal here). It's a lot of work, though, and not the type of work that a programmer is necessarily good at. Many great libraries are relatively obscure, because the author doesn't have the skills/interest/luck to promote them. What you *do* get from inclusion in the stdlib is a certain amount of "free publicity" - the "What's new" notices, people discussing new features, the general sense of "official sanction" that comes from stdlib inclusion. 
Those are all useful in promoting a new style - but you don't get them just by asking, the feature needs to qualify for the stdlib *first*, and the promotion is more a "free benefit" after the fact. And in any case, as others have mentioned, even being in the stdlib isn't guaranteed visibility - there's lots of stuff in the stdlib that gets overlooked and/or ignored. Sorry - I don't have a good answer for you here. But I doubt you'll find anyone who would be willing to help you champion this for the stdlib. Paul
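For reference, the kind of chained-method ("fluent") iterator wrapper being discussed can be sketched in a few lines. The `Iter` class name and its API are invented for illustration; similar helpers do exist on PyPI:

```python
from functools import reduce

class Iter:
    """A lazy iterator wrapper exposing map/filter/reduce as methods."""
    def __init__(self, iterable):
        self._it = iter(iterable)
    def map(self, fn):
        return Iter(map(fn, self._it))
    def filter(self, pred):
        return Iter(filter(pred, self._it))
    def reduce(self, fn, *initial):
        # Terminal operation: consumes the underlying iterator.
        return reduce(fn, self._it, *initial)
    def __iter__(self):
        return self._it

# Sum of squares of the even numbers below 10: 0 + 4 + 16 + 36 + 64.
result = (Iter(range(10))
          .filter(lambda n: n % 2 == 0)
          .map(lambda n: n * n)
          .reduce(lambda a, b: a + b))
assert result == 120
```

This reads left-to-right in pipeline order, where the equivalent `reduce(add, map(sq, filter(even, ...)))` reads inside-out, which is the mental-model difference the proposal is about.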
[Python-ideas] Re: PEP 671 (late-bound arg defaults), next round of discussion!
On Wed, 1 Dec 2021 at 06:19, Chris Angelico wrote: > > I've just updated PEP 671 https://www.python.org/dev/peps/pep-0671/ > with some additional information about the reference implementation, > and some clarifications elsewhere. > > *PEP 671: Syntax for late-bound function argument defaults* > > Questions, for you all: > > 1) If this feature existed in Python 3.11 exactly as described, would > you use it? Probably not. Mainly because I don't have any real use for it rather than because I have any inherent problem with it (but see below, the more I thought about it the more uncomfortable with it I became). > 2) Independently: Is the syntactic distinction between "=" and "=>" a > cognitive burden? > > (It's absolutely valid to say "yes" and "yes", and feel free to say > which of those pulls is the stronger one.) Not especially. The idea of having both late-bound and early-bound parameters is probably more of a cognitive burden than the syntax. > 3) If "yes" to question 1, would you use it for any/all of (a) mutable > defaults, (b) referencing things that might have changed, (c) > referencing other arguments, (d) something else? N/A, except to say that when you enumerate the use cases like this, none of them even tempt me to use this feature. I think that the only thing I might use it for is to make it easier to annotate defaults (as f(a: list[int] => []) rather than as f(a: list[int] | None = None). So I'll revise my answer to (1) and say that I *might* use this, but only in a way it wasn't intended to be used in, and mostly because I hate how verbose it is to express optional arguments in type annotations. (And the fact that the annotation exposes the sentinel value, even when you want it to be opaque). I hope I don't succumb and do that, though ;-) > 4) If "no" to question 1, is there some other spelling or other small > change that WOULD mean you would use it? (Some examples in the PEP.) Not really. 
It addresses a wart in the language, but on consideration, it feels like the cure is no better than the disease. > 5) Do you know how to compile CPython from source, and would you be > willing to try this out? Please? :) Sorry, I really don't have time to, in the foreseeable future. If I did have time, one thing I would experiment with is how this interacts with typing and tools like pyright and mypy (yes, I know type checkers would need updating for the new syntax, so that would mostly be a thought experiment) - as I say, I'd expect to annotate a function with an optional list argument defaulting to an empty list as f(a: list[int] => []), which means that __annotations__ needs to distinguish between this case and f(a: list[int]) with no default.

>>> def f(a: list[int]): pass
...
>>> f.__annotations__
{'a': list[int]}
>>> def f(a: list[int] => []): pass
...
>>> f.__annotations__
???

> I'd love to hear, also, from anyone's friends/family who know a bit of > Python but haven't been involved in this discussion. If late-bound > defaults "just make sense" to people, that would be highly > informative. Sorry, I don't have any feedback like that. What I can say, though, is I'd find it quite hard to express the question, in the sense that I'd struggle to explain the difference between early and late bound parameters to a non-expert, much less explain why we need both. I'd probably just say "it's short for a default of None and a check" which doesn't really capture the point... > Any and all comments welcomed. I mean, this is python-ideas after > all... bikeshedding is what we do best! I hope this was useful feedback. 
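For comparison, the status-quo spelling being called verbose above looks like this today: the annotation exposes None, and the "late-bound" default is written by hand in the body.

```python
from typing import Optional

def f(a: Optional[list[int]] = None) -> int:
    if a is None:
        a = []  # the "late-bound" default, spelled out manually
    return len(a)

assert f() == 0
assert f([1, 2, 3]) == 3
# The annotation necessarily mentions None, even though None is only
# a stand-in for "argument omitted":
assert f.__annotations__["a"] == Optional[list[int]]
```

Under PEP 671 the annotation could stay `list[int]` with the default spelled `=> []`, which is the attraction being described.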
Paul
[Python-ideas] Re: PEP 671 (late-bound arg defaults), next round of discussion!
On Wed, 1 Dec 2021 at 15:24, David Mertz, Ph.D. wrote: > > On Wed, Dec 1, 2021 at 9:24 AM Paul Moore wrote: >> >> I think that the only >> thing I might use it for is to make it easier to annotate defaults (as >> f(a: list[int] => []) rather than as f(a: list[int] | None = None). > > > Why not `f(a: Optional[list[int]] = None)`? > > I'm not counting characters, but that form seems to express the intention > better than either of the others IMHO. If None were a valid argument, and I was using an opaque sentinel, Optional doesn't work, and exposing the type of the sentinel is not what I intend (as it's invalid to explicitly supply the sentinel value). Also, Optional[list[int]] doesn't express the intent accurately - the intended use is that people must supply a list[int] or not supply the argument *at all*. Optional allows them to supply None as well. As I say, I don't consider this an intended use case for the feature, because what I'm actually discussing here is optional arguments and sentinels, which is a completely different feature. All I'm saying is that the only case when I can imagine using this feature is for when I want a genuinely opaque way of behaving differently if the caller omitted an argument (and using None or a sentinel has been good enough all these years, so it's not exactly a pressing need). Let's just go back to the basic point, which is that I can't think of a realistic case where I'd want to actually use the new feature. Paul
[Python-ideas] Re: PEP 671 (late-bound arg defaults), next round of discussion!
On Wed, 1 Dec 2021 at 22:27, Greg Ewing wrote: > > On 2/12/21 4:40 am, Paul Moore wrote: > > the > > intended use is that people must supply a list[int] or not supply the > > argument *at all*. > > I don't think this is a style of API that we should be encouraging > people to create, because it results in things that are very > awkward to wrap. Hmm, interesting point, I agree with you. It's particularly telling that I got sucked into designing that sort of API, even though I know it's got this problem. I guess that counts as an argument against the late bound defaults proposal - or maybe even two: 1. It's hard (if not impossible) to wrap functions that use late-bound defaults. 2. The feature encourages people to write such unwrappable functions when an alternative formulation that is wrappable is just as good. (That may actually only be one point - obviously a feature encourages people to use it, and any feature can be over-used. But the point about wrappability stands). Paul
[Python-ideas] Re: PEP 671 (late-bound arg defaults), next round of discussion!
On Thu, 2 Dec 2021 at 08:29, Paul Moore wrote: > > On Wed, 1 Dec 2021 at 22:27, Greg Ewing wrote: > > > > On 2/12/21 4:40 am, Paul Moore wrote: > > > the > > > intended use is that people must supply a list[int] or not supply the > > > argument *at all*. > > > > I don't think this is a style of API that we should be encouraging > > people to create, because it results in things that are very > > awkward to wrap. > > Hmm, interesting point, I agree with you. It's particularly telling > that I got sucked into designing that sort of API, even though I know > it's got this problem. I guess that counts as an argument against the > late bound defaults proposal - or maybe even two: > > 1. It's hard (if not impossible) to wrap functions that use late-bound > defaults. > 2. The feature encourages people to write such unwrappable functions > when an alternative formulation that is wrappable is just as good. > > (That may actually only be one point - obviously a feature encourages > people to use it, and any feature can be over-used. But the point > about wrappability stands). Actually, Chris - does functools.wraps work properly in your implementation when wrapping functions with late-bound defaults?

>>> from functools import wraps
>>> def dec(f):
...     @wraps(f)
...     def inner(*args, **kw):
...         print("Calling")
...         return f(*args, **kw)
...     return inner
...
>>> @dec
... def g(a => []):
...     return len(a)

Paul
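For anyone without the reference implementation to hand, the shape of the question can be exercised in released Python by simulating the late-bound default with a sentinel (`a => []` itself is not valid syntax anywhere yet; the `_MISSING` name and the `calls` list are invented for illustration):

```python
from functools import wraps

_MISSING = object()  # sentinel standing in for "no argument supplied"
calls = []           # records each pass through the wrapper

def dec(f):
    @wraps(f)
    def inner(*args, **kw):
        calls.append("Calling")
        return f(*args, **kw)
    return inner

@dec
def g(a=_MISSING):
    # Hand-written equivalent of the proposed `a => []`.
    if a is _MISSING:
        a = []
    return len(a)

assert g() == 0
assert g([1, 2]) == 2
assert g.__name__ == "g"  # @wraps preserved the wrapped function's metadata
assert calls == ["Calling", "Calling"]
```

Because the sentinel check lives in the body, `*args, **kw` forwarding works fine here; the open question in the message above is whether the same holds when the check moves into the signature.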
[Python-ideas] Re: PEP 671 (late-bound arg defaults), next round of discussion!
On Mon, 6 Dec 2021 at 09:45, Stephen J. Turnbull wrote:
>
> Rob Cliffe via Python-ideas writes:
>
> > Nobody has attempted (or at least completed) a PEP, never mind an
> > implementation, of a "generalized deferred object/type", in the last N
> > years or decades.
>
> Haskell anything. Ruby blocks. Closer to home, properties and closures.
>
> So I don't think the object part is that hard, it's the syntax and
> semantics that's devilish.

At one level, it's trivial. A deferred expression is `lambda:
expression`. Evaluating it is `deferred_expr()`. What's not at all
obvious is the requirements beyond that - what do people *actually*
want that isn't covered by this. The most obvious answer is that they
don't want to have to check for a deferred expression and explicitly
evaluate it, which triggers the question, when do they want the
language to evaluate it for them? "Every time" doesn't work, because
then you can't treat deferred expressions as first class objects -
they keep disappearing on you ;-)

So IMO it's the *requirements* that are hard. Maybe that's just me
using different words for the same thing you were saying, but to me,
the distinction is important. People throw around the term "deferred
object", but everyone seems to think that everyone else understands
what they mean by that term, and yet no-one will give a precise
definition. We can't have a PEP or an implementation until we know
what we're proposing/implementing.

I don't intend to champion a "deferred objects" proposal, but I do
think that they (whatever they are) would be a better (more general)
solution than late-bound arguments. So here's a possible minimal
definition of what a "deferred object" is. It takes the view that
explicitly requesting the evaluation of a deferred is OK, but people
don't want to have to check it's a deferred before evaluating.

1. `defer EXPR` creates a "deferred object", that is semantically
   identical to `lambda: EXPR`, except that it isn't a callable;
   instead it's a new type of object.
2. `undefer EXPR` is exactly the same as `EXPR`, except that if `EXPR`
   evaluates to a deferred object, it gets called (in the sense of it
   being equivalent to a lambda which can be called).

Here's a prototype implementation, and a demonstration of how it would
be used to implement late bound arguments. Please note, I understand
that the syntax here is horrible. That's exactly the point, this needs
language support to be non-horrible. That's what a "deferred
expression" proposal would provide.

    # Explicitly creating Deferred objects is horrible, this is the
    # bit that *really* needs language support
    class Deferred:
        def __init__(self, callable):
            self.callable = callable

    # This could easily be a builtin function (or an operator if
    # people prefer syntax) once we have deferred objects.
    def undefer(expr):
        if isinstance(expr, Deferred):
            return expr.callable()
        return expr

    x = 12

    # def f(a=defer x):
    def f(a=Deferred(lambda: x)):
        a = undefer(a)
        return a

    assert f(9) == 9
    assert f() == 12
    x = 8
    assert f() == 8
    assert f(9) == 9

If anyone wants to take this and make a *proper* deferred object
proposal out of it, then please do so. If not, then at a minimum I
think this offers something vaguely concrete to discuss regarding the
"why deferred objects are a more general solution to the late bound
argument" question.

Paul
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/QDGHZ3CFRNLAMW4JDWCZ3NELADGWR4QN/
Code of Conduct: http://python.org/psf/codeofconduct/
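To show that the prototype above covers more than a simple global lookup, here is the same Deferred/undefer pair (restated so this sketch is self-contained) applied to the classic mutable-default problem: each defaulted call builds a fresh list, which early-bound `a=[]` famously cannot do. This extension is my illustration, not part of the original message.

```python
class Deferred:
    def __init__(self, callable):
        self.callable = callable

def undefer(expr):
    if isinstance(expr, Deferred):
        return expr.callable()
    return expr

# def f(a=defer []):  -- the proposed spelling
def f(a=Deferred(lambda: [])):
    a = undefer(a)  # evaluates the default freshly on each call
    a.append(1)
    return a

# Each defaulted call gets its own new list, so state never
# leaks between calls the way it does with def f(a=[]).
print(f())         # [1]
print(f())         # [1], not [1, 1]
print(f() is f())  # False
```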
[Python-ideas] Re: PEP 671 (late-bound arg defaults), next round of discussion!
On Mon, 6 Dec 2021 at 11:21, Chris Angelico wrote:
> The reason I consider this to be an independent proposal, and NOT a
> mechanism for late-bound defaults, is this problem:
>
> def f(lst, n=>len(lst)):
>     lst.append(1)
>     print(n)
>
> f([10, 20, 30])
>
> A late-bound default should print 3. A deferred expression should
> print 4. They're not a more general solution to the same question;
> they're a solution to a different question that has some overlap in
> what it can achieve. A None-coalescing operator would also have some
> overlap with each of the above, but it is, again, not the same thing.

As I said, no-one is being clear about what they mean by "deferred
expressions". My strawman had an explicit syntax for "undeferring" for
precisely this reason, it lets the programmer decide whether to
undefer before or after the append.

Most of the objections to deferred expressions that I've seen seem to
involve this confusion - objectors assume that evaluation happens
"magically" and then object to the fact that the place they want the
evaluation to happen doesn't match with the place they assume the
magic would occur. I see this as more of an argument that implicit
evaluation is a non-starter, and therefore deferred expressions should
be explicitly evaluated.

Paul
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/SKZKVJH4HNYQUIRZSIQS4E72CDIGN4ED/
Code of Conduct: http://python.org/psf/codeofconduct/
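To make the "3 versus 4" point concrete, here is Chris's example worked through with the Deferred/undefer prototype from earlier in the thread. This is my construction: the deferred is built at the top of the function body as a stand-in for the proposed `n=>len(lst)` default, since a plain lambda default can't capture the parameter without language support. Undeferring before the append reproduces late-bound-default semantics; undeferring after it gives the deferred-expression answer.

```python
class Deferred:
    def __init__(self, callable):
        self.callable = callable

def undefer(expr):
    if isinstance(expr, Deferred):
        return expr.callable()
    return expr

def f_late(lst):
    n = Deferred(lambda: len(lst))  # stand-in for n=>len(lst)
    n = undefer(n)     # evaluate *before* the body mutates lst
    lst.append(1)
    return n

def f_deferred(lst):
    n = Deferred(lambda: len(lst))  # stand-in for n=defer len(lst)
    lst.append(1)
    return undefer(n)  # evaluate *after* the append, at point of use

print(f_late([10, 20, 30]))      # 3 - late-bound default semantics
print(f_deferred([10, 20, 30]))  # 4 - deferred-expression semantics
```

With explicit undeferring, the difference between the two behaviours is just the placement of one call, chosen by the programmer.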
[Python-ideas] Re: PEP 671 (late-bound arg defaults), next round of discussion!
On Wed, 8 Dec 2021 at 18:09, Chris Angelico wrote:
>
> On Thu, Dec 9, 2021 at 4:55 AM Stephen J. Turnbull wrote:
> > But the "good idea" of general deferreds is only marginally relevant
> > to our -1s. It's those -1s that constitute the main issue for Chris,
> > since they're a noisy signal that the SC might think as we do.
>
> Please explain to me *exactly* what your arguments against the current
> proposal are. At the moment, I am extremely confused as to what people
> actually object to, and there's endless mischaracterization and
> accusation happening.
>
> Can we actually figure out what people are really saying, and what the
> problems with this proposal are?
>
> NOT that there might potentially be some other proposal, but what the
> problems with this one are. Argue THIS proposal, not hypothetical
> other proposals.

Note that I'm not vehemently -1 on this PEP, but I am against it. So
I'm not necessarily one of the people whose response you need and are
asking for here, but my views are part of the opposition to the PEP.
So here's my problems with this proposal:

1. The problem that the PEP solves simply isn't common enough, or
   difficult enough to work around, to justify new syntax, plus a
   second way of defining default values.
2. There's no syntax that has gained consensus, and the objections
   seem to indicate that there are some relatively fundamental
   differences of opinion involved.
3. There's no precedent for languages having *both* types of binding
   behaviour. Sure, late binding is more common, but everyone seems to
   pick one form and stick with it.
4. It's a solution to one problem in the general "deferred expression"
   space. If it gets implemented, and deferred expressions are added
   later, we'll end up with two ways of achieving one result, with one
   way being strictly better than the other. (Note, for clarity,
   that's *not* saying that we should wait for something that might
   never happen, it's saying that IMO the use case here isn't
   important enough to warrant rushing a partial solution).

To be 100% explicit, none of the above are showstopper objections
(some, like the choice of syntax, are pretty minor). I'm not arguing
that they are. Rather, my problem with the PEP is that we have a
number of individually small issues like this, which aren't balanced
out by a sufficiently compelling benefit. The PEP isn't *bad*, it's
simply not good *enough* (IMO). And it's not obvious how to fix the
issue, as there's no clear way to increase the benefit side of the
equation. That sucks, as it's a lot of work to write a PEP, and "meh,
I'm not convinced" is the worst possible response. But that's how this
feels to me.

The reason deferred objects keep coming up is because they *do* have a
much more compelling benefit - they help in a much broader range of
cases. It's fine to say they are a different proposal, and that "but
we might get deferred expressions" is a flawed objection (which it is,
if that's all the objection consists of). But rejecting that argument
doesn't do anything to improve the weak benefits case for late-bound
defaults, or to fix the various minor problems that weigh it down.

All IMO, of course...

Paul
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/OAGZ6MDGIAQ6HAHHMXL2LLLZ4FNBUFT3/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-ideas] Re: PEP 671 (late-bound arg defaults), next round of discussion!
On Wed, 8 Dec 2021 at 19:59, Rob Cliffe via Python-ideas wrote:
>
> On 08/12/2021 19:27, Paul Moore wrote:
> >
> > The reason deferred objects keep coming up is because they *do* have a
> > much more compelling benefit - they help in a much broader range of
> > cases.
> That may be true. I don't know.
> Can anyone provide some realistic use cases?

Of what? Deferred expressions? I ask because the rest of your post
seems to only be thinking in terms of argument defaults, when the
point I'm trying to make is that deferred expressions have uses
outside of that situation.

Honestly, I don't have particular examples off the top of my head.
It's not me that's arguing for deferred objects. I probably should
have worded that sentence as "The reason deferred objects keep coming
up is because the people interested in them are claiming that they
*do* have a much more compelling benefit - they help in a much broader
range of cases."

In the context of the current discussion about late-bound defaults, I
already said that deferred expressions could reasonably be declared
not relevant, and it wouldn't affect my core complaint, which is that
the benefit doesn't justify the costs. But certainly if someone does
propose introducing deferred expressions, I'd expect them to explain
the benefits, and I would expect that (a) one benefit would be that
they handle all of the cases that late-bound defaults cover, and (b)
there are further benefits in areas outside default values. That's
what I mean when I say that deferred expressions are a superset of the
functionality of late-bound defaults.

> I've read the whole thread and I can only recall at most one, viz.
> the default value is expensive to compute and may not be needed. But
> that is a good time *not* to use a late-bound default! (The sentinel
> idiom would be better.) Anything can be used inappropriately, that
> doesn't make it bad per se.
> I don't wish to disparage anyone's motives. I am sure all the posts
> were made sincerely and honestly. But without examples (of how
> deferred objects would be useful), it *feels to me* (no doubt
> wrongly) as if people are using a fig leaf to fight against this PEP.

I agree. But you're only responding to the last paragraph of my post.
Everything else I said was explaining my reservations over the
late-bound defaults proposal, and I *explicitly* said that those
reservations stand independent of any deferred expression proposal.

Honestly, it may feel to you that people are using weak arguments to
fight the PEP, but to me it feels like supporters of the PEP are
ignoring all of the *other* objections and trying to make the argument
entirely about deferred expressions. I guess we're both mistaken in
our feelings ;-)

Paul
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/H5YHKYVWHNGCZ3PVF74YNCYVFMDU26YD/
Code of Conduct: http://python.org/psf/codeofconduct/
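For reference, the sentinel idiom Rob mentions - a standard pattern, not code from this thread - avoids paying for an expensive default unless it is actually needed:

```python
_MISSING = object()  # unique sentinel that no caller can pass by accident

def expensive_fallback():
    # Placeholder for a costly computation we only want to run
    # when the caller really omitted the argument.
    return 0

def lookup(key, default=_MISSING):
    table = {"a": 1}
    if key in table:
        return table[key]
    if default is _MISSING:
        default = expensive_fallback()  # computed only on demand
    return default

print(lookup("a"))              # 1  - fallback never computed
print(lookup("b"))              # 0  - fallback computed lazily
print(lookup("b", default=99))  # 99 - explicit argument wins
```

Unlike `default=None`, the private sentinel also lets callers pass `None` as a legitimate value.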