[Python-Dev] Small lament...
Just wanted to throw this out there... I lament the loss of waking up on April 1st to see a creative April Fool's Day joke on one or both of these lists, often from our FLUFL... Maybe such frivolity still happens, just not in the Python ecosystem? I know you can still import "this" or "antigravity", but those are now old (both introduced before 2010). When was the last time a clever easter egg was introduced or an April Fool's Day joke played? ¯\_(ツ)_/¯ Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/Q62W2Q6R6XMX57WK2CUGEENHMT3C3REF/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Switching to Discourse
> I have a question about how you handle multiple communities. I'm
> subscribed to ~30 python-dev style mailing lists across different
> projects. There is no way I can open up 30 Discourse sites each day.
> Mail brings everything into one place for me, and I have things set up
> so that new mail from python-dev style lists is separated from my
> general inbox.

+1. I have other interests outside Python. Email filters allow me to categorize email automatically, saving messages in folders which wait for me to get around to that category.

Considering the explosion of outlets for Python discussion, I will relate a recent unfortunate incident I don't think would have happened a couple years ago. I won't name names, but I won't go out of my way to keep the parties from being discovered. Someone recently posted a note to the [Python Help] forum on discuss.python.org stating that Python had an obvious memory leak. I tried to help, explaining what I thought he needed to do to demonstrate a leak. He posted a small C program which initialized, then immediately finalized the Python runtime, and basically said, "this is a memory leak." I pointed out that you need to loop over the same operation to determine if you really have a leak. Back and forth for a bit. Finally, I said, "if you believe this to be a memory leak, then you should open an issue on GitHub." My intent was to get his argument in front of the people who really are the experts on Python's memory management. His response: "Oh, I already have, here and here and here." What a nice way to waste my time... I imagine he was trolling, but maybe he was just dissatisfied with the responses he got on GH and thought he could get someone to go to bat for him.
My thinking is this would likely not have happened in the olden days when almost all Python development/programming traffic was housed in python-list and python-dev. Granted, the Python community was smaller, but, perhaps just as importantly, a couple of active core developers always seemed to keep an eye on python-list. It seems likely that someone would have seen this thread and nipped it in the bud early: "I responded to your issue a couple months ago and explained why this isn't a memory leak. Now go away."

Today, I don't recall noticing core developers on the [Python Help] forum. (I could well be wrong, but the web interface doesn't make it obvious at a glance who's posted to a thread from the summary page. It's tiny avatars all the way down.)

The flip side of that is that if you want to ask a question about something, it's less obvious where to post that question. The fragmented community means you stand a greater chance of guessing wrong and having it not be seen by anyone who can help.

Just my 2¢

Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/PFA2J7HOHCWJGLCX6N6CIVNEEQSW65AL/ Code of Conduct: http://python.org/psf/codeofconduct/
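The "loop over the same operation" advice can be sketched in pure Python (a hypothetical stand-in; the original poster's actual program embedded the interpreter from C):

```python
import tracemalloc

def operation():
    # Stand-in for the operation under suspicion: allocate and drop
    # a pile of objects.
    data = [str(i) for i in range(10_000)]
    return len(data)

tracemalloc.start()
operation()  # warm-up: one-time allocations (caches, etc.) happen here
baseline, _ = tracemalloc.get_traced_memory()

for _ in range(100):
    operation()

growth = tracemalloc.get_traced_memory()[0] - baseline
tracemalloc.stop()

# A real leak grows roughly linearly with the iteration count;
# a one-time startup allocation does not.
print(f"growth after 100 iterations: {growth} bytes")
```

Initializing and finalizing once, as the poster did, only measures those one-time allocations; it's the per-iteration growth that diagnoses a leak.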
[Python-Dev] Re: Switching to Discourse
> No, Discord is a different thing; it does text and voice communication > channels in real-time. If you're familiar with Slack, it's broadly > similar in purpose. Thanks (and to the others who replied). It seems like they've tried to make it a game, giving me the "opportunity" to buy boosts (or whatever). What's up with that? Do we really need yet another place full of overlapping discussion channels? Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/M6FDI4GBZHR2I77NEN3LFJDVR2GYIQAP/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Switching to Discourse
I have a perhaps stupid question. Is Discord the same as discuss.python.org, just by another name? I find the similarity in names a bit confusing. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/B3WC35H3BMUNDVAOEJSGRAZOQCJ4KHD7/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Switching to Discourse
> > I don't think I *can* do much more than accept it and move on: > *if python-dev was used by everyone*, rather than almost exclusively by > people who prefer e-mail (and presumably use threading mail clients), > we'd get mangled threading anyway from all the non-threaded clients. > Don't forget that used to be the case. ;-) Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/M7XKLFRIZQY3ZDQY6TGBQVSP5HKFKDUH/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Switching to Discourse
> The discuss.python.org experiment has been going on for quite a while,
> and while the platform is not without its issues, we consider it a
> success. The Core Development category is busier than python-dev.
> According to staff, discuss.python.org is much easier to moderate. If
> you're following python-dev but not discuss.python.org, you're missing
> out.

Personally, I think you are focused too narrowly and aren't seeing the forest for the trees. Email protocols were standardized long ago. As a result, people can use any of a large number of applications to read and organize their email. To my knowledge, there is no standardization amongst the various forum tools out there. I'm not suggesting Discourse is necessarily better or worse than other (often not open source) forum tools, but each one implements its own walled garden. I'm referring more broadly than just Python, or even Python development, though even within the Python community it's now difficult to manage/monitor all the various discussion sources (email, Discourse, GitHub, Stack Overflow, ...).

Get off my lawn! ;-)

Skip, kinda glad he's retired now... ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/2V5T44LUN73ONCBI7F5GKGDDXNOVIDZN/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Can I ask a real dumb procedural question about GitHub email?
I subscribe to the python/cpython stuff on GitHub. I find it basically impossible to follow because of the volume. I realize there are probably plenty of extra changes going in based on the recent language summit (and maybe some sprints at PyCon?) as well as the proximity to the beta 1 freeze. Still, does anyone actually try to follow everything that comes out of that firehose?

I've received updates to about 125 new GMail conversations (more total messages than that) in the last 24 hours, and that's after using filters to delete Miss-Islington-type messages altogether. As far as I can quickly ascertain, all the messages are from real people, not bots.

How (if at all) do people deal with this firehose of email? Am I the only person dumb enough to have tried? I used to scan for csv-module-related messages, but don't even try to do that now. My only real reason for continuing to subscribe is that it feeds into a process that updates a dictionary of "common" words used by my XKCD-936-derived password generator.

Thx,

Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/BCBQWW2ZLRU2GCSZJH6Q6YNAXMH3Q6FB/ Code of Conduct: http://python.org/psf/codeofconduct/
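For the curious, an XKCD-936-style generator is a few lines of Python. This sketch is illustrative only: the word list and word count are invented here, not Skip's actual setup (his draws on "common" words harvested from list traffic):

```python
import secrets

# Hypothetical stand-in word list; a real generator would use a much
# larger dictionary of common words.
WORDS = ["correct", "horse", "battery", "staple", "python", "module",
         "socket", "thread", "string", "object", "import", "yield"]

def passphrase(n_words: int = 4) -> str:
    # secrets, not random: passphrase generation wants a CSPRNG.
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())
```

With a 2048-word dictionary, four words give about 44 bits of entropy, which is the point of the XKCD 936 scheme.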
[Python-Dev] Re: About PEPs being discussed on Discourse
> Discourse is just flat-out easier to admin: individuals can flag posts,
> automatic spam detection, site-wide admins instead of per-list, ability to
> split topics, ability to lock topics, ability to "slow down" topics,
> time-limited suspensions, etc. I quit being an admin for any ML beyond
> python-committers because I found it too frustrating to deal w/ when
> compared to the tools I have on discuss.python.org.

Personally, I never found administering mailing lists to be all that challenging. Also, I think it's worth taking into consideration what works best for users, not just admins. There are far more interactions with discussion media by users than by administrators. My personal preference as a user is for mailing lists (everything is funneled through the same user interface rather than several not-quite-identical forum and social media interfaces; I do more than just Python stuff online, and suspect many other people do too). Still, I understand that I am a dinosaur and the world is changing, so I shouldn't be surprised that a meteor is approaching.

Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/X777BTUZ7VFYIJDNP4AIHOR4JYZOJZ4Q/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Are "Batteries Included" still a Good Thing? [was: It's now time to deprecate the stdlib urllib module]
On Wed, Mar 30, 2022, 12:02 PM Toshio Kuratomi wrote:

> As just one example, I found two interesting items in the discussion
> started by Skip about determining what modules don't have maintainers
> just downstream of this.

Age in snake years doesn't necessarily correlate well with one's desire to take a deep dive into the documentation. Just sayin'... These answers might have been there waiting for a more diligent search on my part.

Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/QRBIANZOIJHGKKQVNBZ7PHECGTFHBPBI/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Are "Batteries Included" still a Good Thing? [was: It's now time to deprecate the stdlib urllib module]
> > There's the CODEOWNERS file: > https://github.com/python/cpython/blob/main/.github/CODEOWNERS Thanks. Never would have thought there was such a thing. I was looking for files with "maintain" in them. Skimming it, it would seem that most of the stuff in Lib or Modules isn't really associated with a particular person or small group. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/PPQ5LVXVPAZJ5OHQX36HYHJQ3TDR76NX/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Are "Batteries Included" still a Good Thing? [was: It's now time to deprecate the stdlib urllib module]
I was trying to think through how a "remote" stdlib might work. In the process, I got to wondering if there are known "specialists" for various current modules. Every now and then I still get assigned (or at least made nosy) about something to do with the csv module. Is there an official module-by-module list of maintainers? I was thinking about this in the context of the urllib discussion. Whether or not it might be a candidate to keep long term (as one of the handful of modules required to build and test CPython), if there are known maintainers of specific modules or packages, I think it might be worthwhile to give them the chance to chime in. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/NHUDBM2NBUIRSQOYSPBCOJIW45FCY2TY/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Are "Batteries Included" still a Good Thing? [was: It's now time to deprecate the stdlib urllib module]
> What happens when the new maintainer puts malware in the next release of
> a package in sumo.txt?
> Will core devs be blamed for listing it?
> As a user, how do I determine if I can trust the packages there? (This
> is easily the hardest part of finding and installing a package from
> PyPI, though many people seem to skip it.)

I will point out that you quoted my entire post except for the most important line, the one which reads:

> Just thinking out loud...

I am still thinking out loud, so keep that in mind. (Perhaps this belongs on one of the two ideas groups, but this is where things started, so I will continue here for now.) Consider a hypothetical situation.

- Suppose Python v. X is the last version delivered with batteries, so v. Y lacks a json module (assuming that's not determined to be so important that it must be delivered as part of the CPython installer proper).
- Downstream, some Linux distro realizes their users still want batteries (as do their Python admin tools), so recreates a Python package with them. Their package of v. Y *does* include (at least) the json battery.
- One of the maintainers of the now externally maintained json battery takes it upon themselves to protest an incursion into East WhatsItStan by West WhatsItStan in the worst manner possible and inserts code to scrub the disks of all computers homed in West WhatsItStan (as the primary maintainer of some widely installed (Node? JS?) package did).
- Some poor West WhatsItStan-ian upgrades their Linux distro, only to find when they next run their favorite Python application that their disk drive is wiped clean.

Note that I haven't postulated the existence of a sumo.txt file. Whether the CPython distribution contains batteries or not, someone will recreate the batteries, if not in toto, at least piecemeal (some application or package depends on json and will blindly pull it in). Who gets the blame here? The core Python devs (because json "always came with Python")?
The maintainers of the Linux distro for recreating the batteries but not having any West WhatsItStan-ian testers? The new maintainers of the now external (and corrupted) json package? Maybe the several levels of indirection would serve to insulate the Python devs, but maybe not.

> If they do exert control, why not keep it in stdlib?

Perhaps that is the best route. Someone here (Stéfane Fermigier, it seems) questioned whether batteries included are such a good idea. It's quite possible that enough core devs disagree with that sentiment that the batteries will stay (and perhaps grow in number, though more slowly than in the past). That said, if enough devs agree with Stéfane, then what's the best route forward? Discard all batteries and let the chips fall where they may? Extract the dead batteries (and presumably their docs and test bits) into separate github.com/python repositories? Ask others to extract them into their own non-github.com/python repositories?

One of the common reasons old platform support is dropped from CPython (OS/2 or AmigaOS anyone?) is that the maintenance load for the core devs is too high relative to the community benefit derived from supporting small minority platforms. The unlucky modules named by PEP 594 are about to suffer the same fate. (I'm not arguing that they shouldn't be deleted.) Once PEP 594 has been implemented, all the low-hanging fruit will have been picked. At that point, it's keep everything or keep (almost) nothing. I think that will largely depend on the willingness of the core devs to keep maintaining modules they may well never use in their own work (like getopt and wave).

One thing I think is obvious: if you remove all the batteries not deemed absolutely essential to build and test CPython, you have to somehow symbolically say, "these correspond to the various batteries which used to be in the CPython distribution." Maybe it's text in a README file, a new PEP, a sumo.txt file, or a Linux distro's packaging.
Skip

P.S. Personally, I was never fond of shutil or the logging package (as two examples — though for different reasons). We might have something better in both realms by now if they hadn't been added to the stdlib long ago. This ability of presence in the stdlib to forestall further development might well be the strongest argument to remove most batteries. On the other hand, we seem to have had little trouble cycling through several completely different regular expression modules. Maybe I'm just imagining a barrier where none exists. ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/PPGY7Q2OWFJ6JYUDAXWVW2HDGEK47C2I/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Are "Batteries Included" still a Good Thing? [was: It's now time to deprecate the stdlib urllib module]
Barry writes (in part):

> We could still distribute “sumo” releases which include all the
> batteries, but develop and maintain them outside the cpython repo,
> and even release them separately on PyPI. It’s *possible* but I
> don’t know if it’s *practical*.

to which Stephen responds (in part):

> [Emacs really didn't change much in 20 years.] [I]n
> Python, every new release makes the Mailman crew want to stop
> supporting all previous releases of Python because there's some
> feature that can't be emulated that we really love: genexps or async
> or walrus operator or

It seems to me that what we have is the possibility that, say, package P migrates to PyPI in concert with the release of Python version X, where maintenance can be picked up by the broader Python community. Whether or not it would proceed to track ongoing changes to the language isn't clear, and it's not obvious to me that less effort would be required, other than perhaps by core devs (though some of them would likely pitch in to maintain packages with which they are currently involved).

If you go with the "discard the batteries" approach, I think it would at least be worthwhile distributing a requirements.txt file ("sumo.txt" or "batteries.txt"?) which would tell someone installing Python how to reclaim the batteries of their Python youth, and help other Python implementations track a set of packages which would make them plausibly compatible with CPython. CI could still rely on that to provide as much test coverage as it does today (I think). If nothing else, it might alert the core devs to the potential for breakage of presumably widely used packages by changes to CPython.

What happens to P when Python version Y grows a new syntactic feature? Do P's maintainers fork to both continue feature growth as well as syntactic modernization?
If something like batteries.txt is created to tie a slimmed-down CPython distribution to the batteries it once contained, would the core group exert any control over unit testing, documentation, package variations and such? Just thinking out loud... Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/76ZTGO2PD6Q5NZOO5JWJRTTR2E2JY2H6/ Code of Conduct: http://python.org/psf/codeofconduct/
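To make the idea concrete, such a batteries.txt could simply be a pip requirements file. Every package name and pin below is invented for illustration; none of these exist on PyPI as far as I know:

```text
# batteries.txt -- hypothetical companion to a slimmed-down CPython X.Y
# "pip install -r batteries.txt" would reclaim the classic stdlib.
json-battery>=1.0
getopt-battery>=1.0
wave-battery>=1.0
```

Distributing the file alongside CPython would make the symbolic statement suggested above without requiring the core devs to maintain the listed packages themselves.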
[Python-Dev] Re: New PEP website is horrible to read on mobile device
Dang auto-correct... I meant "anti-tracking," in case it wasn't obvious. Skip On Wed, Mar 16, 2022, 10:19 AM Skip Montanaro wrote: > One thing I would mention though is people who can reproduce it check if >> you have any extensions enabled or other tools that can block network >> traffic. Sometimes privacy based extensions and tools can have false >> positives and block resources required to render sites correctly. >> > > (I have not yet tried to reproduce this...) > > Sure, maybe the people seeing this should do some debugging, but... My > counterargument is that in this day and age of invasive tracking and the > corresponding attempts by people to (rightly imo) suppress such tracking, > it's incumbent upon website developers to insure their sites operate in the > face of such tools. I'm referring to common tools, Brave, DuckDuckGo > anti-teaching VPN, pihole, Firefox, etc. > > Skip > > ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/7H6PLUFGTGWVSEIMGHMRXMC2Z4NEXU2X/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: New PEP website is horrible to read on mobile device
> > One thing I would mention though is people who can reproduce it check if > you have any extensions enabled or other tools that can block network > traffic. Sometimes privacy based extensions and tools can have false > positives and block resources required to render sites correctly. > (I have not yet tried to reproduce this...) Sure, maybe the people seeing this should do some debugging, but... My counterargument is that in this day and age of invasive tracking and the corresponding attempts by people to (rightly imo) suppress such tracking, it's incumbent upon website developers to insure their sites operate in the face of such tools. I'm referring to common tools, Brave, DuckDuckGo anti-teaching VPN, pihole, Firefox, etc. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/JITOYHE56PRWKFSID52WDFYKR3NNVMKB/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: 3.11 enhanced error location - can it be smarter?
> It would not be nice if the traceback module API started providing
> text with embedded escape sequences without a way to turn them off in the
> API.

I think fobj.isatty() would give the traceback module a good idea whether it's writing to a display device or not. There are a number of other complications though (APIs, platform differences, TERM environment variables or lack thereof, forcible overriding through an API, what other systems (IDLE, PyCharm, etc.) do, ...). If it seems the right place to make a change is in the traceback module, my recommendation would be to fork the existing module and publish your prototype on PyPI. Here's a PyPI module (last updated several years ago) that purports to color traceback output: https://pypi.org/project/colored-traceback/

(This really belongs on python-ideas, right?)

Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/AIOGXKOTAV6WGGQ6JHOBF4U4Q6S3RRRK/ Code of Conduct: http://python.org/psf/codeofconduct/
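The isatty() heuristic is easy to sketch. The helper names and the NO_COLOR/TERM policy below are my invention, not the traceback module's API:

```python
import os
import sys

RED, RESET = "\x1b[31m", "\x1b[0m"

def want_color(stream) -> bool:
    # Hypothetical policy: color only a real terminal, honoring the
    # common NO_COLOR convention and a dumb TERM.
    if os.environ.get("NO_COLOR") or os.environ.get("TERM") == "dumb":
        return False
    return hasattr(stream, "isatty") and stream.isatty()

def emphasize(text: str, stream=sys.stderr) -> str:
    # Wrap in an ANSI escape only when the stream looks like a display.
    return f"{RED}{text}{RESET}" if want_color(stream) else text

print(emphasize("Traceback (most recent call last):"))
```

A pipe or file redirect fails isatty(), so the escape sequences never reach logs; the environment checks cover the "forcible override" case mentioned above, at least partially.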
[Python-Dev] Re: Suggestion: a little language for type definitions
> Here is the type hint for `len`, taken from the stub file in typeshed:
>
> def len(__obj: Sized) -> int: ...
>
> Putting the mysterious double underscore naming convention aside, I do
> not find it credible that anyone capable of programming Python beyond a
> beginner level can find that "unreadable". Not by any definition of
> unreadable I can think of.

Sure, that's pretty trivial, no question. As would be the similar C declaration. But as cdecl (https://cdecl.org/, which Glenn Lindermann reminded me of) demonstrates, you can get carried away. It's the "getting carried away" parts of the (sometimes organizationally mandatory) type system in Python that are problematic for me, not the simple sized-object-in, int-out sort of thing. You have people asking questions like these:

https://discuss.python.org/t/contravariant-typing-type/12741
https://discuss.python.org/t/how-to-annotate-a-new-dict-class-with-typeddict/12723

I don't know if they are just trying to run with scissors, are way the hell off in the weeds, or if the more esoteric corners of the typing world are simply going to continue to impose themselves on the rest of us. There will always be people who want to express "declare foo as pointer to function (void) returning pointer to array 3 of int" (from the cdecl.org website). Other people have to read that. Maybe Python will eventually grow a pydecl.org domain and website to serve a similar purpose. :-)

> Even if your type system is not Turing complete, it is still going to be
> pretty powerful. We're not using Pascal any more :-) And that means that
> the types themselves are communicating some fairly complex semantics.
> Blaming the syntax for something which is inherently hard is not helpful.

I don't think anyone's blaming the syntax. I interpreted Jack's suggestion to mean that we would be able to do better with t-strings encapsulating a little language designed to cleanly describe types.
I first encountered Python in the 1993-1994 timeframe (1.0.something). Part of its appeal to me at least (and to many others I think) was that it was the anti-Perl. Perl's obfuscation wasn't in its typing. It was elsewhere (everywhere else?). With a full-fledged type system in place it seems like Python is starting to desert that niche. (Yes, I realize Perl is no longer the big dog it once was.) Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/PPFPZUUWKEBNRDTDSPNCQRC5XENB6Q62/ Code of Conduct: http://python.org/psf/codeofconduct/
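Short of a little language, naming the gnarly parts already helps. A minimal sketch (the alias name and functions are invented here, not anything from the thread):

```python
from typing import Callable

# Inline, the annotation competes with the signature for attention:
def register_verbose(cb: Callable[[], list[int]]) -> None:
    cb()

# Naming the shape keeps the signature readable:
IntListFactory = Callable[[], list[int]]

def register(cb: IntListFactory) -> None:
    cb()

register(lambda: [1, 2, 3])
```

This doesn't address the truly esoteric corners, but it keeps everyday signatures close to the "executable pseudocode" ideal.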
[Python-Dev] Re: Suggestion: a little language for type definitions
> > So if you hate type annotations because they are unreadable, then you > hate Python because Python is unreadable. > That seems rather harsh. I suspect if those of us who are uncomfortable with the typing subsystem actually hated Python we would have found our way to the exits long ago. Typing was always supposed to be optional, so I didn't worry too much about it at the time. As Jack indicated though, while it may be optional at the language level, it's often not truly optional at the organizational level. As you indicated, there are two things going on, Python syntax and the semantics which go along with it. Python's economical syntax is a terrific reflection of its runtime semantics, hence the use of the phrase "executable pseudocode" to describe Python (at least in the past). Just because you are using Python syntax for your declarations doesn't mean that (a) mapping the semantics of the desired declarations onto existing syntax will be straightforward or (b) that the semantics of those declarations will be reflected as effortlessly as it reflects runtime semantics. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/RSFLXNJIXM7TSIQS5GS3UOJCU7P2ULIU/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Suggestion: a little language for type definitions
> ... make sense of what they’re reading.

Some of us have that problem with type-embellished code now. I'm not sure a little language would be such a bad idea. Fortunately, my relationship to the working world allows me to simply ignore explicit typing.

Way, way BITD I recall leaning on a crutch to generate complex C type declarations. I no longer recall what it was called, but you gave it a restricted English description of what you wanted ("function returning pointer to function returning void pointer" or something similar) and it spit out the necessary line noise.

Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/SHHUA4SBSEIT2GFZRYUTGF4EJHETHUY3/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Is anyone using 15-bit PyLong digits (PYLONG_BITS_IN_DIGIT=15)?
Perhaps I missed it, but maybe an action item would be to add a buildbot which configures for 15-bit PyLong digits. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/ZXPOXNYDZXAI3ZVXMSQCSL6YFLQDKKMA/ Code of Conduct: http://python.org/psf/codeofconduct/
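Short of a buildbot, any interpreter can report which digit size it was built with; sys.int_info exposes the configuration directly:

```python
import sys

# bits_per_digit is 15 on builds configured with PYLONG_BITS_IN_DIGIT=15
# (digits stored in 2 bytes), and 30 on the default builds (4 bytes).
print(sys.int_info.bits_per_digit, sys.int_info.sizeof_digit)
```

A buildbot configured for 15-bit digits could run exactly this check as a sanity step before the test suite.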
[Python-Dev] Re: issues-test-2 spam?
> Is anyone else also getting multiple subscription notices? > Yup. In an earlier thread (here? discuss.python.org?) I thought it was established that someone was working on something related to Python bug tracking in GitHub. Or something like that. I've just been deleting them. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/MOS6XIDBWLQYHKO4BFO64WF4CXSM5YCZ/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: "immortal" objects and how they would help per-interpreter GIL
It might be worth (re)reviewing Sam Gross's nogil effort to see how he approached this: https://github.com/colesbury/nogil#readme He goes into plenty of detail in his design document about how he deals with immortal objects. From that document: Some objects, such as interned strings, small integers, statically allocated PyTypeObjects, and the True, False, and None objects stay alive for the lifetime of the program. These objects are marked as immortal by setting the least-significant bit of the local reference count field (bit 0). The Py_INCREF and Py_DECREF macros are no-ops for these objects. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/33E2H6JD46VWYBEEV7YB4EIUEZ5JODSQ/ Code of Conduct: http://python.org/psf/codeofconduct/
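The categories Sam singles out are observable from pure Python, even though the refcount-bit trick itself lives in C. A small demonstration of the kinds of objects that would qualify as immortal:

```python
import sys

# CPython caches small ints (-5..256); every occurrence is the same
# object, alive for the life of the process -- exactly the class of
# object the nogil design marks immortal.
a = 256
b = int("256")
print(a is b)

# None is a singleton too; its reported refcount is large (and in
# CPython versions that adopted immortality, a fixed sentinel value).
print(sys.getrefcount(None))
```

Since such objects never die, skipping their refcount updates entirely is safe, which is what setting the marker bit buys.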
[Python-Dev] Re: Optimizing literal comparisons and contains
> That is not entirely true: > https://github.com/python/cpython/pull/29639#issuecomment-974146979 The only places I've seen "if 0:" or "if False:" in live code were for debugging. Optimizing that hardly seems necessary. In any case, the original comment was about comparisons of two constants. I suppose sweeping up all of that into a constant expression folding/elimination step performed on the AST and/or during peephole optimization would cover both cases. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/HW3WJUPENKNZ2NNJ52LRRM7HLFWB44XH/ Code of Conduct: http://python.org/psf/codeofconduct/
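For what it's worth, a minimal sketch of such an AST pass might look like the following (my illustration, not the actual CPython optimizer): fold a comparison only when every operand is a literal constant, which sidesteps the name-rebinding problem entirely.

```python
# Hypothetical AST pass: fold Compare nodes whose operands are all
# Constant nodes into a single Constant. Names are left untouched
# because they could be rebound at runtime.
import ast

class FoldConstantCompares(ast.NodeTransformer):
    def visit_Compare(self, node):
        self.generic_visit(node)  # fold nested comparisons first
        operands = [node.left, *node.comparators]
        if all(isinstance(o, ast.Constant) for o in operands):
            # Safe to evaluate at compile time: only literals involved.
            value = eval(compile(ast.Expression(body=node), "<fold>", "eval"))
            return ast.copy_location(ast.Constant(value), node)
        return node

tree = ast.parse("x = 2 == 2")
tree = ast.fix_missing_locations(FoldConstantCompares().visit(tree))
folded = tree.body[0].value
assert isinstance(folded, ast.Constant) and folded.value is True
```

A real implementation would live in C (in `ast_opt.c`) and need care around rich comparisons on odd constant types, but the shape of the transformation is the same.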
[Python-Dev] Re: Optimizing literal comparisons and contains
> Many operations involving two literals are optimized (to a certain level). So > it sort of surprises me that literal comparisons are not optimized and > literal contains only convert the right operand to a constant if possible. > I'd like to implement optimizations for these especially for the literal > contains. There is a TODO in the ast optimizer for literal comparisons as > well, and that's another reason I would like to have these added. Though not having paid much attention to this over the years, I'm pretty sure I've seen the topic float past from time to time. As I recall, optimizing expressions involving two constants isn't a big win in Python because that sort of expression occurs rarely in anything other than test code. In C/C++ and its cousins, preprocessors routinely expand names to constants, so 2 == 2 might well be seen by the compiler even though the author actually wrote MUMBLE == FRAZZLE. If a Python programmer wrote such an expression, MUMBLE and FRAZZLE would be names bound to a particular object, and could — theoretically — be rebound at runtime, so such an expression couldn't safely be optimized. So, while you could optimize expressions involving just constants, the benefit would be exceedingly small compared to the effort to write and maintain the optimization code. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/4TFW3Q2X42XKP4IK263ZXOGAZ3XXYUH4/ Code of Conduct: http://python.org/psf/codeofconduct/
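The dis module makes the distinction visible. Whether a two-literal comparison gets folded varies by CPython version, so no claim is made about it here, but a comparison of names can never be folded, because each name must be looked up at run time:

```python
# Disassemble a name-based comparison at module level. Both names are
# fetched and compared at run time in every CPython version, since
# either could be rebound before the expression executes.
import dis

names = [ins.opname for ins in dis.get_instructions("MUMBLE == FRAZZLE")]
assert names.count("LOAD_NAME") == 2  # both names fetched at run time
assert "COMPARE_OP" in names          # comparison performed at run time
```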
[Python-Dev] Re: Python multithreading without the GIL
Sam> I think the performance difference is because of different versions of NumPy. Thanks all for the help/input/advice. It never occurred to me that two relatively recent versions of numpy would differ so much for the simple tasks in my script (array creation & transform). I confirmed this by removing 1.21.3 and installing 1.19.4 in my 3.9 build. I also got a little bit familiar with pyperf, and as a "stretch" goal completely removed random numbers and numpy from my script. (Took me a couple tries to get my array init and transposition correct. Let's just say that it's been a while. Numpy *was* a nice crutch...) With no trace of numpy left I now get identical results for single-threaded matrix multiply (a size==1, b size==2): 3.9: matmul: Mean +- std dev: 102 ms +- 1 ms nogil: matmul: Mean +- std dev: 103 ms +- 2 ms and a nice speedup for multi-threaded (a size==3, b size==6, nthreads=3): 3.9: matmul_t: Mean +- std dev: 290 ms +- 13 ms nogil: matmul_t: Mean +- std dev: 102 ms +- 3 ms Sam> I'll update the version of NumPy for "nogil" Python if I have some time this week. I think it would be sufficient to alert users to the 1.19/1.21 performance differences and recommend they force install 1.19 in non-nogil builds for testing purposes. Hopefully adding a simple note to your README will take less time than porting your changes to numpy 1.21 and adjusting your build configs/scripts. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/5RXRTNNCYBCILMVATHODFGAZ5ZEQXRZI/ Code of Conduct: http://python.org/psf/codeofconduct/
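For anyone reproducing numbers like the "Mean +- std dev" lines above, a pyperf harness for a pure-Python matrix multiply looks roughly like this. The `matmul` here is my stand-in, not the actual script from the gist:

```python
# Sketch of a pyperf benchmark for a pure-Python matrix multiply.
# pyperf (pip install pyperf) re-executes the script in worker
# processes, so the runner must be invoked from a script's
# "__main__" guard, as bench() documents below.

def matmul(a, b):
    # transpose b once so the inner loop walks two flat lists
    bt = list(zip(*b))
    return [[sum(x * y for x, y in zip(row, col)) for col in bt]
            for row in a]

def bench():
    # Call this from "if __name__ == '__main__':" in a real script.
    import pyperf
    runner = pyperf.Runner()
    a = [[1.0] * 50 for _ in range(50)]
    b = [[2.0] * 50 for _ in range(50)]
    runner.bench_func("matmul", matmul, a, b)
```

Run once under each interpreter (stock 3.9, nogil) and pyperf reports the mean and standard deviation in exactly the format quoted above.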
[Python-Dev] Re: Python multithreading without the GIL
> Remember that pystone is a terrible benchmark. I understand that. I was only using it as a spot check. I was surprised at how much slower my (threaded or unthreaded) matrix multiply was on nogil vs 3.9+. I went into it thinking I would see an improvement. The Performance section of Sam's design document starts: As mentioned above, the no-GIL proof-of-concept interpreter is about 10% faster than CPython 3.9 (and 3.10) on the pyperformance benchmark suite. so it didn't occur to me that I'd be looking at a slowdown, much less by as much as I'm seeing. Maybe I've somehow stumbled on some instruction mix for which the nogil VM is much worse than the stock VM. For now, I prefer to think I'm just doing something stupid. It certainly wouldn't be the first time. Skip P.S. I suppose I should have cc'd Sam when I first replied to this thread, but I'm doing so now. I figured my mistake would reveal itself early on. Sam, here's my first post about my little "project." https://mail.python.org/archives/list/python-dev@python.org/message/WBLU6PZ2RDPEMG3ZYBWSAXUGXCJNFG4A/ ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/CGT4EMEA7JEH6CIRTB7Z5UUIKWKREAMF/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Python multithreading without the GIL
Skip> 1. I use numpy arrays filled with random values, and the output array is also a numpy array. The vector multiplication is done in a simple for loop in my vecmul() function. CHB> probably doesn't make a difference for this exercise, but numpy arrays make lousy replacements for a regular list ... Yeah, I don't think it should matter here. Both versions should be similarly penalized. Skip> The results were confusing, so I dredged up a copy of pystone to make sure I wasn't missing anything w.r.t. basic execution performance. I'm still confused, so will keep digging. CHB> I'll be interested to see what you find out :-) I'm still scratching my head. I was thinking there was something about the messaging between the main and worker threads, so I tweaked matmul.py to accept 0 as a number of threads. That means it would call matmul which would call vecmul directly. The original queue-using versions were simply renamed to matmul_t and vecmul_t. I am still confused. Here are the pystone numbers, nogil first, then the 3.9 git tip: (base) nogil_build% ./bin/python3 ~/cmd/pystone.py Pystone(1.1.1) time for 5 passes = 0.137658 This machine benchmarks at 363218 pystones/second (base) 3.9_build% ./bin/python3 ~/cmd/pystone.py Pystone(1.1.1) time for 5 passes = 0.207102 This machine benchmarks at 241427 pystones/second That suggests nogil is indeed a definite improvement over vanilla 3.9. However, here's a quick nogil v 3.9 timing run of my matrix multiplication, again, nogil followed by 3.9 tip: (base) nogil_build% time ./bin/python3 ~/tmp/matmul.py 0 10 a: (160, 625) b: (625, 320) result: (160, 320) -> 51200 real 0m9.314s user 0m9.302s sys 0m0.012s (base) 3.9_build% time ./bin/python3 ~/tmp/matmul.py 0 10 a: (160, 625) b: (625, 320) result: (160, 320) -> 51200 real 0m4.918s user 0m5.180s sys 0m0.380s What's up with that? Suddenly nogil is much slower than 3.9 tip. No threads are in use. 
I thought perhaps the nogil run somehow didn't use Sam's VM improvements, so I disassembled the two versions of vecmul. I won't bore you with the entire dis.dis output, but suffice it to say that Sam's instruction set appears to be in play: (base) nogil_build% PYTHONPATH=$HOME/tmp ./bin/python3 Python 3.9.0a4+ (heads/nogil:b0ee2c4740, Oct 30 2021, 16:23:03) [GCC 9.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import matmul, dis >>> dis.dis(matmul.vecmul) 26 0 FUNC_HEADER 11 (11) 28 2 LOAD_CONST 2 (0.0) 4 STORE_FAST 2 (result) 29 6 LOAD_GLOBAL 3 254 ('len'; 254) 9 STORE_FAST 8 (.t3) 11 COPY 9 0 (.t4 <- a) 14 CALL_FUNCTION 9 1 (.t4 to .t5) 18 STORE_FAST 5 (.t0) ... So I unboxed the two numpy arrays once and used lists of lists for the actual work. The nogil version still performs worse by about a factor of two: (base) nogil_build% time ./bin/python3 ~/tmp/matmul.py 0 10 a: (160, 625) b: (625, 320) result: (160, 320) -> 51200 real 0m9.537s user 0m9.525s sys 0m0.012s (base) 3.9_build% time ./bin/python3 ~/tmp/matmul.py 0 10 a: (160, 625) b: (625, 320) result: (160, 320) -> 51200 real 0m4.836s user 0m5.109s sys 0m0.365s Still scratching my head and am open to suggestions about what to try next. If anyone is playing along from home, I've updated my script: https://gist.github.com/smontanaro/80f788a506d2f41156dae779562fd08d I'm sure there are things I could have done more efficiently, but I would think both Python versions would be similarly penalized by dumb s**t I've done. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/4JSJFOWQPZHUAUGDVRGIU6LTF7QNXTLD/ Code of Conduct: http://python.org/psf/codeofconduct/
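The "unboxing" mentioned above can be as simple as converting each array to nested lists once, up front, so the hot loops only ever touch plain Python floats. A sketch, under the assumption that numpy is used only for initialization (sizes shrunk for illustration; names are mine, not the gist's):

```python
# Sketch: build the inputs with numpy, then convert to plain lists of
# lists once so both interpreters run identical pure-Python loops.
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((16, 25))
b = rng.random((25, 32))

a_rows = a.tolist()      # 16 row-lists of Python floats
b_cols = b.T.tolist()    # transpose, then 32 column-lists

def vecmul(row, col):
    # simple for-loop dot product -- no numpy in the hot path
    total = 0.0
    for x, y in zip(row, col):
        total += x * y
    return total

result = [[vecmul(r, c) for c in b_cols] for r in a_rows]
assert np.allclose(result, a @ b)  # agrees with numpy's matmul
```

Since neither build touches numpy in the inner loops, any remaining timing difference has to come from the interpreters themselves.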
[Python-Dev] Re: Python multithreading without the GIL
> > Did you try running the same code with stock Python? > > One reason I ask is that, IIUC, you are using numpy for the individual > vector operations, and numpy already releases the GIL in some > circumstances. > I had not run the same code with stock Python (but see below). Also, I only used numpy for two bits: 1. I use numpy arrays filled with random values, and the output array is also a numpy array. The vector multiplication is done in a simple for loop in my vecmul() function. 2. Early on I compared my results with the result of numpy.matmul just to make sure I had things right. That said, I have now run my example code using both PYTHONGIL=0 and PYTHONGIL=1 of Sam's nogil branch as well as the following other Python3 versions: * Conda Python3 (3.9.7) * /usr/bin/python3 (3.9.1 in my case) * 3.9 branch tip (3.9.7+) The results were confusing, so I dredged up a copy of pystone to make sure I wasn't missing anything w.r.t. basic execution performance. I'm still confused, so will keep digging. It would also be fun to see David Beazley’s example from his seminal talk: > > https://youtu.be/ph374fJqFPE > Thanks, I'll take a look when I get a chance. Might give me the excuse I need to wake up extra early and tag along with Dave on an early morning bike ride. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/YZYJIDFH6Y3YCD3LCBQPRDQXN2JGJA7N/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Python multithreading without the GIL
Guido> To be clear, Sam’s basic approach is a bit slower for single-threaded code, and he admits that. But to sweeten the pot he has also applied a bunch of unrelated speedups that make it faster in general, so that overall it’s always a win. But presumably we could upstream the latter easily, separately from the GIL-freeing part. Something just occurred to me. If you upstream all the other goodies (register VM, etc), when the time comes to upstream the no-GIL parts won't the complaint then be (again), "but it's slower for single-threaded code!" ? ;-) Onto other things. For about as long as I can remember, the biggest knock against Python was, "You can never do any serious multi-threaded programming with it. It has this f**king GIL!" I know that attempts to remove it have been made multiple times, beginning with (I think) Greg Stein in the 1.4 timeframe. In my opinion, Sam's work finally solves the problem. Not being a serious parallel programming person (I have used multi-threading a bit in Python, but only for obviously I/O-bound tasks), I thought it might be instructive — for me, at least — to kick the no-GIL tires a bit. Not having any obvious application in mind, I decided to implement a straightforward parallel matrix multiply. (I think I wrote something similar back in the mid-80s in a now defunct Smalltalk-inspired language while at GE.) Note that this was just for my own edification. I have no intention of trying to supplant numpy.matmul() or anything like that. It splits up the computation in the most straightforward (to me) way, handing off the individual vector multiplications to a variable sized thread pool. The code is here: https://gist.github.com/smontanaro/80f788a506d2f41156dae779562fd08d Here is a graph of some timings. My machine is a now decidedly long-in-the-tooth Dell Precision 5520 with a 7th Gen Core i7 processor (four cores + hyperthreading). The data for the graph come from the built-in bash time(1) command.
As expected, wall clock time drops as you increase the number of cores until you reach four. After that, nothing improves, since the logical HT cores don't actually have their own ALU (just instruction fetch/decode I think). The slope of the real time improvement from two cores to four isn't as great as one to two, probably because I wasn't careful about keeping the rest of the system quiet. It was running my normal mix, Brave with many open tabs + Emacs. I believe I used A=240x3125, B=3125x480, giving a 240x480 result, so 115,200 vector multiplies. [image: matmul.png] All-in-all, I think Sam's effort is quite impressive. I got things going in fits and starts, needing a bit of help from Sam and Vadym Stupakov to get the modified numpy implementation working (crosstalk between my usual Conda environment and the no-GIL stuff). I'm sure there are plenty of problems yet to be solved related to extension modules, but I trust smarter people than me can solve them without a lot of fuss. Once nogil is up-to-date with the latest 3.9 release I hope these changes can start filtering into main. Hopefully that means a 3.11 release. In fact, I'd vote for pushing back the usual release cycle to accommodate inclusion. Sam has gotten this so close it would be a huge disappointment to abandon it now. The problems faced at this point would have been amortized over years of development if the GIL had been removed 20 years ago. I say go for it. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/WBLU6PZ2RDPEMG3ZYBWSAXUGXCJNFG4A/ Code of Conduct: http://python.org/psf/codeofconduct/
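The split itself is only a few lines. Here is a rough sketch of the approach (using concurrent.futures rather than the hand-rolled queue in the gist; the names are illustrative, not the actual code): each output row becomes one task for a fixed-size thread pool.

```python
# Each task computes one row of the result: the dot product of an
# input row against every column of b. With the GIL this buys nothing
# for pure-Python arithmetic; on a nogil build the rows really do run
# in parallel.
from concurrent.futures import ThreadPoolExecutor
from functools import partial

def vecmul(b_cols, row):
    return [sum(x * y for x, y in zip(row, col)) for col in b_cols]

def matmul_t(a, b, nthreads=4):
    b_cols = list(zip(*b))  # transpose once; shared read-only state
    with ThreadPoolExecutor(max_workers=nthreads) as pool:
        return list(pool.map(partial(vecmul, b_cols), a))

assert matmul_t([[1, 2], [3, 4]], [[5, 6], [7, 8]], nthreads=2) == [
    [19, 22], [43, 50]]
```

Because the workers only read `b_cols` and each writes its own result row, there is no shared mutable state to protect, which is what makes this an easy first test case for a GIL-free interpreter.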
[Python-Dev] Re: Python multithreading without the GIL
Mohamed> I love everything about this - but I expect some hesitancy due to this "Multithreaded programs are prone to concurrency bugs.". Paul> The way I see it, the concurrency model to be used is selected by developers. They can choose between ... I think the real intent of the statement Mohamed quoted is that just because your program works in a version of Python with the GIL doesn't mean it will work unchanged in a GIL-free world. As we all know, the GIL can hide a multitude of sins. I could be paraphrasing Tim Peters here without realizing it explicitly. It kinda sounds like something he might say. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/4OWK2DQKQOZZDPNWA7KC3NAUTWOBFOND/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: My apologies to the list
> > For the record, my personal arrangement for years has been to read most > open source mailing-lists using GMane, on a NNTP reader separate from my > main mail client. This works fine when I don't want to read open > source-related e-mails :-) > And if you're not an NNTP person (anymore), filters in Gmail (and I assume other mail readers/apps) allow you to sequester mails into separate folders which you can ignore if you like. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/BIYYUXBJVIF3J55FMGZ7Z3D4JRUUK637/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Notes on PEP 8
> > However, it has become a de facto standard for all Python code, and in the > document itself, there is frequent wording akin to "Identifiers used in the > standard library must be ASCII compatible ...", and even advice for third > party libraries. > > Which I think is acknowledging that PEP 8 is indeed not only about the > standard library. > > So maybe there should be a bit of text about that at the top. > This was my thought as I read the original thread yesterday. There are tools in the wild which base their style recommendations/enforcements on PEP 8. Heck, there is even a tool on PyPI called "pep8." While 2.x is out of support, it *is* still used by many organizations. If nothing else, it would seem to be useful to branch the pep8 repo ("lastpy2" perhaps?) just before applying Chris's updates. That would allow enterprising folks to easily fork and reference back to the last point where the PEP 8 text did mention Python 2.x. (This no longer applies to me personally, as I have fully gone over to Python 3, but at my last job there was still plenty of Python 2 code to be had.) Just a thought... Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/UJUBBRQWXNGHXRGFO7WFXFL7DHKQQE6T/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: GDB not breaking at the right place
> > I'm having a hard time debugging some virtual machine code because GDB > won't break where it's supposed to. > A quick follow-up. The GDB folks were able to reproduce this in an XUbuntu 20.04 VM. I don't know if they tried straight Ubuntu, but as the main difference between the two is the user interface it seems likely the bug might surface there as well. The use of a VM thus provides another option as a workaround for me, though my simple-minded label-to-line number script works as well. Skip > ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/W5PHGSKBDCWWLOFNUYFX3USO4OSWHIIS/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: GDB not breaking at the right place
me> I'm having a hard time debugging some virtual machine code because GDB won't break where it's supposed to. Here's a quick follow-up. I tried a number of different values of OPT during configuration and compilation, but nothing changed the result. I could never (and still can't) get GDB to break at the line it announces when a breakpoint is set using the TARGET_* labels generated for computed gotos. I also backed away from my dev branch and switched to up-to-date versions of main and 3.10. No difference... So, I opened a bug against GDB: https://sourceware.org/bugzilla/show_bug.cgi?id=27907 and ... wait for it ... The person who responded (Keith Seitz @ RedHat) was unable to reproduce the problem. He encouraged me to build GDB and try again, and with some effort I was able to build an executable (wow, the GDB build process makes building Python look like a piece of cake). Still, the difference between the announced and actual line numbers of the breakpoint remains. I disabled Python support in GDB by renaming my ~/.gdbinit file which declares add-auto-load-safe-path /home/skip/src/python/rvm That had no effect either. I don't have any LD_*_PATH environment variables set. I think I've run out of things to try. I don't recall anyone here indicating they'd tried to replicate the problem. Could I bother someone to give it a whirl? It's easy. Just run GDB referring to a Python executable with computed gotos enabled and debug symbols included. At the (gdb) prompt, execute: b ceval.c:_PyEval_EvalFrameDefault:TARGET_LOAD_CONST run and compare the line number announced when the breakpoint is set with the line number announced when execution stops. On my main branch (updated yesterday), using OPT='-O3 -g -Wall' I get an absolutely bonkers break in the action: % ~/src/binutils-gdb/gdb/gdb ./pythonild.sh GNU gdb (GDB) 11.0.50.20210524-git Copyright (C) 2021 Free Software Foundation, Inc. 
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-pc-linux-gnu". Type "show configuration" for configuration details. For bug reporting instructions, please see: <https://www.gnu.org/software/gdb/bugs/>. Find the GDB manual and other documentation resources online at: <http://www.gnu.org/software/gdb/documentation/>. For help, type "help". Type "apropos word" to search for commands related to "word"... Reading symbols from ./python... (gdb) b ceval.c:_PyEval_EvalFrameDefault:TARGET_LOAD_CONST Breakpoint 1 at 0x5e934: file Python/ceval.c, line 1836. (gdb) r Starting program: /home/skip/src/python/rvm/python warning: the debug information found in "/lib64/ld-2.31.so" does not match "/lib64/ld-linux-x86-64.so.2" (CRC mismatch). [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". Breakpoint 1, 0x555b2934 in _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at Python/ceval.c:2958 2958 DISPATCH(); LOAD_CONST is, in fact, defined at line ceval.c:1836. Line 2958 is the last line of the implementation of LOAD_NAME, just a few lines away :-/. If I get more detailed with the configure/compile options I can get the difference down to a few lines, but I've yet to see it work correctly. I'm currently offering OPT='-g -O0 -Wall' --with-pydebug --with-trace-refs to the configure script. In most any other program, breaking a few lines ahead of where you wanted would just be an annoyance, but in the Python virtual machine, it makes the breakpoint useless.
Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/H5KAVDYA4BKRH6ZXIWTY246X662WOPW2/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: GDB not breaking at the right place
> Just turn off optimisation when you want to single-step. But I don't just want to single-step. I want to break at the target label associated with a specific opcode. (I am - in fits and starts - working on register-based virtual machine instructions). If I'm working on, for example, the register version of POP_JUMP_IF_FALSE, stepping through a bunch of instances of the working register version of LOAD_FAST or EXTENDED_ARG isn't going to be helpful. Further, I have a set of GDB commands I want to execute at each breakpoint. And I want to do this across GDB sessions (so, I save breakpoints and user-defined commands in a GDB command file). Just to make things concrete, here's what I want to print every time I hit my JUMP_IF_FALSE_REG statement's code: define print_opargs_jump p/x oparg p (oparg >> 16) & 0xff | (oparg >> 8) & 0xff p oparg & 0xff p *fastlocals@4 end This break command should do the trick: break ceval_reg.h:_PyEval_EvalFrameDefault:TARGET_JUMP_IF_FALSE_REG commands print_opargs_jump end but it doesn't. GDB stops execution in some completely other one of the 50+ instructions I've implemented so far. And not even at the start of said other instruction. This problem occurs whether I compile with -g -Og or -g -O0. The only difference between the two is that GDB stops execution at different incorrect locations. That, as you might imagine, makes debugging difficult. Setting breakpoints by line number works as expected. In all the years I've been using GDB I've never had a problem with that. However, that's fragile in the face of changing offsets for different instructions in the C code (add a new instruction, add or delete C code, reorder instructions for some reason, etc.), so those kinds of breakpoints are difficult to maintain.
I wrote a crude little script that converts the above break command into this: break ceval_reg.h:440 commands print_opargs_jump end This is just a workaround until someone (unlikely to be me) solves the problem with breaking at labels. If someone could refute or verify my contention that breaking via labels is broken, I'd much appreciate it. I've not yet checked setting labeled breakpoints directly in ceval.c. To minimize merge conflicts, I'm implementing my register instructions in a new header file, Python/ceval_reg.h, which is #included in ceval.c at the desired spot. Maybe that factors into the issue. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/DE33TZBPIRV5DEOFFZBFDXPWRLZE47IB/ Code of Conduct: http://python.org/psf/codeofconduct/
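The workaround script can be quite small. Here is a rough sketch (my reconstruction, not the actual script): scan the source file for the label at the start of a line and rewrite the break command to use the line number instead.

```python
# Convert "break file:function:LABEL" into "break file:lineno" by
# scanning the file for "LABEL:" at the start of a line. Crude, but it
# survives instructions being added, deleted, or reordered, since the
# label is looked up fresh each time the command file is regenerated.
import re

def label_to_line(path, label):
    pat = re.compile(rf"^\s*{re.escape(label)}\s*:")
    with open(path) as src:
        for lineno, text in enumerate(src, start=1):
            if pat.match(text):
                return lineno
    raise LookupError(f"label {label} not found in {path}")

def rewrite_break(cmd):
    m = re.match(r"break\s+(\S+):\w+:(\w+)$", cmd)
    if m is None:
        return cmd  # not a label-style breakpoint; pass through
    path, label = m.group(1), m.group(2)
    return f"break {path}:{label_to_line(path, label)}"
```

Feeding each saved `break ceval_reg.h:_PyEval_EvalFrameDefault:TARGET_...` line through `rewrite_break` produces the plain `file:line` form GDB handles correctly, and the `commands ... end` blocks can be copied through unchanged.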
[Python-Dev] Re: GDB not breaking at the right place
> I strongly suggest to only build Python with -O0 when using gdb. -Og > enables too many optimizations which makes gdb less usable. Thanks, Victor. It never made sense to me that you would want any optimizations enabled when truly debugging code (as opposed to wanting debug symbols and a sane traceback in production code). I'm getting more convinced that the problem I'm seeing is a GCC/GDB thing, particularly because I can move the erroneous stopping point by changing the GCC optimization level. I'll probably open a bugzilla report just so it's on that team's radar screen. In the meantime, to get going again I wrote a crude script which maps the file:function:label form to file:linenumber form. That way I can save/restore breakpoints across GDB sessions and still avoid problems when the offsets to specific instructions change. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/AOKKJHCRLEYU64V425AJHMM46CCYW55M/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: GDB not breaking at the right place
On Fri, May 21, 2021 at 2:48 PM Guido van Rossum wrote: > I suspect that you're running into the issue where compiler optimizations > are *forced* on for ceval.c. > > There's a comment near the top about this. Just comment out this line: > > #define PY_LOCAL_AGGRESSIVE > > We tried to define that macro conditionally, but something broke because > the C stack frame for _PyEval_EvalFrameDefault became enormous without > optimization, and some tests failed. (Maybe it was Victor's refleak test? > The git history will tell you more if you're really interested.) > > This is a nasty trap (I fell in myself, so that makes it nasty :-), but > the proper fix would be convoluted -- we'd need a way to enable or disable > this separately so the specific test can run but developers trying to step > through ceval.c will be able to see the unoptimized code. > Thanks, Guido, however that doesn't seem to help. I grepped around for PY_LOCAL_AGGRESSIVE in the source. It seems to be specific to MSVC. Here's the definition in Include/pyport.h with a slight change to the indentation to demonstrate its scope better: #if defined(_MSC_VER) # if defined(PY_LOCAL_AGGRESSIVE) /* enable more aggressive optimization for MSVC */ /* active in both release and debug builds - see bpo-43271 */ #pragma optimize("gt", on) # endif /* ignore warnings if the compiler decides not to inline a function */ # pragma warning(disable: 4710) /* fastest possible local call under MSVC */ # define Py_LOCAL(type) static type __fastcall # define Py_LOCAL_INLINE(type) static __inline type __fastcall #else # define Py_LOCAL(type) static type # define Py_LOCAL_INLINE(type) static inline type #endif I can move the actual point where GDB breaks by replacing -Og with -O0, but it still breaks at the wrong place, just a different wrong place. If I set a breakpoint by line number, it stops at the proper place. 
Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/YLD6WPVPAPX4F26R2JTQ35J7NOJQWVBF/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] GDB not breaking at the right place
I'm having a hard time debugging some virtual machine code because GDB won't break where it's supposed to. Here's my breakpoint #2: 2 breakpoint keep y 0x556914fd ceval_reg.h:_PyEval_EvalFrameDefault:TARGET_JUMP_IF_FALSE_REG breakpoint already hit 1 time p/x oparg p (oparg >> 16) & 0xff | (oparg >> 8) & 0xff p oparg & 0xff p *fastlocals@4 but when it breaks, it's not at the beginning of the case (that is, where the TARGET_JUMP_IF_FALSE_REG label is defined), but inside the SETLOCAL macro of the COMPARE_OP_REG case! (That is, it's not anywhere close to the correct place.) case TARGET(COMPARE_OP_REG): { int dst = REGARG4(oparg); int src1 = REGARG3(oparg); int src2 = REGARG2(oparg); int cmpop = REGARG1(oparg); assert(cmpop <= Py_GE); PyObject *left = GETLOCAL(src1); PyObject *right = GETLOCAL(src2); PyObject *res = PyObject_RichCompare(left, right, cmpop); SETLOCAL(dst, res); if (res == NULL) goto error; DISPATCH(); } It actually breaks in the Py_XDECREF which is part of the SETLOCAL macro: #define SETLOCAL(i, value) do { PyObject *tmp = GETLOCAL(i); \ GETLOCAL(i) = value; \ Py_XDECREF(tmp); } while (0) (actually, in the Py_DECREF underneath the Py_XDECREF macro). I've configured like so: ./configure --with-pydebug --with-trace-refs --with-assertions Python/ceval.c is compiled with this GCC command: gcc -pthread -c -Wno-unused-result -Wsign-compare -g -Og -Wall -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I. -I./Include -DPy_BUILD_CORE -o Python/ceval.o Python/ceval.c I don't know if this is a GCC problem, a GDB problem, or a Skip problem. Is there more I can do to help the tool chain break at the correct place? It seems that if I break at a hard line number, GDB does the right thing, but I'd kind of prefer to use the symbolic label instead.
I rather like the notion of breaking at a label name, but if GCC/GDB can't figure things out, I guess I'll have to live with line numbers. Thanks, Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/EOTDLRRUR6J6KMM6ZKBDJDAZLBEY6BBP/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Can't sync cpython main to my fork
> Maybe others have different workflows, but I don't see much of a need for
> keeping your fork's main branch up to date. My workflow is something like
> this:
>
> % git remote -v
> origin g...@github.com:JelleZijlstra/cpython.git (fetch)
> origin g...@github.com:JelleZijlstra/cpython.git (push)
> upstream https://github.com/python/cpython.git (fetch)
> upstream https://github.com/python/cpython.git (push)
> % git checkout main
> Already on 'main'
> Your branch is up to date with 'upstream/main'.
> % git pull
> ... get new changes from upstream
> % git checkout -b myfeature
> ... write my code
> % git push -u origin myfeature
> ... open a pull request
>
> So my local main branch tracks upstream/main (the real CPython repo), not
> origin/main (my fork).

Thanks. Up until the 3.10 split I was tracking main from a development branch in my fork, and trying — lately pretty much unsuccessfully — to drink from the firehose of changes to the virtual machine code. It made sense to me to keep my fork's main up-to-date with upstream/main. Now that I have diverged to follow the 3.10 branch for now, that's less of an issue.

Skip

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/Y6KZJPLQJNGXII7YBHTU7ICESICELL7V/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Can't sync cpython main to my fork
> Your main branch on GitHub has some commits that are not in python/cpython.
> https://github.com/smontanaro/cpython/commits/main

Regarding this: how else am I to keep my fork in sync with python/cpython other than by the occasional pull-upstream/push-origin process? That's what all those merges are. Is that first commit (GitHub (un)Dependabot) the culprit, or are all the other git merge results also problematic?

Skip

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/AGUYBV54ZOBH6LJGABMOTLA4MEAXPWWY/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Can't sync cpython main to my fork
Thanks for the recipe to fix my problem.

> Your main branch on GitHub has some commits that are not in python/cpython.
> https://github.com/smontanaro/cpython/commits/main

Is there a way to easily tell how they differ? My (obvious to me, but wrong) guess was:

git diff upstream/main origin/main

Then I went to GitHub and compared my fork with python/cpython: https://github.com/python/cpython/compare/main...smontanaro:main It appears I might have screwed the pooch by accepting GitHub's recent pull request. I'm just a gitiot. How am I supposed to know not to accept their PRs?

Skip

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/SLTBCXF5S4S54F2VKQJZGOGWYNCR34PE/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Can't sync cpython main to my fork
(Sorry, this is probably not really python-dev material, but I'm stuck trying to bring my fork into sync with python/cpython.) I don't know if I did something to my fork or if the master->main change did something to me, but I am unable to sync my smontanaro/cpython main with the python/cpython main. The dev guide gives this simple recipe:

git checkout main
git pull upstream main
git push origin main

Here's how that goes:

(python39) rvm% git co main
Already on 'main'
Your branch is up to date with 'upstream/main'.
(python39) rvm% git pull upstream main
From git://github.com/python/cpython
 * branch            main       -> FETCH_HEAD
Already up to date.
(python39) rvm% git push origin main
To github.com:smontanaro/cpython.git
 ! [rejected]        main -> main (non-fast-forward)
error: failed to push some refs to 'github.com:smontanaro/cpython.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

I looked at the fast-forward stuff in 'git push --help' but couldn't decipher what it told me, or more importantly, how it related to my problem. It's not clear to me how my local main can be behind smontanaro/cpython:main when all I've done is pull from upstream. I've attached my .git/config file in case that provides clues to the Git aficionados.

Thx... Skip

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/J6GGEKUBMPU3X3WNKUG2XUD3GDV7L2FK/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Keeping Python a Duck Typed Language.
> Practically speaking, one issue I have is how easy it is to write
> isinstance or issubclass checks. It has historically been much more
> difficult to write and maintain a check that something looks like a duck.
>
> `if hasattr(foo, 'close') and hasattr(foo, 'seek') and hasattr(foo, 'read'):`
>
> Just does not roll off the figurative tongue and that is a relatively
> simple example of what is required for a duck check.
>
> To prevent isinstance use when a duck check would be better,

I'm going to chime in briefly then return to lurking on this topic, trying to figure out all the changes to typing while I wasn't paying attention. Back in ancient times I recall "look before you leap" as the description of either of the above styles of checks, no matter which was easier to type. At the time, I thought the general recommendation was to document what attributes you expected objects to provide and just make the relevant unguarded references. I no longer recall what the tongue-in-cheek description of that style was (just "leap"?). Is that simpler usage more akin to classic "duck typing" than always guarding accesses? I assume it will still have a place in the pantheon of Python type variants.

Skip

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/QW7NQ5KTOQLV27MKZO5B3TMSTXIR5MC5/
Code of Conduct: http://python.org/psf/codeofconduct/
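To make the contrast concrete, here is a sketch (the `Readable` protocol and function names are my own, invented for illustration) of the guarded "look before you leap" style, the bare "leap" style, and the modern typing.Protocol spelling of the same duck check:

```python
import io
from typing import Protocol, runtime_checkable


def read_all_lbyl(obj):
    """Look before you leap: guard every attribute before using it."""
    if hasattr(obj, "seek") and hasattr(obj, "read") and hasattr(obj, "close"):
        obj.seek(0)
        data = obj.read()
        obj.close()
        return data
    raise TypeError("object does not quack like a readable file")


def read_all_leap(obj):
    """Classic duck typing: just use the attributes and let any
    AttributeError propagate to the caller."""
    obj.seek(0)
    data = obj.read()
    obj.close()
    return data


@runtime_checkable
class Readable(Protocol):
    """The same duck check spelled as a structural type, usable with
    isinstance() thanks to @runtime_checkable."""
    def seek(self, pos: int) -> int: ...
    def read(self) -> str: ...
    def close(self) -> None: ...
```

An io.StringIO passes all three checks, while an int fails each in its own way: a TypeError from the guarded version, an AttributeError from the bare version, and isinstance() returning False for the protocol.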
[Python-Dev] Re: How about using modern C++ in development of CPython ?
> Perhaps there's some history in the python-dev archives that would inform
> you of previous discussions and help you avoid repeating already-considered
> arguments.

This topic has come up a few times over the years. Maybe it would be worthwhile to have an informational PEP which documents the various arguments pro and con to short-circuit or inform future discussions. I'm not volunteering to write it. Denis, maybe you could make a run at it.

Skip

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/4SEEGT6B553YF73NXKQIT7ZCYRVTS542/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: On the migration from master to main
Can I distract people for a moment to ask a couple procedural questions about this change? I maintain my own fork of https://github.com/python/cpython, but don't yet see a main branch on python/cpython.

- When is the new main branch supposed to appear?
- Once it does, what will I need to do other than to update my fork?
- I have a branch in my fork (register2) which is a branch from my fork's master. Not being a Git whiz, how will I switch so my register2 branch has the new main as its upstream (I think that's the correct word)?
- How long will master be around after the switch before going to that big branch in the sky?

I was able to scrounge up the few commands necessary to make the change to one of my standalone projects and successfully move it from master to main. The several articles I at least peeked at all followed pretty much the same recipe as this one: https://stevenmortimer.com/5-steps-to-change-github-default-branch-from-master-to-main/ None, however, discuss how the change will affect forks. I'm a bit unclear how that part is supposed to work. I thought I might find a process PEP about this change, but saw nothing obvious in PEP 0. I'm sure this is a trivial few git commands for those more familiar with the toolchain than I am, but I see over 18k forks of the repository. I suspect a few people out of that crowd will be in the same boat as me.

Thx, Skip

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/ZPTBTTWIAHTMPLXEMI237LWLBDIPXZDT/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Non-monotonically increasing line numbers in dis.findlinestarts() output
> co_lnotab has had negative deltas since 3.6. Thanks. I'm probably misreading Objects/lnotab_notes.txt. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/ATKSFCDGEXW5G6LADOM2F4SAUVLARHGP/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Non-monotonically increasing line numbers in dis.findlinestarts() output
Consider this little session from the tip of the spear:

>>> sys.version
'3.10.0a6+ (heads/master:0ab152c6b5, Mar 15 2021, 17:24:38) [GCC 10.2.0]'
>>> def while2(a):
...     while a >= 0:
...         a -= 1
...     return a
...
>>> dis.dis(while2)
  2           0 LOAD_FAST                0 (a)
              2 LOAD_CONST               1 (0)
              4 COMPARE_OP               5 (>=)
              6 POP_JUMP_IF_FALSE       24

  3     >>    8 LOAD_FAST                0 (a)
             10 LOAD_CONST               2 (1)
             12 INPLACE_SUBTRACT
             14 STORE_FAST               0 (a)

  2          16 LOAD_FAST                0 (a)
             18 LOAD_CONST               1 (0)
             20 COMPARE_OP               5 (>=)
             22 POP_JUMP_IF_TRUE         8

  4     >>   24 LOAD_FAST                0 (a)
             26 RETURN_VALUE
>>> list(dis.findlinestarts(while2.__code__))
[(0, 2), (8, 3), (16, 2), (24, 4)]

I get that somewhere along the way the compiler has been optimized to duplicate the loop's test so as to eliminate some jumps (perhaps). It's not clear to me that the line number should be "2" in the duplicated test though. Shouldn't it be "3"? I stumbled on this while trying to generate a line number table in my side project register VM. As I understand it, the line number delta in the output table is supposed to always be >= 0. In my code I'm using dis.findlinestarts() to determine the line numbers for each block. Perhaps I should be modifying its results. OTOH, maybe it's a bug. (If that's the consensus, I will create an issue.)

Skip

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/V76BSGEHGHPM3D4TXYTGBXCZ5UCCBOVV/
Code of Conduct: http://python.org/psf/codeofconduct/
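For anyone who wants to poke at this themselves, here is a small sketch (the helper names `line_starts` and `has_negative_delta` are mine, invented for illustration) that pulls the (offset, lineno) pairs out of a function and reports whether any consecutive line numbers decrease. The exact pairs you get will vary with the CPython version:

```python
import dis


def line_starts(func):
    """Return the (offset, lineno) pairs dis records for each new source line."""
    return list(dis.findlinestarts(func.__code__))


def has_negative_delta(pairs):
    """True if any consecutive line numbers decrease (a negative line delta).
    Entries with lineno None (possible on newer CPythons) are skipped."""
    lines = [lineno for _, lineno in pairs if lineno is not None]
    return any(later < earlier for earlier, later in zip(lines, lines[1:]))


def while2(a):
    while a >= 0:
        a -= 1
    return a
```

On the 3.10 alpha shown above, while2 reports [(0, 2), (8, 3), (16, 2), (24, 4)], i.e. a 3-to-2 drop at offset 16, so has_negative_delta() comes back True there.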
[Python-Dev] Re: Python 0.9.1
> In conversation with Dan, I have fixed my conda package (but overwritten the same version). I needed to add this to the build:
>
> # sudo apt-get install gcc-multilib
> CC='gcc -m32' make python

Thanks. That fixes it for me as well. I never even looked at intobject.c, since it compiled out of the box, and didn't dig into it when I saw the error. Looking now, I see a 32-bit assumption:

if (x > 0x7fffffff || x < (double) (long) 0x80000000)
    return err_ovf();

With the -m32 flag, running lib/testall.py runs to completion.

Skip

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/S3SIGIAWOORGVDM6ZOR6UOG3T2RUM5TX/
Code of Conduct: http://python.org/psf/codeofconduct/
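For what it's worth, the guard in that snippet just rejects values outside the signed 32-bit range, which is why the code only behaves with -m32. A quick Python restatement (the helper name is mine, purely for illustration, and it assumes the constants are the 32-bit limits):

```python
def fits_in_int32(x):
    """Mirror the 32-bit overflow guard: a value converted to a C long on a
    32-bit build must land in [-2**31, 2**31 - 1], i.e. from (long)0x80000000
    (which is -2147483648 there) up to 0x7fffffff."""
    return -(2**31) <= x <= 2**31 - 1
```

On a 64-bit build without -m32, (long)0x80000000 is a large positive number instead, so the lower-bound test is nonsense and the check misfires.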
[Python-Dev] Re: Python 0.9.1
> If we can get a clean copy of the original sources I think we should put them > up under the Python org on GitHub for posterity. Did that earlier today: https://github.com/python/pythondotorg/issues/1734 Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/D22Q4RLF5MIGOQG744NVZ5J5D7DBZ4TN/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Python 0.9.1
This is getting a bit more off-topic for python-dev than I'd like. I will make a couple comments though, then hopefully be done with this thread. > The original ones are here: > http://ftp.fi.netbsd.org/pub/misc/archive/alt.sources/volume91/Feb/ > Look at http://ftp.fi.netbsd.org/pub/misc/archive/alt.sources/index.gz > for the associating subjects with file names. As far as I can tell, > they extract flawlessly using unshar. Thanks. Will check them out. > When I see diffs like this (your git vs. the unshar result) I tend to > trust unshar more: ... Well, sure. I was trying to reverse engineer the original shar files from Google's HTML. I was frankly fairly surprised that I got as close to perfection as I did. I realized that Google had mangled Guido's old CWI email, but didn't worry about it. I also saw the TeX macro mangling, but as I wasn't planning to rebuild the documentation, I didn't worry too much about that. I expected to need a bunch of manual patchwork to get back to something that would even compile. It's nice to know that in this case, "the Internet never forgets." Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/J6IOWRUUZ64EHFLGSNBMSNO6RIJEZO22/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Python 0.9.1
> Also mind
> http://www.dalkescientific.com/writings/diary/archive/2009/03/27/python_0_9_1p1.html
> for result comparison.

Thanks, Paul. I had lost track of Andrew. Good to know he's still out there. I wonder why his tar file was never sucked up into the historical releases page. Whew! My stupid little extraction script did a reasonable job. I see plenty of differences, but a cursory examination shows they are only in leading whitespace. Where I translated "\t" to TAB, it seems Andrew used a suitable number of spaces. Python modules/scripts seem more plausibly indented, and the couple I tried worked, so I'm a bit more confident I have things right:

% PYTHONPATH=lib ./src/python
>>> import string
>>> print string.upper('hello world!')
HELLO WORLD!
>>>
% ./src/python lib/fact.py
9
[3, 3, 41, 271]
4096
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]

The tests don't pass though. 1 * 1 raises an integer overflow exception:

>>> 1 * 1
Unhandled exception: run-time error: integer overflow
Stack backtrace (innermost last):
  File "", line 1

I'll let someone figure that out. :-) At any rate, the git repo has been updated.

Skip

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/RPNQWLFJ54QENZZEKTRLSYZXVPDOGWFS/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Python 0.9.1
> If someone knows how to get the original Usenet messages from what Google > published, let me know. Seems the original shar is there buried in a Javascript string toward the end of the file. I think I've got a handle on it, though it will take a Python script to massage back into correct format. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/GIHYWK64MY4TBQA357HOK2K7MG3HZBFN/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Python 0.9.1
> > Wow. Was white-space not significant in this release of Python? I see the
> > lack of indentation in the first Python programs.
>
> Indentation most certainly was significant from day 0. I suspect what
> happened is that these files got busted somehow by the extraction process
> used by Skip or Hiromi.

Yes, that's certainly possible. While it's nice that Google has archived this stuff, their faithfulness to the original formats leaves a bit to be desired (and gmane still doesn't work for me, eliminating that option). Guido's messages are displayed as HTML, and I saw no way to get at the raw Usenet messages. I just copied the shar data and saved the result. It seems clear that tabs were copied as spaces. The Makefile indentation was hosed up. It should have dawned on me that the .py, .c and .h files would be messed up as well. I was only concerned with building the interpreter. If someone knows how to get the original Usenet messages from what Google published, let me know.

Skip

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/WQ4QHOWHQVLCNCJGAWALBSLDYLVHJIEI/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Python 0.9.1
A note to webmas...@python.org from an astute user named Hiromi in Japan* referred us to Guido's shell archives for the 0.9.1 release from 1991. As that wasn't listed in the historical releases README file:

https://legacy.python.org/download/releases/src/README

I pulled the shar files (and a patch), then made a few tweaks to get it to build:

% ./python
>>> print 'hello world!'
hello world!
>>> import sys
>>> dir(sys)
['argv', 'exit', 'modules', 'path', 'ps1', 'ps2', 'stderr', 'stdin', 'stdout']
>>> sys.modules
{'builtin': ; 'sys': ; '__main__': }
>>> sys.exit(0)

I then pushed the result to a Github repo:

https://github.com/smontanaro/python-0.9.1

There is a new directory named "shar" with the original files, a small README file and a compile.patch file between the original code and the runnable code. It was a pleasant diversion for a couple hours. I was tired of shovelling snow anyway... Thank you, Hiromi.

Skip

* Hiromi is bcc'd on this note in case he cares to comment. I didn't want to publish his email beyond the bounds of the webmaster alias without his permission.

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/VZYELIYAQWUHHGIIEPPJFREDX6F24KMN/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Constructing expected_opinfo_* lists in test_dis.py
> The problem is not that dis.get_instructions can't be trusted, but that > the test isn't testing the dis module at all. It is testing whether the > output from the compiler has changed. > A lot of the tests in test_dis do that. Thanks. Perhaps such tests belong in a different test_* module? (I ask this in a rhetorical sense.) I realize that there can not be (nor should be) perfect isolation of test cases so that (for example) test_sys.py includes all tests of sys module functionality. Still, if a fairly large chunk of the contents of test_dis.py don't test dis module functionality (I'm guessing >= 50%), perhaps moving them to test_compiler.py or something similar would be a stronger signal about their intent. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/BZASBG64R2ZBFROEYPEW3GGSPJOQFJT5/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Constructing expected_opinfo_* lists in test_dis.py
Guido> Maybe these lines in test_dis.py? ... Skip> Thanks, I'll take a look. I was expecting there'd be a standalone Skip> script somewhere. Hadn't considered that comments would be hiding Skip> code. Indeed, that did the trick, however... I'm a bit uncomfortable with the methodology. It seems test_dis is using the same method (dis.get_instructions) to both generate the expected output and verify that dis.get_instructions works as expected. For the most part, you see the test case fails, rerun the code to generate the list, substitute, et voila! The test (magically) passes. Somewhere along the way, it seems there should be a way to alert the user that perhaps dis.get_instructions is broken and its output is not to be trusted completely. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/FAI7XYMYO3CGKJDU3WBD2AJ6Z6SEDPYD/ Code of Conduct: http://python.org/psf/codeofconduct/
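To make the circularity concrete, here is a rough sketch (the helper names are mine, invented for illustration) of the round trip described above: dump dis.get_instructions() output as a Python literal in the style of the expected_opinfo_* lists, then "verify" by comparing fresh output against that literal. Right after regeneration such a check can never fail, whether or not dis.get_instructions() is correct:

```python
import dis


def dump_expected(func):
    """Regenerate an expected_opinfo_* style list the way the commented-out
    helper in test_dis.py does: render dis.get_instructions() output as
    Python source text to paste back into the test file."""
    body = ",\n    ".join(str(instr) for instr in dis.get_instructions(func))
    return "expected = [\n    " + body + ",\n]"


def matches_expected(func, expected):
    """The circular check: fresh disassembly compared to the saved literal,
    both produced by the very function under test."""
    return [str(instr) for instr in dis.get_instructions(func)] == expected


def sample(x):
    return x + 1
```

Any systematic bug in dis.get_instructions() would be baked into the "expected" data and then faithfully reproduced at verification time, which is the concern raised in the message.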
[Python-Dev] Re: Constructing expected_opinfo_* lists in test_dis.py
> Maybe these lines in test_dis.py? > ``` > #print('expected_opinfo_jumpy = [\n ', > #',\n '.join(map(str, _instructions)), ',\n]', sep='') > ``` Thanks, I'll take a look. I was expecting there'd be a standalone script somewhere. Hadn't considered that comments would be hiding code. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/W7YPHZSIZHZIV7YBVFEJNT6IHCB6L4VW/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Constructing expected_opinfo_* lists in test_dis.py
I'm still messing around with my register VM stuff (off-and-on). I'm trying to adjust to some changes made a while ago, particularly (but probably not exclusively) after RERAISE acquired an argument. As a result, at least the expected_opinfo_jumpy list changed in a substantial way. I can manually work my way through it, but it sorta seems like someone might have used a script to translate the output of dis.dis(jumpy) into this list. Am I mistaken about this? Thx, Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/M6UGHHF42MR3QLR634M2JZA2XNIKMHZQ/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Drop Solaris, OpenSolaris, Illumos and OpenIndiana support in Python
On Thu, Oct 29, 2020, 6:32 PM Gregory P. Smith wrote: > I agree, remove Solaris support. Nobody willing to contribute seems > interested. > *sniff* I spent a lot of professional time in front of SunOS and Solaris screens. But yes, I agree. It seems time to give Solaris the boot. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/Q7KZHO74XOI47ML7FQ5B7NWBQRQKWBYZ/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] f_localsplus[0] == NULL in super_init_without_args()
(I'm far from certain this is the correct place for this message. Maybe I should have opened a case on bpo instead?) I got far behind on my register instruction set stuff and in the interim the ground shifted underneath me. I'm still working to try and get the test suite to pass (modulo test_ssl, which I expect to fail on Ubuntu 20.04 for the time being). In trying to track down where my code differs from the main/3.10 branch, I was looking at the super_init_without_args() function in typeobject.c. I'm puzzled by this chunk of code near the top:

PyObject *obj = f->f_localsplus[0];
Py_ssize_t i, n;
if (obj == NULL && co->co_cell2arg) {
    /* The first argument might be a cell. */
    n = PyTuple_GET_SIZE(co->co_cellvars);
    for (i = 0; i < n; i++) {
        if (co->co_cell2arg[i] == 0) {
            PyObject *cell = f->f_localsplus[co->co_nlocals + i];
            assert(PyCell_Check(cell));
            obj = PyCell_GET(cell);
            break;
        }
    }
}

More specifically, the test for f->f_localsplus[0] being NULL seems odd. I can understand that there might not be any local variables, but there is no test for co->co_nlocals == 0. Since we don't know how many locals there might be, if there were none, wouldn't a reference to a cell variable or free variable occupy the first slot (assuming there are any)? I guess I'm confused about what the obj == NULL term in the if statement's expression is doing for us. On a (maybe not) related note, there is a comment further down in super_init():

/* Call super(), without args -- fill in from __class__
   and first local variable on the stack. */

I'm not seeing where the first local variable on the stack is used to fill anything in, certainly not within the block guarded by the type == NULL expression.
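As I understand it (and this is my reading, not an authoritative one), the obj == NULL case arises when the method's first argument is itself captured by an inner function: the compiler then stores it in a cell, the fast-local slot f_localsplus[0] is NULL, and the co_cell2arg loop digs self out of the cell. A small Python illustration of exactly that situation (class names invented for the example):

```python
class Base:
    def greet(self):
        return "base"


class Child(Base):
    def greet(self):
        # `self` is captured by `inner`, so the compiler stores it in a
        # cell rather than a plain fast local; zero-argument super() then
        # has to recover `self` via the cell -- the co_cell2arg path in
        # super_init_without_args().
        def inner():
            return self
        return "child+" + super().greet() + "/" + type(inner()).__name__
```

Zero-arg super() still works in Child.greet even though self lives in a cell, which is the behavior that NULL check preserves.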
Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/FOCMYF53HVGTCE4GA3BFVA6ASZEQ2THF/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Virtual machine bleeds into generator implementation?
> > Thanks for the replies. I will cook up some private API in my cpython > fork. Whether or not my new vm ever sees the light of day, I think it > would be worthwhile to consider a proper API (even a _PyEval macro or > two) for the little dance the two subsystems do. > I committed a change to my fork: https://github.com/smontanaro/cpython/commit/305758a42ec92dcd1d0a181f454af63b5741da5d This moves direct stack manipulation out of genobject.c into ceval.c and allows me to work on a non-stack way to deal with these tasks (note all the calls to Py_FatalError in the CO_REGISTER branches). I am specifically not holding this up as a proposal for how to do this (I am largely ignorant of many of the internal or CPython-specific aspects of the C API). Still, the tests pass and I can start to address those fatal errors. Skip . ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/U4YAIEGPUOX4X67GC5GWEE3PMHEVCKIR/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Virtual machine bleeds into generator implementation?
> > I think it's worse than this though, as it seems that in gen_send_ex()
> > it actually pushes a value onto the stack. That can't be solved by
> > simply adding a state attribute to the generator object struct.
>
> At the higher level, "it doesn't push value on stack", it "sets value
> of the yield operator to return".

Potatoes, potahtoes. :-) The current implementation "sets the value of the yield operator to return" by pushing a value onto the stack:

/* Push arg onto the frame's value stack */
result = arg ? arg : Py_None;
Py_INCREF(result);
*(f->f_stacktop++) = result;

Thanks for the replies. I will cook up some private API in my cpython fork. Whether or not my new vm ever sees the light of day, I think it would be worthwhile to consider a proper API (even a _PyEval macro or two) for the little dance the two subsystems do.

Skip

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/NR66U3TD3BH6K7CQTA4B6HOLOT3KP3VF/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Virtual machine bleeds into generator implementation?
This is more an observation and question than anything else, but perhaps it will stimulate some ideas from the experts. Consider this trivial generator function:

def gen(a):
    yield a

When the YIELD_VALUE instruction is executed, it executes (in the non-async case):

retval = POP();
f->f_stacktop = stack_pointer;
goto exiting;

This is fine as far as it goes. However, execution eventually leads to Objects/genobject.c where we hit this code (I think after falling off the YIELD_VALUE instruction, but perhaps virtual machine execution reaches RETURN_VALUE):

/* If the generator just returned (as opposed to yielding), signal
 * that the generator is exhausted. */
if (result && f->f_stacktop == NULL) {

There are several other references to f->f_stacktop in genobject.c. I've not yet investigated all of them. As I'm working on a register-based virtual machine implementation, I don't fiddle with the stack at all, so it's a bit problematic that the generator implementation is so intimate with the stack. As this is an area of the core which is completely new to me, I wonder if someone can suggest alternate ways of achieving the same effect without relying on the state of the stack. It seems to me that from within PyEval_EvalFrameDefault the implementations of relevant instructions could reference the generator object via f->f_gen and call some (new?) private API on generators which toggles the relevant bit of state in the generator. I think it's worse than this though, as it seems that in gen_send_ex() it actually pushes a value onto the stack. That can't be solved by simply adding a state attribute to the generator object struct.

Skip

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/Q7JIWXV7O5FCA4A4TVF4RGOMAAA5EJRO/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Do we need port some extension modules to the modules directory?
> I notice some modules not in modules directory (for example: _warnings, marshal in python directory). Do we need port those modules to modules directory?

I strongly suspect the answer is "no." Modules which aren't in the Modules directory are built directly into the Python executable. Using your example of the _warnings module, note that in Makefile.pre.in Python/_warnings.o is listed in the PYTHON_OBJS list (as are Python/sysmodule.o and Python/marshal.o). This is evidence they are built directly into the interpreter itself. Another is that at runtime those modules have no __file__ attribute:

>>> import marshal
>>> marshal.__file__
Traceback (most recent call last):
  File "", line 1, in
AttributeError: module 'marshal' has no attribute '__file__'
>>> import sys
>>> sys.__file__
Traceback (most recent call last):
  File "", line 1, in
AttributeError: module 'sys' has no attribute '__file__'
>>> import _warnings
>>> _warnings.__file__
Traceback (most recent call last):
  File "", line 1, in
AttributeError: module '_warnings' has no attribute '__file__'

Skip

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/U4Q622G32XJGIXUMIW3V7QPKMHK7PEEV/
Code of Conduct: http://python.org/psf/codeofconduct/
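A third piece of evidence, checkable programmatically: modules compiled into the interpreter binary are listed in sys.builtin_module_names. A small sketch (the helper names are mine, for illustration):

```python
import sys


def is_builtin(name):
    """True if the module is compiled into the interpreter binary
    (linked in via PYTHON_OBJS rather than built from Modules/)."""
    return name in sys.builtin_module_names


def has_file_attr(name):
    """Builtin modules carry no __file__ attribute, unlike modules
    loaded from .py or extension files on disk."""
    return hasattr(__import__(name), "__file__")
```

For sys, marshal and _warnings, is_builtin() is True and has_file_attr() is False, matching the tracebacks above.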
[Python-Dev] Re: How to enable tracemalloc for the test suite?
> It seems like your Python changes use Py_False "somewhere" without > Py_INCREF(Py_False). > Maybe it's COMPARE_OP_REG() which calls SETLOCAL(dst, False). Yes, this was the problem. Thanks for the fix. Too much blind adherence on my part to the existing COMPARE_OP logic. I've even written (relatively speaking) tomes about it in both my in-progress PEP as well as in various comments throughout the code. I don't think I had all that sorted out in my mind before implementing the first few instructions. Fortunately, I'm not too far into implementing the actual instructions. I should be able to easily go back and desk check the others. > Replacing stack-based bytecode with register-based bytecode requires > to rethink the lifetime of registers... I had a hard time to fix my > old "registervm" project to fix the register lifetime: I added > CLEAR_REG bytecode to explicitly clear a register. Using a stack, all > "CLEAR_REG" operation are implicit. You have to make them explicit. > Hopefully, a compiler can easily reuse registers and remove most > CLEAR_REG. I'm trying it the simplest way I can think of. Registers are exactly like local variables, so SETLOCAL Py_XDECREFs whatever is already there before overwriting it with a new value. At the end of _PyEval_EvalFrameDefault if the code object's co_flags includes the (new) CO_REGISTER flag, it loops over the stack/register space calling Py_CLEAR. The stack/register space is also Py_CLEAR'd when the frame is first allocated. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/OD2ZNQRVDN652JZAPPFYJV67KRXHIMTH/ Code of Conduct: http://python.org/psf/codeofconduct/
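The SETLOCAL discipline described above (release whatever the register already holds before overwriting it, then clear everything when the frame dies) can be sketched with a toy refcounted object in pure Python. Obj, set_local, and clear_frame are illustrative names for this sketch, not CPython's:

```python
# Toy model of the SETLOCAL pattern: storing into a register releases
# whatever it previously held (the Py_XDECREF step), and tearing down
# the frame clears every register.
class Obj:
    def __init__(self):
        self.refs = 1          # born owned by whoever created it

def set_local(regs, i, value):
    old = regs[i]
    regs[i] = value            # store first ...
    if old is not None:
        old.refs -= 1          # ... then release the old occupant

def clear_frame(regs):
    for i in range(len(regs)):
        set_local(regs, i, None)

a, b = Obj(), Obj()
regs = [None, None]
set_local(regs, 0, a)
set_local(regs, 0, b)          # overwriting register 0 releases a
print(a.refs, b.refs)          # 0 1
clear_frame(regs)
print(b.refs)                  # 0
```

The store-then-release ordering matters for the same reason it does in CPython: releasing first can run destructors that observe the register still pointing at a dead object.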
[Python-Dev] Re: How to enable tracemalloc for the test suite?
Victor> I wrote the feature (both tracemalloc and query tracemalloc when a
Victor> buffer overflow is detected), so I should be able to help you ;-)

Yes, I thought you might. :-) I've attached the output of a more complete run. The command is

    % PYTHONTRACEMALLOC=5 ./python ./Tools/scripts/run_tests.py -R 5:50:reflog.txt test_rattlesnake

where test_rattlesnake.py has been cut down to a single unit test, which presumably is the one which exercises the problematic code. I don't get the error about writing off either end of the buffer unless I set the second arg pretty high (it succeeds at 40, fails at 45).

I'll also quote one part of Tim's response:

Tim> To my eyes, you left out the most important part ;-) A traceback
Tim> showing who made the fatal free() call to begin with.

Which is correct as far as that goes, but I hadn't yet given up all hope of figuring things out. I was more concerned with why I couldn't (and still can't) get tracemalloc to sing and dance. :-) I thought setting PYTHONTRACEMALLOC should provoke some useful output, but I was confused into thinking I was (am?) still missing something because it continued to produce this message:

    Enable tracemalloc to get the memory block allocation traceback

which suggests to me tracemalloc still isn't enabled. That's emitted from Modules/_tracemalloc.c and seems to be properly protected:

    if (!_Py_tracemalloc_config.tracing) {
        PUTS(fd, "Enable tracemalloc to get the memory block "
                 "allocation traceback\n\n");
        return;
    }

so I think there is still more to do. I was worried enough that I might have misspelled the environment variable name that I ran again after copying it from the documentation. No such Doh! moment for me, though I suspect it might be coming. :-/

FWIW, the register branch of my CPython fork:

    https://github.com/smontanaro/cpython/tree/register

is what I'm working with at the moment. Current master has been merged to it. It should be exactly what is failing for me.
(Note that I'm not asking for help there, just pointing out for the curious where my busted code is.) Thanks for both of your responses. Skip typescript Description: Binary data ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/BO5672ZXBFVFPEHWUBQMNAZDTEC6XT54/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] How to enable tracemalloc for the test suite?
I've got a memory issue in my modified Python interpreter I'm trying to debug. Output at the end of the problematic unit test looks like this:

    ...
    == Tests result: FAILURE then SUCCESS ==
    1 test OK.
    1 re-run test: test_rattlesnake
    Total duration: 2.9 sec
    Tests result: FAILURE then SUCCESS
    Debug memory block at address p=0x55a227969080: API ''
        0 bytes originally requested
        The 7 pad bytes at p-7 are not all FORBIDDENBYTE (0xfd):
            at p-7: 0x00 *** OUCH
            at p-6: 0x00 *** OUCH
            at p-5: 0x00 *** OUCH
            at p-4: 0x00 *** OUCH
            at p-3: 0x00 *** OUCH
            at p-2: 0x00 *** OUCH
            at p-1: 0x00 *** OUCH
        Because memory is corrupted at the start, the count of bytes requested
        may be bogus, and checking the trailing pad bytes may segfault.
        The 8 pad bytes at tail=0x55a227969080 are not all FORBIDDENBYTE (0xfd):
            at tail+0: 0x00 *** OUCH
            at tail+1: 0x00 *** OUCH
            at tail+2: 0x00 *** OUCH
            at tail+3: 0x00 *** OUCH
            at tail+4: 0x00 *** OUCH
            at tail+5: 0x00 *** OUCH
            at tail+6: 0x00 *** OUCH
            at tail+7: 0x00 *** OUCH
        The block was made by call #0 to debug malloc/realloc.
    Enable tracemalloc to get the memory block allocation traceback

Looking at the tracemalloc module docs and trying various command line args (-X tracemalloc=5) or environment variables (PYTHONTRACEMALLOC=5), I'm unable to provoke any different output. The module docs give a tantalizing suggestion ("Example of output of the Python test suite:"), but I see no command line args related to running the test suite with tracemalloc enabled. Pointers appreciated.

Skip

Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/4WNQ5EERCQST4AIL6HLV4GDGM5LAVV6Q/
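For completeness, tracemalloc can also be enabled programmatically rather than via -X or the environment variable, which is a quick way to confirm what traced output looks like when tracing really is on:

```python
import tracemalloc

# Start tracing with up to 5 frames kept per allocation traceback --
# the same depth PYTHONTRACEMALLOC=5 asks for.
tracemalloc.start(5)
data = [bytes(1000) for _ in range(100)]   # some allocations to observe
snapshot = tracemalloc.take_snapshot()
top = snapshot.statistics('lineno')[0]     # biggest allocator, by line
print(tracemalloc.is_tracing())            # True while tracing is on
tracemalloc.stop()
```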
[Python-Dev] Re: Changing layout of f_localsplus in frame objects
> So far, my second go-round is proceeding better (fingers crossed). I > have added a new slot to the _frame struct (f_cellvars), initialized > once at creation, then referenced elsewhere. I'm also rerunning the > test suite more frequently. Once I've tweaked everything, all that > will remain (in theory) to effect my change is to tweak the > initialization of f_valuestack and f_cellvars. (If an extra slot is > determined to be too space-expensive, a macro or inline function would > suffice.) Found and fixed the last holdout (a SETLOCAL call in ceval.c). For the curious, I've pushed the change to my fork: https://github.com/smontanaro/cpython/commit/318f16ff76e91e665b779e3b478a4406d0a9c0ec As I expected, almost all the changes were in frameobject.c. The other changes were mostly just to remove no longer needed pointer arithmetic. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/US7OA44U7I6VTOZHUPCAKSXENLP4KJD3/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Changing layout of f_localsplus in frame objects
> ... I would expect the FastToLocals and LocalsToFast functions to require > some non-trivial adjustments ... Thanks, Nick. I'm making precisely that change in a few places in frameobject.c. One loop for locals, another for cells & frees, a third for the stack (where the active stack is involved). So far, my second go-round is proceeding better (fingers crossed). I have added a new slot to the _frame struct (f_cellvars), initialized once at creation, then referenced elsewhere. I'm also rerunning the test suite more frequently. Once I've tweaked everything, all that will remain (in theory) to effect my change is to tweak the initialization of f_valuestack and f_cellvars. (If an extra slot is determined to be too space-expensive, a macro or inline function would suffice.) Aside: To speed up my testing it would be kinda nice if you could tell regrtest, "Just run X% of the tests at random plus any which failed on the previous run," but it's not horrible as-is. If it doesn't already exist (implying I didn't just miss it), perhaps it could be a good "easy" feature request for core dev novitiates. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/ON46TXJLMXLYTCRIQXRNNNUWU6HDB2YQ/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Changing layout of f_localsplus in frame objects
(Apologies if you're seeing this twice. I first posted to the discourse instance.) I first worked on a register-based virtual machine in the 1.5.2 timeframe. That was before PEP 227 (closures) landed. Prior to that, local variables and the stack were contiguous in the f_localsplus array. Based on a comment at the time from Tim Peters (which I recall but can no longer find), the number of required registers won’t be any greater than the maximum extent of the stack. (It makes sense, and worked for me at the time.) That allowed me to repurpose the stack space as my registers and treat the full f_localsplus array as a single large “register file,” eliminating LOAD_FAST_REG and STORE_FAST_REG instructions completely, which I believe was a major win. I want to rearrange the current f_localsplus to make locals and stack neighbors again. My first look at the code suggested there is no good reason it can’t be done, but figured Jeremy Hylton must have had a good reason to prefer the current layout which places cells and frees in the middle. While the necessary changes didn’t look extensive, they did look a bit tedious to get right, which I have confirmed. My first attempt to reorganize f_localsplus from locals/cells/frees/stack to locals/stack/cells/frees has been an abysmal failure. I’ve found a couple mistakes and corrected them. Implementing those corrections caused the types of failures to change (convincing me they were necessary), but did not eliminate them entirely (so, necessary, but not sufficient). I’ve clearly missed something. I’m also fairly ignorant about recent changes to the language (big understatement), so thought that before going any further, I would see if anyone with better current knowledge of frame objects knew of a reason why my desired layout change wouldn’t work. 
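The regions being shuffled in f_localsplus correspond to code-object fields visible from Python, including the maximum stack extent mentioned above. A small illustration:

```python
# The regions of f_localsplus map onto code-object fields visible from
# Python: plain locals, cells, frees, and the compiler's maximum stack depth.
def outer():
    x = 1              # becomes a cell: captured by inner
    def inner():
        return x       # x is a free variable here
    y = 2              # a plain local
    return inner

code = outer.__code__
print(code.co_cellvars)              # cells: ('x',)
print('x' in code.co_varnames)       # False: cells are not plain locals
print(code.co_stacksize)             # max stack extent the compiler computed
print(outer().__code__.co_freevars)  # frees of inner: ('x',)
```

co_stacksize is the quantity Tim's observation bounds: a register file reusing the stack space never needs more registers than that.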
Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/NWQKIDC2IZXJZFQPJDYAWIWNKYUHCSH3/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: How to respond to repeated bad ideas
> Atm we don't have an index of ideas, apart from pep 3099, and I'm not sure we > can make one (can we?), so I do not see a way to prevent this from happening. Maybe an informational PEP which briefly lists rejected ideas? Presumably, they'd normally come up in python-ideas, python-list or python-dev. Each rejected idea could link to one or more relevant threads in one of those lists. Not sure who should be the gatemasters for new bad ideas. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/NGUVNPQZTKMSZQN5O65WSQDGEX5WIEIH/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] extern "C" { ... } in Include/cpython/*.h
(Apologies. Not sure where to ask this, and I'm not much of a C++ programmer. Maybe I should have just added a comment to the still-open issue.)

I just noticed that Nick migrated the guts of Include/frameobject.h to Include/cpython/frameobject.h. It's not clear to me that the latter should be #include'd directly from anywhere other than Include/frameobject.h. If that's the case, does the extern "C" stuff still need to be replicated in the lower-level file? Won't the scope of the extern "C" block in Include/frameobject.h be active at the lower level? Whatever the correct answer is, I suspect the same constraints should apply to all Include/cpython/*.h files.

Skip

Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/CQATW7HLURFFZDCEZRKFWHPSEPO7SDSZ/
[Python-Dev] Re: Python-dev mailing list archives earlier than late April 1999?
> Also note that comp.lang.python and hence python-list from late March > 1994 onward is archived at > <https://groups.google.com/forum/#!forum/comp.lang.python> > Thanks. During my first attempts at applying date range filters on Google Groups, everything came up empty, even for later dates (1999-ish), so I thought perhaps they were gone. I am now getting results. I don't know if I originally hit GG while it was napping or something, but I am getting useful results now. (1/1/94-1/1/95 yields results, while 1/1/93-1/1/94 comes up empty.) This appears to be the first message, on that most persistent of mailing list/newsgroup topics, "Am I First?": https://groups.google.com/forum/#!topic/comp.lang.python/kCAxphFFnn4 Post #2 had more meat (Python FAQ): https://groups.google.com/forum/#!topic/comp.lang.python/Nkrro8yfZoo Thanks for the help, folks. I believe I am off and running... Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/WKINWZ4UJVC2LRCPLK7TQZBAGS3JMRKW/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Python-dev mailing list archives earlier than late April 1999?
Thanks. Mirroring to my laptop now. Will discuss how to preserve it more permanently with postmaster. Skip On Mon, Jan 6, 2020 at 4:43 PM Guido van Rossum wrote: > Via Twitter I got > ftp://ftp.ntua.gr/mirror/python/search/hypermail/python-recent/, which > has earlier python-list archives, ending in April 1995. Not exactly what > you were looking for but probably also worth saving before that archive > dies. > > On Mon, Jan 6, 2020 at 11:56 AM Skip Montanaro > wrote: > >> Thanks all. I just pinged Ken and am going to rummage around >> mail.python.org for a bit. >> >> Skip >> >> On Mon, Jan 6, 2020 at 12:10 PM Barry Warsaw wrote: >> >>> comp.lang.python and thus python-list definitely predate Mailman. In >>> fact, my earliest Python story involves seeing c.l.py creation, >>> browsing for a bit (because who doesn’t love a cool little language that >>> just a handful of enthusiasts are raving about?), and finding it full only >>> of Monty Python jokes. Which of course are great, but why in comp.lang?! >>> Thanks, but I’ll stick with Perl. :) >>> >>> Anyway, python-list and some of the other early lists I can’t find >>> details on right now were originally hosted on Majorodomo. Given that the >>> Mailman archives only go back to 1999, and Guido (and thus most of the >>> Python development infrastructure) had already moved to CNRI by then, it’s >>> possible that the original Majordomo archives were never migrated into >>> Mailman. I just don’t remember and it would take more archive spelunking >>> than I want to do right now. Possibly Ken Manheimer would remember more >>> details. >>> >>> I kind of doubt those original Majordomo archives have survived the >>> various hosting migrations since then, but maybe they are laying around on >>> mail.python.org some place? 
>>> >>> -Barry >>> >>> > On Jan 6, 2020, at 06:48, Skip Montanaro >>> wrote: >>> > >>> > On Wed, Jan 1, 2020 at 7:25 PM Mark Sapiro wrote: >>> > On 1/1/20 11:22 AM, Barry Warsaw wrote: >>> > > I am looking at the MM2 mailing list creation confirmation messages >>> in my personal archives. Both d...@python.org (at 09:49 server local >>> time?) and python-dev@python.org (at 14:17) were created on April 19, >>> 1999. I don’t remember what happened to dev@ but based on the >>> timeline, I’m retroguessing that we created dev@ first, then quickly >>> rethought the name, created python-dev@ and retired dev@. >>> > >>> > Just to provide some closure here, the pipermail archive for python-dev >>> > goes back to April 21, 1999. There is one, possibly spurious message >>> > from some other list dated March 16, 1995 from Linus Torvalds. >>> > >>> > Aside from this one message and as far as I can tell, all the other >>> > messages from April 21 forward are in the current Hyperkitty archive. >>> > >>> > (Apologies for letting this drop for a couple days.) >>> > >>> > I'm still befuddled. When I look at the MM2 archive for python-list, >>> it also only goes back to Feb 1999. Surely I'm missing something. Maybe GNU >>> Mailman itself isn't much older than 1999. Perhaps python-dev content was >>> embedded in python-list/comp.lang.python before Apr 1999, but we were >>> certainly discussing development of and in Python well before 1999. Where >>> did all the archives go? Maybe it's just my failing memory. I can accept >>> that. If you look at the filenames of the earliest python-list and >>> python-dev messages in the archives: >>> > >>> > • New (?) suggestion to solve "assignment-in-while" desire >>> (python-list - Feb 1999 - 005101.html) >>> > • ZServer 1.0b1: spurious colon in HTTP response line >>> (python-dev - Apr 1999 - 095103.html) >>> > you get the impression that there must have been earlier messages. >>> Wouldn't new lists simply start with message 00.html by default? 
The >>> first message in the csv mailing list is >>> https://mail.python.org/pipermail/csv/2003-January/00.html. >>> > >>> > Perhaps what I really pine for are comp.lang.python archives? GMane >>> is gone. Google Groups seems to have nothing. They must be someplace. I've >>> heard the Internet never forgets. Even if my personal quest (old messages >>> about Rattlesnake and other alternative virtual machine projects) fails to >>>
[Python-Dev] Re: Python-dev mailing list archives earlier than late April 1999?
Thanks all. I just pinged Ken and am going to rummage around mail.python.org for a bit. Skip On Mon, Jan 6, 2020 at 12:10 PM Barry Warsaw wrote: > comp.lang.python and thus python-list definitely predate Mailman. In > fact, my earliest Python story involves seeing c.l.py creation, browsing > for a bit (because who doesn’t love a cool little language that just a > handful of enthusiasts are raving about?), and finding it full only of > Monty Python jokes. Which of course are great, but why in comp.lang?! > Thanks, but I’ll stick with Perl. :) > > Anyway, python-list and some of the other early lists I can’t find details > on right now were originally hosted on Majorodomo. Given that the Mailman > archives only go back to 1999, and Guido (and thus most of the Python > development infrastructure) had already moved to CNRI by then, it’s > possible that the original Majordomo archives were never migrated into > Mailman. I just don’t remember and it would take more archive spelunking > than I want to do right now. Possibly Ken Manheimer would remember more > details. > > I kind of doubt those original Majordomo archives have survived the > various hosting migrations since then, but maybe they are laying around on > mail.python.org some place? > > -Barry > > > On Jan 6, 2020, at 06:48, Skip Montanaro > wrote: > > > > On Wed, Jan 1, 2020 at 7:25 PM Mark Sapiro wrote: > > On 1/1/20 11:22 AM, Barry Warsaw wrote: > > > I am looking at the MM2 mailing list creation confirmation messages in > my personal archives. Both d...@python.org (at 09:49 server local time?) > and python-dev@python.org (at 14:17) were created on April 19, 1999. I > don’t remember what happened to dev@ but based on the timeline, I’m > retroguessing that we created dev@ first, then quickly rethought the > name, created python-dev@ and retired dev@. > > > > Just to provide some closure here, the pipermail archive for python-dev > > goes back to April 21, 1999. 
There is one, possibly spurious message > > from some other list dated March 16, 1995 from Linus Torvalds. > > > > Aside from this one message and as far as I can tell, all the other > > messages from April 21 forward are in the current Hyperkitty archive. > > > > (Apologies for letting this drop for a couple days.) > > > > I'm still befuddled. When I look at the MM2 archive for python-list, it > also only goes back to Feb 1999. Surely I'm missing something. Maybe GNU > Mailman itself isn't much older than 1999. Perhaps python-dev content was > embedded in python-list/comp.lang.python before Apr 1999, but we were > certainly discussing development of and in Python well before 1999. Where > did all the archives go? Maybe it's just my failing memory. I can accept > that. If you look at the filenames of the earliest python-list and > python-dev messages in the archives: > > > > • New (?) suggestion to solve "assignment-in-while" desire > (python-list - Feb 1999 - 005101.html) > > • ZServer 1.0b1: spurious colon in HTTP response line (python-dev > - Apr 1999 - 095103.html) > > you get the impression that there must have been earlier messages. > Wouldn't new lists simply start with message 00.html by default? The > first message in the csv mailing list is > https://mail.python.org/pipermail/csv/2003-January/00.html. > > > > Perhaps what I really pine for are comp.lang.python archives? GMane is > gone. Google Groups seems to have nothing. They must be someplace. I've > heard the Internet never forgets. Even if my personal quest (old messages > about Rattlesnake and other alternative virtual machine projects) fails to > bear fruit, I suspect there is value in maintaining the history of the > Python language. > > > > Thx again... 
> > > > Skip > > > > ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/KNITMEVRZZJY2DHYJBBQPCWKCP2DX7JV/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Python-dev mailing list archives earlier than late April 1999?
On Wed, Jan 1, 2020 at 7:25 PM Mark Sapiro wrote: > On 1/1/20 11:22 AM, Barry Warsaw wrote: > > I am looking at the MM2 mailing list creation confirmation messages in > my personal archives. Both d...@python.org (at 09:49 server local time?) > and python-dev@python.org (at 14:17) were created on April 19, 1999. I > don’t remember what happened to dev@ but based on the timeline, I’m > retroguessing that we created dev@ first, then quickly rethought the > name, created python-dev@ and retired dev@. > > Just to provide some closure here, the pipermail archive for python-dev > goes back to April 21, 1999. There is one, possibly spurious message > from some other list dated March 16, 1995 from Linus Torvalds. > > Aside from this one message and as far as I can tell, all the other > messages from April 21 forward are in the current Hyperkitty archive. (Apologies for letting this drop for a couple days.) I'm still befuddled. When I look at the MM2 archive for python-list, it also only goes back to Feb 1999. Surely I'm missing something. Maybe GNU Mailman itself isn't much older than 1999 <https://mail.python.org/pipermail/mailman-announce/1999-July/04.html>. Perhaps python-dev content was embedded in python-list/comp.lang.python before Apr 1999, but we were certainly discussing development of and in Python well before 1999. Where did all the archives go? Maybe it's just my failing memory. I can accept that. If you look at the filenames of the earliest python-list and python-dev messages in the archives: - New (?) suggestion to solve "assignment-in-while" desire <https://mail.python.org/pipermail/python-list/1999-February/005101.html> (python-list - Feb 1999 - 005101.html) - ZServer 1.0b1: spurious colon in HTTP response line <https://mail.python.org/pipermail/python-dev/1999-April/095103.html> (python-dev - Apr 1999 - 095103.html) you get the impression that there must have been earlier messages. Wouldn't new lists simply start with message 00.html by default? 
The first message in the csv mailing list is https://mail.python.org/pipermail/csv/2003-January/00.html. Perhaps what I really pine for are comp.lang.python archives? GMane is gone. Google Groups seems to have nothing. They must be someplace. I've heard the Internet never forgets. Even if my personal quest (old messages about Rattlesnake and other alternative virtual machine projects) fails to bear fruit, I suspect there is value in maintaining the history of the Python language. Thx again... Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/W7UHTMO5ZCZGITDME74RS3JFGNIE3W3Y/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Python-dev mailing list archives earlier than late April 1999?
I could swear python-dev was older than late April 1999, yet that's as far back as the MM3 archives go. As evidence, here's an email from Jack Jansen on 28 April 1999 which was a reply to an earlier message not present in the current archive: https://mail.python.org/archives/list/python-dev@python.org/thread/EMH62JYFLIL5FJ3EPOKX3NKCPCO3TCPH/ Any pointers to older messages appreciated... Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/WQGRVOSTETPCN3CK4PMS5XBN67VQRLLN/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Macros instead of inline functions?
This is my last post on this, at least as far as specific usage instances are concerned. See my question about PEP 7 below. If that is a discussion people think worthwhile, please start a new thread.

> if (!VISIT(...)) {
>     return 0;
> }
> if (!VISIT(...)) {
>     return 0;
> }
> if (!VISIT(...)) {
>     return 0;
> }
>
> instead of just
>
> VISIT(...);
> VISIT(...);
> VISIT(...);

That seems easily solved with VISIT-as-macro calling a _VISIT-as-inline-function. That pattern exists elsewhere in the code, in the INCREF/DECREF stuff, for example. The advantage with inline functions (where you can use them) is that the debugger can work with them. They are also more readable in my mind (no protective parens required around expressions/arguments, no do { ... } while (0) business, no intrusive backslashification of every line) and they probably play nicer with editors (think Emacs speedbar or tags files - not sure if etags groks macros). In any case, I was just somewhat surprised to see relatively new code using macros where it seemed inline functions would have worked as well or better.

My more general question stands. Should PEP 7 say something about the two? (Someone mentioned constants. Should they be preferred over macros?)

Skip

Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/IKT2BXHOIGPUE7Y7JNYW5M7QGYMPYZQB/
[Python-Dev] Re: Macros instead of inline functions?
> > I don't think stable code which uses macros should be changed (though
> > I see the INCREF/DECREF macros just call private inline functions, so
> > some conversion has clearly been done). Still, in new code, shouldn't
> > the use of macros for more than trivial use cases (constant defs,
> > simple one-liners) be discouraged at this point?
>
> You can't goto from the inline function.

Thanks, that is true, and if needed would add another case where macros are preferred over inline functions (as would use of the cpp token pasting operator, ##). I see relatively few goto-using macros spread across a number of source files, but the examples I highlighted in Python/ast_opt.c use return, not goto. It seems they could easily be crafted as inline functions which return 0 (forcing early return from the enclosing function) or 1 (equivalent to the current fall-through). Still, I'm not terribly worried about existing usage, especially in stable, well-tested code. I guess I'm more wondering if a preference for inline functions shouldn't be mentioned in PEP 7 for future authors.

Skip

Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/H6XQFF7RKVSLDTE64SS6D352HDLDMVCC/
[Python-Dev] Macros instead of inline functions?
As I wander around the code base, I keep seeing macro definitions in the C code. For example, there are four CALL* macros defined in Python/ast_opt.c which contain not entirely trivial bits of syntax. That code is from 2017 (as compared to, say, Modules/audioop.c, which first saw the light of day in 1992). I see the inline keyword used unconditionally in many places. I don't think stable code which uses macros should be changed (though I see the INCREF/DECREF macros just call private inline functions, so some conversion has clearly been done). Still, in new code, shouldn't the use of macros for more than trivial use cases (constant defs, simple one-liners) be discouraged at this point?

Skip

Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/AVF6W3PMCAQK73NXOXHMHNW2KP7FJOIJ/
[Python-Dev] Re: Deprecating the "u" string literal prefix
Guido> I think it’s too soon to worry about this. Simon> +100 Ditto. Besides, isn't support for u"..." just a variable and a couple tests in the earliest phase of compilation? If things are going to get deprecated/removed, I'd prefer the focus be placed on those bits which present a significant support burden. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/WFB7KNWCFK3U2HD6I5UCUT6RQFCB7BMJ/ Code of Conduct: http://python.org/psf/codeofconduct/
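Indeed, at runtime the prefix is a no-op; a u-prefixed literal produces the same str value as a bare one:

```python
# The "u" prefix changes nothing at runtime: same type, same value as a
# bare string literal.  It survives purely for 2/3 compatibility.
s = u"héllo"
print(type(s) is str)   # True
print(s == "héllo")     # True
```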
[Python-Dev] Re: Mixed Python/C debugging
Thanks for the responses. I know there are multiple tools out there (to wit, Wes's response), but I'm really after what people actually use and find works. I apologize that wasn't clear. I did neglect to mention that my environment is Linux (specifically Ubuntu 18.04), so Windows-based solutions aren't likely to be workable for me. For the time being, I've been working through one or two of the docs/tutorials about the parsing/compiler internals which focus on the C side, so gdb with curses display enabled (Ctrl-X a) and built-in PyObject support has been sufficient. I will eventually need mixed language debugging though. And, as an Emacs user, how this might play in that sandbox is of interest. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/IZRJX3YYOBJWJ6UAE5PIAJBPKB7IOHS2/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Mixed Python/C debugging
Having tried comp.lang.python with no response, I turn here... After at least ten years away from Python's run-time interpreter & byte code compiler, I'm getting set to familiarize myself with that again. This will, I think, entail debugging a mixed Python/C environment. I'm an Emacs user and am aware that GDB since 7.0 has support for debugging at the Python code level. Is Emacs+GDB my best bet? Are there any Python IDEs which support C-level breakpoints and debugging? Thanks, Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/L2KBZM64MYPXIITN4UU3X6L4PZS2YRTB/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: How to extricate large set of diffs from hg.python.org/sandbox?
> I think this might work: > > $ hg diff -r fb80df16c4ff -r tip > > Not sure fb80df16c4ff is the correct base revision. It seems to be > the base of Victor's work. I put the resulting patch file here: > > http://python.ca/nas/python/registervm-victor.txt Thanks, Neil. I barely remembered anything about Mercurial (not even installed on my current device). It didn't occur to me that the necessary precursor to that big diff might be as simple as hg clone https://hg.python.org/sandbox/registervm/ Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/IAMCZ3APJ2LOGWXCH7PQOV67E52KMBZV/
[Python-Dev] How to extricate large set of diffs from hg.python.org/sandbox?
Victor's experiments into a register-based virtual machine live here: https://hg.python.org/sandbox/registervm I'd like to revive them, if for no other reason than to understand what he did. I see no obvious way to collect them all as a massive diff. For the moment, I downloaded each commit and am applying them oldest to newest (against 3.3, which I think was Victor's base), correcting issues as I go along. Still, that is going to take a good long while. If there's an easier way to do this, I'm all ears. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/BR2V4XHEBMCLNWU2TMUIJBEYU2UNQBRD/
[Python-Dev] Re: New keyword in bpo: `newcomer friendly`
> There is now a “newcomer friendly” keyword in bpo. > > My hope is that going forward, we can tag issues that are suitable for first > time contributors with this keyword. Hmmm... I haven't looked lately, but didn't there used to be an "easy" tag which purported to serve roughly the same purpose? I see an "Easy issues" link in the left-hand sidebar: https://bugs.python.org/issue?status=1&@sort=-activity&@columns=id%2Cactivity%2Ctitle%2Ccreator%2Cstatus&@dispname=Easy%20issues&@startwith=0&@group=priority=6&@action=search&@filter=&@pagesize=50 This issue has the "easy" keyword: https://bugs.python.org/issue19217 Are "newcomer friendly" and "easy" aimed at somewhat different targets? Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/7C6WTTBSEYETZDSP6N2JEX2F6NWL5S7E/
[Python-Dev] Re: Replacing 4 underscores with a $ sign, idea for a PEP
My only comment is that this belongs first on python-ideas <https://mail.python.org/mailman3/lists/python-ideas.python.org/>. Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/54OTFAX74ICNALHGC5H4OMZDCIXXJ6J3/
[Python-Dev] Re: PEP 581 has been updated with "Downsides of GitHub" section
> You have missed at least one: the minimum technology requirement for > using Github is a lot more stringent than for Roundup. Github's minimum > system requirements are higher, and it doesn't degrade as well, so > moving to Github will make it much harder for those who are using older > technology. If not exclude them altogether. Is that Git or GitHub? If the latter, more JavaScript bits or something else? Skip ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/6COCTJ3E2WZUA5DTAWB34NQG5H45MX7I/
Re: [Python-Dev] PEP 594: Removing dead batteries from the standard library
> If this were my PEP, I'd call it "Removing unloved batteries from the > standard library". Or maybe, "Removing obsolete and (potentially) dangerous batteries from the standard library." I can certainly understand why either class of module would be removed. When bsddb185 was tossed out, I put it up on PyPI (https://pypi.org/project/bsddb185/) so the source wouldn't be lost. I never expected to actually need it, but I got pinged for the first time a few months ago. Someone needed help building it. So, no matter the reason for removal, I think it would be reasonable to toss all these modules into GitHub or PyPI. Someone will want them. Skip ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Easier debugging with f-strings
> My only complaint is that you steadfastly refuse use Guido’s time machine > keys to make this available in 3.7. Wait a minute, Barry. You mean you don't already have an Emacs function to do the rewriting as a pre-save-hook? Skip ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
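[Archive note: the debug form being discussed in this thread eventually landed in Python 3.8 as the f-string `=` specifier, which expands to the expression text plus its value:]

```python
x = 42
# f"{x=}" yields the source text of the expression, an equals
# sign, and the repr of its value -- no manual rewriting needed.
assert f"{x=}" == "x=42"
```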
Re: [Python-Dev] PEP 581: Using GitHub Issues for CPython
> I'd like to formally present to Python-dev PEP 581: Using GitHub Issues for > CPython > > Full text: https://www.python.org/dev/peps/pep-0581/ Thanks for doing this. I think there is a pretty strong argument to be made that mature, widely adopted systems like GitHub (or GitLab or Bitbucket) should be used where possible. One thing I didn't notice was any sort of explanation about how CPython wound up on Roundup to begin with. I think it would be worthwhile to mention a couple reasons, when the decision was made to use Roundup, etc. Without it, a casual reader might think the core devs made a horrible mistake way back when, and are only now getting around to correcting it. I don't recall when Roundup was adopted, but it was quite a while ago, and the issue tracker universe was a much different place. Here are a couple things I recall (perhaps incorrectly). It's been a while, but for the digital spelunkers, I'm sure it's all in an email archive somewhere. (I didn't find a PEP. Did that decision predate PEPs?) * Back in the olden days, basically every candidate issue tracker required modification to make it suitable for a particular project. I don't rightly recall if Roundup was deemed easier to modify or if there were just people willing to step up and make the necessary changes. * There was a desire to eat your own dog food, and I believe Roundup is/was written in Python. That would be much less important today. Plenty of people already eat Python brand Dog Food.™ Skip ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Register-based VM [Was: Possible performance regression]
> I uploaded a tarfile I had on my PC to my web site: > > http://python.ca/nas/python/rattlesnake20010813/ > > It seems his name doesn't appear in the readme or source but I think > Rattlesnake was Skip Montanaro's project. I suppose my idea of > unifying the local variables and the registers could have came from > Rattlesnake. Very little new in the world. ;-P Lot of water under the bridge since then. I would have to poke around a bit, but I think "from module import *" stumped me long enough that I got distracted by some other shiny thing. S ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] About "python-porting" mail list
> The interwebs has been collecting ton of resources about porting py2 > to 3 during these years. Any not-yet-answered question surely can be > done in a list with more participants. > > Can we kill this list? Would it perhaps make sense to replace the list with an auto-reply about the list's demise, then auto-forward their message to python-list? Skip ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Python3 compiled listcomp can't see local var - bug or feature?
> Skip, I think you have misunderstood the point I was making. It was > not whether the loop variable should leak out of a list comprehension. > Rather, it was whether a local variable should, so to speak, "leak into" > a list comprehension. And the answer is: it depends on whether the code > is executed normally, or via exec/eval. > Got it. Yes, you'll have to pass in locals to exec. (Can't verify, as I'm on the train, on my phone.) Builtins like range are global to everything, so no problem there. Your clarification also makes it more of a Python programming question, I think. Skip ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
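[Archive note: a minimal sketch of the gotcha under discussion, as it behaves in CPython 3. A name that exists only in the locals dict passed to exec is not visible inside a list comprehension, because the comprehension runs in its own function-like scope and free names are looked up in globals; putting the name in globals works:]

```python
# With x only in the locals dict, the comprehension can't see it:
try:
    exec("y = [x for i in range(2)]", {}, {"x": 5})
except NameError:
    visible = False
else:
    visible = True
assert not visible

# With x in the globals dict, the lookup succeeds:
g = {"x": 5}
exec("y = [x for i in range(2)]", g)
assert g["y"] == [5, 5]
```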
Re: [Python-Dev] Python3 compiled listcomp can't see local var - bug or feature?
> Is this a bug or a feature? The bug was me being so excited about the new construct (I pushed in someone else's work, can't recall who now, maybe Fredrik Lundh?) that I didn't consider that leaking the loop variable out of the list comprehension was a bad idea. Think of the Py3 behavior as one of those "corrections" to things which were "got wrong" in Python 1 or 2. :-) Skip ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
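[Archive note: the behavior difference, in brief. Python 3 shown; in Python 2 the final assertion would fail, because the comprehension rebound `i` in the enclosing scope:]

```python
i = "before"
squares = [i * i for i in range(3)]
assert squares == [0, 1, 4]
# In Python 3 the loop variable no longer leaks out:
assert i == "before"
```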
Re: [Python-Dev] "make test" routinely fails to terminate
> me> On the 3.7 branch, "make test" routinely fails to terminate. > Antoine> Can you try to rebuild Python? Use "make distclean" if that helps. > Thanks, Antoine. That solved the termination problem. I still have problems > with test_asyncio failing, but I can live with that for now. Final follow-up. I finally got myself a workable, updateable 3.7 branch in my fork. It looks like the asyncio issues are also resolved on both 3.7 and master. Skip ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] My fork lacks a 3.7 branch - can I create it somehow?
> You don't really need copies of official branches on your Github fork if you're not a maintainer for these branches. I explicitly wanted to run with 3.7 in the run-up to release. On that branch, the built ./python reports 3.7.0b4+ at startup. Master tells me 3.8.0a0 on startup. Since my local repo is a clone of my fork, it made sense to me to have a 3.7 branch on my fork which I could switch to. Am I the only nutcase who thinks that might be mildly useful? (Or that if I want to test an application across multiple versions using tox that it makes sense to have pre-release visibility of point releases.) Skip ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] My fork lacks a 3.7 branch - can I create it somehow?
> Create it from upstream? Yep! Try this: > git checkout -b 3.7 upstream/3.7 > git push -u origin 3.7 Thanks, Chris! Didn't have to chug for too long either, just a few seconds. S ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] "make test" routinely fails to terminate
me> On the 3.7 branch, "make test" routinely fails to terminate. Antoine> Can you try to rebuild Python? Use "make distclean" if that helps. Thanks, Antoine. That solved the termination problem. I still have problems with test_asyncio failing, but I can live with that for now. If "make distclean" is required, I suspect there is a missing/incorrect/incomplete Make dependency somewhere. I suppose "make distclean" is cheap enough that I should do it whenever I switch branches. Skip ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
[Python-Dev] My fork lacks a 3.7 branch - can I create it somehow?
My GitHub fork of the cpython repo was made awhile ago, before a 3.7 branch was created. I have no remotes/origin/3.7. Is there some way to create it from remotes/upstream/3.7? I asked on GitHub's help forums. The only recommendation was to delete my fork and recreate it. That seemed kind of drastic, and I will do it if that's really the only way, but this seems like functionality Git and/or GitHub probably supports. Thx, Skip ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com