Re: [Python-Dev] Network Security Backport Status
I have to agree with Antoine -- I don't think there's a shortcut that avoids *someone* actually having to understand the code to the point of being able to recreate the same behavior in the different context (pun not intended) of Python 2. On Tue, Jul 1, 2014 at 1:54 PM, Antoine Pitrou wrote: > Le 01/07/2014 14:26, Alex Gaynor a écrit : > > >> I can do all the work of reviewing each commit, but I need some help from >> a >> mercurial expert to automate the cherry-picking/rebasing of every single >> commit. >> >> What do folks think? Does this approach make sense? Anyone willing to >> help with >> the mercurial scripting? >> > > I don't think this makes much sense; Mercurial won't be smarter than you > are. I think you'd have a better chance of succeeding by backporting one > feature at a time. IMO, you'd first want to backport the _SSLContext base > class and SSLContext.wrap_socket(). The latter *will* require some manual > coding to adapt to 2.7's different SSLSocket implementation, not just > applying patch hunks around. > > Regards > > Antoine. > > > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] buildbot.python.org down again?
It would still be nice to know who "the appropriate persons" are. Too much of our infrastructure seems to be maintained by house elves or the ITA. On Sun, Jul 6, 2014 at 11:33 PM, Terry Reedy wrote: > On 7/6/2014 7:54 PM, Ned Deily wrote: > >> As of the moment, buildbot.python.org seems to be down again. >> > > Several hours later, back up. > > > > Where is the best place to report problems like this? > > We should have, if not already, an automatic system to detect down servers > and report (email) to appropriate persons. > > -- > Terry Jan Reedy > > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] buildbot.python.org down again?
It's a reference to Neal Stephenson's Anathem. On Jul 7, 2014 8:55 AM, "Benjamin Peterson" wrote: > On Mon, Jul 7, 2014, at 08:44, Guido van Rossum wrote: > > It would still be nice to know who "the appropriate persons" are. Too > > much > > of our infrastructure seems to be maintained by house elves or the ITA. > > :) Is ITA "International Trombone Association"? > ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] buildbot.python.org down again?
May the true owner of buildbot.python.org stand up! (But I do think there may well not be anyone who feels they own it. And that's a problem for its long term viability.) Generally speaking, as an organization we should set up a process for managing ownership of *all* infrastructure in a uniform way. I don't mean to say that we need to manage all infrastructure uniformly, just that we need to have a process for identifying and contacting the owner(s) for each piece of infrastructure, as well as collecting other information that people besides the owners might need to know. You can use a wiki page for that list for all I care, but have a process for what belongs there, how/when to update it, and even an owner for the wiki page! Stuff like this shouldn't be just in a few people's heads (even if they are board members) nor should it be in a file in a repo that nobody has ever heard of. On Tue, Jul 8, 2014 at 12:33 AM, Donald Stufft wrote: > > On Jul 8, 2014, at 12:58 AM, Nick Coghlan wrote: > > > On 7 Jul 2014 10:47, "Guido van Rossum" wrote: > > > > It would still be nice to know who "the appropriate persons" are. Too > much of our infrastructure seems to be maintained by house elves or the ITA. > > I volunteered to be the board's liaison to the infrastructure team, and > getting more visibility around what the infrastructure *is* and how it's > monitored and supported is going to be part of that. That will serve a > couple of key purposes: > > - making the points of escalation clearer if anything breaks or needs > improvement (although "infrastruct...@python.org" is a good default > choice) > - making the current "todo" list of the infrastructure team more visible > (both to calibrate resolution time expectations and to provide potential > contributors an idea of what's involved) > > Noah has already set up http://status.python.org/ to track service > status, I can see about getting buildbot.python.org added to the list. > > Cheers, > Nick. > > > We (the infrastructure team) were actually looking earlier about > buildbot.python.org and we’re not entirely sure who "owns" > buildbot.python.org. > Unfortunately a lot of the *.python.org services are in a similar state > where > there is no clear owner. Generally we've not wanted to just step in and > take > over for fear of stepping on someones toes but it appears that perhaps > buildbot.p.o has no owner? > > - > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 3121, 384 Refactoring Issues
I don't know the details, but I suspect that was the result of my general guideline "don't start projects cleaning up lots of stdlib code just to satisfy some new style rule or just to use a new API" -- which came from hard-won experience where such a cleanup project introduced some new bugs that weren't found by review nor by tests. Though that was admittedly a long time ago. Still, such a project can really sap reviewer resources for relatively little benefit. On Thu, Jul 10, 2014 at 12:59 PM, Brett Cannon wrote: > [for those that don't know, 3121 is extension module init/finalization and > 384 is the stable ABI] > > > On Thu Jul 10 2014 at 3:47:03 PM, Mark Lawrence > wrote: > >> I'm just curious as to why there are 54 open issues after both of these >> PEPs have been accepted and 384 is listed as finished. Did we hit some >> unforeseen technical problem which stalled development? >> > > No, the PEPs were fine and were accepted properly. A huge portion of the > open issues are from Robin Schreiber who as part of GSoC 2012 -- > https://www.google-melange.com/gsoc/project/details/google/gsoc2012/robin_hood/5668600916475904 > -- went through and updated the stdlib to follow the new practices > introduced in the two PEPs. Not sure if there was some policy decision made > that updating the code wasn't worth it or people simply didn't get around > to applying the patches. > > -Brett > > >> >> For these and any other open issues if you need some Windows testing >> doing please feel free to put me on the nosy list and ask for a test run. >> >> -- >> My fellow Pythonistas, ask not what our language can do for you, ask >> what you can do for our language. >> >> Mark Lawrence >> >> --- >> This email is free from viruses and malware because avast! Antivirus >> protection is active. >> http://www.avast.com >> >> >> ___ >> Python-Dev mailing list >> Python-Dev@python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/ >> brett%40python.org >> > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] sum(...) limitation
No. We just can't put all possible use cases in the docstring. :-) On Fri, Aug 1, 2014 at 2:48 PM, Andrea Griffini wrote: > help(sum) tells clearly that it should be used to sum numbers and not > strings, and with strings actually fails. > > However sum([[1,2,3],[4],[],[5,6]], []) concatenates the lists. > > Is this to be considered a bug? > > Andrea > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
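For readers without an interpreter handy, a small illustration of the behaviour under discussion and the idioms usually recommended instead (nothing here is from the original thread)::

    import itertools

    nested = [[1, 2, 3], [4], [], [5, 6]]

    # sum() works on anything supporting +, provided you supply a start
    # value; the default start of 0 is why strings (and lists) fail
    # without one.
    print(sum(nested, []))                        # [1, 2, 3, 4, 5, 6]

    # The usual, faster idioms for flattening one level of nesting:
    print(list(itertools.chain.from_iterable(nested)))
    print([x for sub in nested for x in sub])

    # Strings are special-cased to steer people towards str.join():
    try:
        sum(['a', 'b'], '')
    except TypeError as exc:
        print(exc)   # message suggests using ''.join(seq) instead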
Re: [Python-Dev] Exposing the Android platform existence to Python modules
Or SL4A? (https://github.com/damonkohler/sl4a) On Fri, Aug 1, 2014 at 8:06 PM, Steven D'Aprano wrote: > On Sat, Aug 02, 2014 at 05:53:45AM +0400, Akira Li wrote: > > > Python uses os.name, sys.platform, and various functions from `platform` > > module to provide version info: > [...] > > If Android is posixy enough (would `posix` module work on Android?) > > then os.name could be left 'posix'. > > Does anyone know what kivy does when running under Android? > > > -- > Steven > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Exposing the Android platform existence to Python modules
On Sat, Aug 2, 2014 at 12:53 AM, Phil Thompson wrote: > To me the issue is whether, for a particular value of sys.platform, the > programmer can expect a particular Python stdlib API. If so then Android > needs a different value for sys.platform. > sys.platform is for a broad indication of the OS kernel. It can be used to distinguish Windows, Mac and Linux (and BSD, Solaris etc.). Since Android is Linux it should have the same sys.platform as other Linux systems ('linux2'). If you want to know whether a specific syscall is there, check for the presence of the method in the os module. The platform module is suitable for additional vendor-specific info about the platform, and I'd hope that there's something there that indicates Android. Again, what values does the platform module return on SL4A or Kivy, which have already ported Python to Android? In particular, I'd expect platform.linux_distribution() to return a clue that it's Android. There should also be clues in /etc/lsb-release (assuming Android supports it :-). -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
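To make the three layers described here concrete -- the coarse sys.platform value, feature detection against the os module, and vendor-level details from the platform module -- a rough sketch (all standard library calls; the output naturally depends on the system it runs on)::

    import os
    import platform
    import sys

    # Coarse kernel-level identification: any Linux system, Android
    # included, reports a 'linux*' value here.
    print(sys.platform)

    # Feature detection: test for the specific call rather than guessing
    # from the platform name.
    if hasattr(os, 'statvfs'):
        print('free blocks on /:', os.statvfs('/').f_bavail)

    # Vendor-specific details belong in the platform module.
    print(platform.system(), platform.release())
    if hasattr(platform, 'linux_distribution'):   # removed in newer Pythons
        print(platform.linux_distribution())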
Re: [Python-Dev] Exposing the Android platform existence to Python modules
Right. On Saturday, August 2, 2014, Phil Thompson wrote: > On 02/08/2014 7:36 pm, Guido van Rossum wrote: > >> On Sat, Aug 2, 2014 at 12:53 AM, Phil Thompson < >> p...@riverbankcomputing.com> >> wrote: >> >> To me the issue is whether, for a particular value of sys.platform, the >>> programmer can expect a particular Python stdlib API. If so then Android >>> needs a different value for sys.platform. >>> >>> >> sys.platform is for a broad indication of the OS kernel. It can be used to >> distinguish Windows, Mac and Linux (and BSD, Solaris etc.). Since Android >> is Linux it should have the same sys.platform as other Linux systems >> ('linux2'). If you want to know whether a specific syscall is there, check >> for the presence of the method in the os module. >> > > It's not just the os module - other modules contain code that would be > affected, but there are plenty of other parts of the Python stdlib that > aren't implemented on every platform. Using the approach you prefer then > all that's needed is to update the documentation to say that certain things > are not implemented on Android. > > Phil > -- --Guido van Rossum (on iPad) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Exposing the Android platform existence to Python modules
On Sat, Aug 2, 2014 at 12:14 PM, Shiz wrote: > Guido van Rossum wrote: > > sys.platform is for a broad indication of the OS kernel. It can be > > used to distinguish Windows, Mac and Linux (and BSD, Solaris etc.). > > Since Android is Linux it should have the same sys.platform as other > > Linux systems ('linux2'). If you want to know whether a specific > > syscall is there, check for the presence of the method in the os > > module. > > > > The platform module is suitable for additional vendor-specific info > > about the platform, and I'd hope that there's something there that > > indicates Android. Again, what values does the platform module return > > on SL4A or Kivy, which have already ported Python to Android? In > > particular, I'd expect platform.linux_distribution() to return a > > clue that it's Android. There should also be clues in > > /etc/lsb-release (assuming Android supports it :-). > > > > -- --Guido van Rossum (python.org/~guido <http://python.org/~guido>) > > To the best of my knowledge, Kivy and Py4A/SL4A don't modify that code > at all, so it just returns 'linux2'. In addition, they don't modify > platform.py either, so platform.linux_distribution() returns empty values. > OK, so personally I'd leave sys.platform but improve on platform.linux_distribution(). > My patchset[1] currently contains patches that both set sys.platform to > 'linux-android' and modifies platform.linux_distribution() to parse and > return a proper value for Android systems: > > >>> import sys, platform sys.platform > 'linux-android' > >>> platform.linux_distribution() > ('Android', '4.4.2', 'Blur_Version.174.44.9.falcon_umts.EURetail.en.EU') > > The sys.platform thing was mainly done out of curiosity on its > possibility after Phil bringing it up. Can you give a few examples of where you'd need to differentiate Android from other Linux platforms in otherwise portable code, and where testing for the presence or absence of the specific function that you'd like to call isn't possible? I know I pretty much never test for the difference between OSX and other UNIX variants (including Linux) -- the only platform distinction that regularly comes up in my own code is Windows vs. the rest. And even there, often the right thing to test for is something more specific like os.sep. > My main issue with leaving > Android detection to checking platform.linux_distribution() is that it > feels like a bit of a wonky thing for core Python modules to rely on to > change behaviour where needed on Android (as well as introducing a > dependency cycle between subprocess and platform right now). > What's the specific change in stdlib behavior that you're proposing for Android? > I'd also like to note that I wouldn't agree with following too many of > Kivy/Py4A/SL4A's design decisions on this, as they seem mostly absent. > - From what I've read, their patches mostly seem geared towards getting > Python to run on Android, not necessarily integrating it well or fixing > all inconsistencies. This also leads to things like subprocess.Popen() > indeed breaking with shell=True[2]. > I'm all for fixing subprocess.Popen(), though I'm not sure what the best way is to determine this particular choice (why is it in the first place that /bin/sh doesn't work?). However, since it's a stdlib module you could easily rely on a private API to detect Android, so this doesn't really force the sys.platform issue. (Or you could propose a fix that will work for Kivi and SL4A as well, e.g. 
checking for some system file that is documented as unique to Android.) > > Kind regards, > Shiz > > [1]: https://github.com/rave-engine/python3-android/tree/master/src > [2]: > > http://grokbase.com/t/gg/python-for-android/1343rm7q1w/py4a-subprocess-popen-oserror-errno-8-exec-format-error > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Exposing the Android platform existence to Python modules
Well, it really does look like checking for the presence of those ANDROID_* environment variables is the best way to recognize the Android platform. Anyone can do that without waiting for a ruling on whether Android is Linux or not (which would be necessary because the docs for sys.platform are quite clear about its value on Linux systems). Googling terms like "is Android Linux" suggests that there is considerable controversy about the issue, so I suggest you don't wait. :-) On Sat, Aug 2, 2014 at 3:49 PM, Shiz wrote: > Guido van Rossum wrote: > > Can you give a few examples of where you'd need to differentiate > > Android from other Linux platforms in otherwise portable code, and > > where testing for the presence or absence of the specific function > > that you'd like to call isn't possible? I know I pretty much never > > test for the difference between OSX and other UNIX variants > > (including Linux) -- the only platform distinction that regularly > > comes up in my own code is Windows vs. the rest. And even there, > > often the right thing to test for is something more specific like > > os.sep. > > > What's the specific change in stdlib behavior that you're proposing > > for Android? > > The most obvious change would be to subprocess.Popen(). The reason a > generic approach there won't work is also the reason I expect more > changes might be needed: the Android file system doesn't abide by any > POSIX file system standards. Its shell isn't located at /bin/sh, but at > /system/bin/sh. The only directories it provides that are POSIX-standard > are /dev and /etc, to my knowledge. You could check to see if > /system/bin/sh exists and use that first, but that would break the > preferred shell on POSIX systems that happen to have /system for some > reason or another. In short: the preferred shell on POSIX systems is > /bin/sh, but on Android it's /system/bin/sh. Simple existence checking > might break the preferred shell on either. For more specific stdlib > examples I'd have to check the test suite again. > > I can see the point of a sys.platform change not necessarily being > needed, but it would be nice for user code too to have a sort-of trivial > way to figure out if it's running on Android. While core CPython might > in general care far less, for user applications it's a bigger deal since > they have to draw GUIs and use system services in a way that *is* > usually very different on Android. Again, platform.linux_distribution() > seems more for display purposes than for applications to check their > core logic against. > In addition, apparently platform.linux_distribution() is getting > deprecated in 3.5 and removed in 3.6[1]. > > I agree that above issue should in fact be solved by the earlier-linked > to os.get_preferred_shell() approach, however. > > > However, since it's a stdlib module you could easily rely on a > > private API to detect Android, so this doesn't really force the > > sys.platform issue. (Or you could propose a fix that will work for > > Kivi and SL4A as well, e.g. checking for some system file that is > > documented as unique to Android.) > > After checking most of the entire Android file system, I'm not sure if > such a file exists. Sure, a lot of the Android file system hierarchy > isn't really used anywhere else, but I'm not sure a check to see if e.g. > /system exists is really enough to conclude Python is running on Android > on its own. 
The thing that gets closest (which is the thing my > platform.py patch checks for) is several Android-specific environment > variables being defined (ANDROID_ROOT, ANDROID_DATA, > ANDROID_PROPERTY_WORKSPACE...). Wouldn't it be better to put this in the > standard Python library and expose it somehow, though? It *is* fragile > code, it seems better if applications could 'just rely' on Python to > figure it out, since it's not a trivial check. > > Kind regards, > Shiz > > [1]: http://bugs.python.org/issue1322#msg207427
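Purely as an illustration of the check being discussed, a minimal sketch; the helper name is made up and not part of any stdlib API, and the heuristic can obviously be fooled::

    import os

    # Android's init sets these for every process; a plain Linux box
    # normally does not define them (though nothing stops it from doing
    # so, which is why this is only a best-effort heuristic).
    _ANDROID_VARS = ('ANDROID_ROOT', 'ANDROID_DATA', 'ANDROID_PROPERTY_WORKSPACE')

    def looks_like_android(environ=os.environ):
        return any(name in environ for name in _ANDROID_VARS)

    # Example use: pick the shell location mentioned in the thread.
    shell = '/system/bin/sh' if looks_like_android() else '/bin/sh'
    print(shell)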
Re: [Python-Dev] Exposing the Android platform existence to Python modules
But *are* we going to support Android officially? What's the point? Do you have a plan for getting Python apps to first-class status in the App Store (um, Google Play)? Regardless, I recommend that you add a new method to the platform module (careful people can test for the presence of the new method before calling it) and leave poor sys.platform alone. On Sat, Aug 2, 2014 at 10:18 PM, Shiz wrote: > Guido van Rossum wrote: > > Well, it really does look like checking for the presence of those > > ANDROID_* environment variables it the best way to recognize the > > Android platform. Anyone can do that without waiting for a ruling on > > whether Android is Linux or not (which would be necessary because the > > docs for sys.platform are quite clear about its value on Linux > > systems). Googling terms like "is Android Linux" suggests that there > > is considerable controversy about the issue, so I suggest you don't > > wait. :-) > > Right, which brings us back to the original point I was trying to make: > any chance we could move logic like that into a sys.getandroidversion() > or platform.android_version() so user code (and standard library code > alike) doesn't have to perform those relatively nasty checks themselves? > It seems like a fair thing to do if CPython would support Android as an > official target. > > Kind regards, > Shiz > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Exposing the Android platform existence to Python modules
On Sun, Aug 3, 2014 at 10:16 AM, Phil Thompson wrote: > On 03/08/2014 4:58 pm, Guido van Rossum wrote: > >> But *are* we going to support Android officially? What's the point? Do you >> have a plan for getting Python apps to first-class status in the App Store >> (um, Google Play)? >> > > I do... > > http://pyqt.sourceforge.net/Docs/pyqtdeploy/introduction.html > > Phil > Oooh, that's pretty cool! -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] pathlib handling of trailing slash (Issue #21039)
Hm. I personally consider a trailing slash significant. It feels semantically different (and in some cases it is) so I don't think it should be normalized. The behavior of os.path.split() here feels right. On Wed, Aug 6, 2014 at 7:30 PM, Antoine Pitrou wrote: > > Le 06/08/2014 22:12, Ben Finney a écrit : > > You seem to be saying that ‘pathlib’ is not intended to be helpful for >> constructing a shell command. >> > > pathlib lets you do operations on paths. It also gives you a string > representation of the path that's expected to designate that path when > talking to operating system APIs. It doesn't give you the possibility to > store other semantic variations ("whether a new directory level must be > created"); that's up to you to add those. > > (similarly, it doesn't have separate classes to represent "a file", "a > directory", "a non-existing file", etc.) > > Regards > > Antoine. > > > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
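A short interpreter-style illustration of the difference in question -- os.path keeps the trailing slash significant, while pathlib normalizes it away (PurePosixPath is used so the output is platform-independent)::

    >>> import os.path, pathlib
    >>> os.path.split('foo/bar')
    ('foo', 'bar')
    >>> os.path.split('foo/bar/')     # trailing slash -> empty last component
    ('foo/bar', '')
    >>> pathlib.PurePosixPath('foo/bar/')   # trailing slash is dropped
    PurePosixPath('foo/bar')
    >>> str(pathlib.PurePosixPath('foo/bar/'))
    'foo/bar'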
Re: [Python-Dev] Multiline with statement line continuation
On Tue, Aug 12, 2014 at 3:43 AM, Devin Jeanpierre wrote: > I think this thread is probably Python-Ideas territory... > > On Mon, Aug 11, 2014 at 4:08 PM, Allen Li wrote: > > Currently, this works with explicit line continuation, but as all style > > guides favor implicit line continuation over explicit, it would be nice > > if you could do the following: > > > > with (open('foo') as foo, > > open('bar') as bar, > > open('baz') as baz, > > open('spam') as spam, > > open('eggs') as eggs): > > pass > > The parentheses seem unnecessary/redundant/weird. Why not allow > newlines in-between "with" and the terminating ":"? > > with open('foo') as foo, >open('bar') as bar, >open('baz') as baz: > pass > That way lies Coffeescript. Too much guessing. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
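Worth noting alongside this thread: for stacking many context managers without one long with line, contextlib.ExitStack already works today. A brief sketch, reusing the file names from the example above::

    from contextlib import ExitStack

    names = ['foo', 'bar', 'baz', 'spam', 'eggs']

    # Context managers entered via the stack are exited in reverse order
    # when the with block ends, just as a nested `with` would do.
    with ExitStack() as stack:
        files = [stack.enter_context(open(name)) for name in names]
        # ... work with the open files here ...
        print([f.name for f in files])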
Re: [Python-Dev] Documenting enum types
The enum types must be documented and exported, since users will encounter them. On Aug 14, 2014 4:54 AM, "Nick Coghlan" wrote: > On 14 August 2014 19:25, Victor Stinner wrote: > > Hi, > > > > IMO we should not document enum types because Python implementations > other > > than CPython may want to implement them differently (ex: not all Python > > implementations have an enum module currently). By experience, exposing > too > > many things in the public API becomes a problem later when you want to > > modify the code. > > Implementations claiming conformance with Python 3.4 will have to have > an enum module - there just aren't any of those other than CPython at > this point (I expect PyPy3 will catch up before too long, since the > changes between 3.2 and 3.4 shouldn't be too dramatic from an > implementation perspective). > > In this particular case, though, I think the relevant question is "Why > are they enums?" and the answer is "for the better representations". > I'm not clear on the use case for exposing and documenting the enum > types themselves (although I don't have any real objection either). > > Regards, > Nick. > > -- > Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 467: Minor API improvements for bytes & bytearray
become > especially confusing now that some other ``bytes`` interfaces treat > integers > and the corresponding length 1 bytes instances as equivalent input. > Compare:: > > >>> b"\x03" in bytes([1, 2, 3]) > True > >>> 3 in bytes([1, 2, 3]) > True > > >>> bytes(b"\x03") > b'\x03' > >>> bytes(3) > b'\x00\x00\x00' > > This PEP proposes that the current handling of integers in the bytes and > bytearray constructors by deprecated in Python 3.5 and targeted for > removal in Python 3.7, being replaced by two more explicit alternate > constructors provided as class methods. The initial python-ideas thread > [ideas-thread1]_ that spawned this PEP was specifically aimed at > deprecating > this constructor behaviour. > > Firstly, a ``byte`` constructor is proposed that converts integers > in the range 0 to 255 (inclusive) to a ``bytes`` object:: > > >>> bytes.byte(3) > b'\x03' > >>> bytearray.byte(3) > bytearray(b'\x03') > >>> bytes.byte(512) > Traceback (most recent call last): > File "", line 1, in > ValueError: bytes must be in range(0, 256) > > One specific use case for this alternate constructor is to easily convert > the result of indexing operations on ``bytes`` and other binary sequences > from an integer to a ``bytes`` object. The documentation for this API > should note that its counterpart for the reverse conversion is ``ord()``. > The ``ord()`` documentation will also be updated to note that while > ``chr()`` is the counterpart for ``str`` input, ``bytes.byte`` and > ``bytearray.byte`` are the counterparts for binary input. > > Secondly, a ``zeros`` constructor is proposed that serves as a direct > replacement for the current constructor behaviour, rather than having to > use > sequence repetition to achieve the same effect in a less intuitive way:: > > >>> bytes.zeros(3) > b'\x00\x00\x00' > >>> bytearray.zeros(3) > bytearray(b'\x00\x00\x00') > > The chosen name here is taken from the corresponding initialisation > function > in NumPy (although, as these are sequence types rather than N-dimensional > matrices, the constructors take a length as input rather than a shape > tuple) > > While ``bytes.byte`` and ``bytearray.zeros`` are expected to be the more > useful duo amongst the new constructors, ``bytes.zeros`` and > `bytearray.byte`` are provided in order to maintain API consistency between > the two types. > > > Iteration > - > > While iteration over ``bytes`` objects and other binary sequences produces > integers, it is sometimes desirable to iterate over length 1 bytes objects > instead. 
> > To handle this situation more obviously (and more efficiently) than would > be > the case with the ``map(bytes.byte, data)`` construct enabled by the above > constructor changes, this PEP proposes the addition of a new ``iterbytes`` > method to ``bytes``, ``bytearray`` and ``memoryview``:: > > for x in data.iterbytes(): > # x is a length 1 ``bytes`` object, rather than an integer > > Third party types and arbitrary containers of integers that lack the new > method can still be handled by combining ``map`` with the new > ``bytes.byte()`` alternate constructor proposed above:: > > for x in map(bytes.byte, data): > # x is a length 1 ``bytes`` object, rather than an integer > # This works with *any* container of integers in the range > # 0 to 255 inclusive > > > Open questions > ^^ > > * The fallback case above suggests that this could perhaps be better > handled > as an ``iterbytes(data)`` *builtin*, that used ``data.__iterbytes__()`` > if defined, but otherwise fell back to ``map(bytes.byte, data)``:: > > for x in iterbytes(data): > # x is a length 1 ``bytes`` object, rather than an integer > # This works with *any* container of integers in the range > # 0 to 255 inclusive > > > References > == > > .. [ideas-thread1] > https://mail.python.org/pipermail/python-ideas/2014-March/027295.html > .. [empty-buffer-issue] http://bugs.python.org/issue20895 > .. [GvR-initial-feedback] > https://mail.python.org/pipermail/python-ideas/2014-March/027376.html > > > Copyright > = > > This document has been placed in the public domain. > > -- > Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 4000 to explicitly declare we won't be doing a Py3k style compatibility break again?
On Sat, Aug 16, 2014 at 6:28 PM, Nick Coghlan wrote: > I've seen a few people on python-ideas express the assumption that > there will be another Py3k style compatibility break for Python 4.0. > There used to be only joking references to 4.0 or py4k -- how things have changed! I've seen nothing that a gentle correction on the list couldn't fix though. > I've also had people express the concern that "you broke compatibility > in a major way once, how do we know you won't do it again?". > Well, they won't, really. You can't predict the future. But really, that's a pretty poor way to say "please don't do it again." I'm not sure why, but I hate when someone starts a suggestion or a question with "why doesn't Python ..." and I have to fight the urge to reply in a flippant way without answering the real question. (And just now I did it again.) I suppose this phrasing may actually be meant as a form of politeness, but to me it often sounds passive-aggressive, pretend-polite. (Could it be a matter of cultural difference? The internet is full of broken English, my own often included.) > Both of those contrast strongly with Guido's stated position that he > never wants to go through a transition like the 2->3 one again. > Right. What's more, when I say that, I don't mean that you should wait until I retire -- I think it's genuinely a bad idea. I also don't expect that it'll be necessary -- in fact, I am counting on tools (e.g. static analysis!) to improve to the point where there won't be a reason for such a transition. (Don't understand this to mean that we should never deprecate things. Deprecations will happen, they are necessary for the evolution of any programming language. But they won't ever hurt in the way that Python 3 hurt.) > Barry wrote PEP 404 to make it completely explicit that python-dev had > no plans to create a Python 2.8 release. Would it be worth writing a > similarly explicit "not an option" PEP explaining that the regular > deprecation and removal process (roughly documented in PEP 387) is the > *only* deprecation and removal process? It could also point to the > fact that we now have PEP 411 (provisional APIs) to help reduce our > chances of being locked indefinitely into design decisions we aren't > happy with. > > If folks (most significantly, Guido) are amenable to the idea, it > shouldn't take long to put such a PEP together, and I think it could > help reduce some of the confusions around the expectations for Python > 4.0 and the evolution of 3.x in general. > But what should it say? It's easy to say there won't be a 2.8 because we already have 3.0 (and 3.1, and 3.2, and ...). But can we really say there won't be a 4.0? Never? Why not? Who is to say that at some point some folks won't be going off on their own to design a whole new language and name it Python 4, following Larry Wall's Perl 6 example? I think it makes sense to occasionally remind the more eager contributors that we want the future to come gently (that's not to say in our sleep :-). But I'm not sure a PEP is the best form for such a reminder. Even the Pope has a Twitter account. :-) -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 4000 to explicitly declare we won't be doing a Py3k style compatibility break again?
I think this would be a great topic for a blog post. Once you've written it I can even bless it by Tweeting about it. :-) PS. Why isn't PEP 387 accepted yet? On Sat, Aug 16, 2014 at 8:48 PM, Nick Coghlan wrote: > On 17 August 2014 12:43, Guido van Rossum wrote: > > On Sat, Aug 16, 2014 at 6:28 PM, Nick Coghlan > wrote: > >> I've also had people express the concern that "you broke compatibility > >> in a major way once, how do we know you won't do it again?". > > > > > > Well, they won't, really. You can't predict the future. But really, > that's a > > pretty poor way to say "please don't do it again." > > > > I'm not sure why, but I hate when someone starts a suggestion or a > question > > with "why doesn't Python ..." and I have to fight the urge to reply in a > > flippant way without answering the real question. (And just now I did it > > again.) > > > > I suppose this phrasing may actually be meant as a form of politeness, > but > > to me it often sounds passive-aggressive, pretend-polite. (Could it be a > > matter of cultural difference? The internet is full of broken English, my > > own often included.) > > I don't mind it if the typical answers are accepted as valid: > > * "because it has these downsides, and those are considered to > outweigh the benefits" > * "because it's difficult, and it never bothered anyone enough for > them to put in the work to do something about it" > > Those aren't always obvious, especially to folks that don't have a lot > of experience with long lived software projects (I had only just > started high school when Python was first released!), so I don't mind > explaining them when I have time. > > >> Both of those contrast strongly with Guido's stated position that he > >> never wants to go through a transition like the 2->3 one again. > > > > Right. What's more, when I say that, I don't mean that you should wait > until > > I retire -- I think it's genuinely a bad idea. > > Absolutely agreed - I think the Unicode change was worthwhile (even > with the impact proving to be higher than expected), but there isn't > any such fundamental change to the data model lurking for Python 3. > > > I also don't expect that it'll be necessary -- in fact, I am counting on > > tools (e.g. static analysis!) to improve to the point where there won't > be a > > reason for such a transition. > > The fact that things like Hylang and MacroPy can already run on the > CPython VM also shows that other features (like import hooks and the > AST compiler) have evolved to the point where the Python data model > and runtime semantics can be more effectively decoupled from syntactic > details. > > > (Don't understand this to mean that we should never deprecate things. > > Deprecations will happen, they are necessary for the evolution of any > > programming language. But they won't ever hurt in the way that Python 3 > > hurt.) > > Right. I think Python 2 has been stable for so long that I sometimes > wonder if folks forget (or never knew?) we used to deprecate things > within the Python 2 series as well, such that code that ran on Python > 2.x wasn't necessarily guaranteed to run on Python 2.(x+2). "Never > deprecate anything" is a recipe for unbounded growth in complexity. > > Benjamin has made a decent start on documenting that normal > deprecation process in PEP 387, so I'd also suggest refining that a > bit and getting it to "Accepted" as part of any explicit "Python 4.x > won't be as disruptive as 3.x" clarification. > > >> no plans to create a Python 2.8 release. 
Would it be worth writing a > >> similarly explicit "not an option" PEP explaining that the regular > >> deprecation and removal process (roughly documented in PEP 387) is the > >> *only* deprecation and removal process? It could also point to the > >> fact that we now have PEP 411 (provisional APIs) to help reduce our > >> chances of being locked indefinitely into design decisions we aren't > >> happy with. > >> > >> If folks (most significantly, Guido) are amenable to the idea, it > >> > >> shouldn't take long to put such a PEP together, and I think it could > >> help reduce some of the confusions around the expectations for Python > >> 4.0 and the evolution of 3.x in general. > > > > But what should it say? > > The specific things I was thi
Re: [Python-Dev] "embedded NUL character" exceptions
Sounds good to me. On Sun, Aug 17, 2014 at 7:47 AM, Serhiy Storchaka wrote: > Currently most functions which accepts string argument which then passed > to C function as NUL-terminated string, reject strings with embedded NUL > character and raise TypeError. ValueError looks more appropriate here, > because argument type is correct (str), only its value is wrong. But this > is backward incompatible change. > > I think that we should get rid of this legacy inconsistency sooner or > later. Why not fix it right now? I have opened an issue on the tracker [1], > but this issue requires more broad discussion. > > [1] http://bugs.python.org/issue22215 > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
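A tiny way to observe the behaviour being discussed; since the exception type is exactly what the proposal would change, both possibilities are caught here::

    import os

    # Paths are handed to C APIs as NUL-terminated strings, so an embedded
    # '\0' must be rejected; the question above is whether that rejection
    # should be a TypeError (historical) or a ValueError (proposed).
    try:
        os.stat('foo\0bar')
    except (TypeError, ValueError) as exc:
        print(type(exc).__name__, exc)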
Re: [Python-Dev] Fwd: PEP 467: Minor API improvements for bytes & bytearray
On Sun, Aug 17, 2014 at 5:22 PM, Barry Warsaw wrote: > On Aug 18, 2014, at 10:08 AM, Nick Coghlan wrote: > > >There's actually another aspect to your idea, independent of the naming: > >exposing a view rather than just an iterator. I'm going to have to look at > >the implications for memoryview, but it may be a good way to go (and would > >align with the iterator -> view changes in dict). > > Yep! Maybe that will inspire a better spelling. :) > +1. It's just as much about b[i] as it is about "for c in b", so a view sounds right. (The view would have to be mutable for bytearrays and for writable memoryviews.) On the rest, it's sounding more and more as if we will just need to live with both bytes(1000) and bytearray(1000). A warning sounds worse than a deprecation to me. bytes.zeros(n) sounds fine to me; I value similar interfaces for bytes and bytearray pretty highly. I'm lukewarm on bytes.byte(c); but bytes([c]) does bother me because a size one list is (or at least feels) more expensive to allocate than a size one bytes object. So, okay. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
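For context, a recap of the constructor behaviour the PEP wants to tidy up, plus the spellings that already work unambiguously today::

    >>> bytes(3)          # int argument: that many zero bytes
    b'\x00\x00\x00'
    >>> bytearray(3)
    bytearray(b'\x00\x00\x00')
    >>> bytes([3])        # iterable of ints: those byte values
    b'\x03'
    >>> b'\x00' * 3       # unambiguous run of zero bytes
    b'\x00\x00\x00'
    >>> bytes([65, 66, 67])
    b'ABC'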
Re: [Python-Dev] PEP 4000 to explicitly declare we won't be doing a Py3k style compatibility break again?
On Sun, Aug 17, 2014 at 6:29 AM, Barry Warsaw wrote: > On Aug 16, 2014, at 07:43 PM, Guido van Rossum wrote: > > >(Don't understand this to mean that we should never deprecate things. > >Deprecations will happen, they are necessary for the evolution of any > >programming language. But they won't ever hurt in the way that Python 3 > >hurt.) > > It would be useful to explore what causes the most pain in the 2->3 > transition? IMHO, it's not the deprecations or changes such as print -> > print(). It's the bytes/str split - a fundamental change to core and > common > data types. The question then is whether you foresee any similar looming > pervasive change? [*] > I'm unsure about what's the single biggest pain moving to Python 3. In the past I would have said that it's for sure the bytes/str split (which both the biggest pain and the biggest payoff). But if I look carefully into the soul of teams that are still on 2.7 (I know a few... :-), I think the real reason is that Python 3 changes so many different things, you have to actually understand your code to port it (unlike with minor version transitions, where the changes usually spike in one specific area, and you can leave the rest to normal attrition and periodic maintenance). -Barry > > [*] I was going to add a joke about mandatory static type checking, but > sometimes jokes are blown up into apocalyptic prophesy around here. ;) > Heh. :-) -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Fwd: PEP 467: Minor API improvements for bytes & bytearray
On Tue, Aug 19, 2014 at 5:25 AM, Nick Coghlan wrote: > On 18 August 2014 10:45, Guido van Rossum wrote: > > On Sun, Aug 17, 2014 at 5:22 PM, Barry Warsaw wrote: > >> > >> On Aug 18, 2014, at 10:08 AM, Nick Coghlan wrote: > >> > >> >There's actually another aspect to your idea, independent of the > naming: > >> >exposing a view rather than just an iterator. I'm going to have to look > >> > at > >> >the implications for memoryview, but it may be a good way to go (and > >> > would > >> >align with the iterator -> view changes in dict). > >> > >> Yep! Maybe that will inspire a better spelling. :) > > > > > > +1. It's just as much about b[i] as it is about "for c in b", so a view > > sounds right. (The view would have to be mutable for bytearrays and for > > writable memoryviews.) > > > > On the rest, it's sounding more and more as if we will just need to live > > with both bytes(1000) and bytearray(1000). A warning sounds worse than a > > deprecation to me. > > I'm fine with keeping bytearray(1000), since that works the same way > in both Python 2 & 3, and doesn't seem likely to be invoked > inadvertently. > > I'd still like to deprecate "bytes(1000)", since that does different > things in Python 2 & 3, while "b'\x00' * 1000" does the same thing in > both. > I think any argument based on what "bytes" does in Python 2 is pretty weak, since Python 2's bytes is just an alias for str, so it has tons of behavior that differ -- why single this out? In Python 3, I really like bytes and bytearray to be as similar as possible, and that includes the constructor. > $ python -c 'print("{!r}\n{!r}".format(bytes(10), b"\x00" * 10))' > '10' > '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' > $ python3 -c 'print("{!r}\n{!r}".format(bytes(10), b"\x00" * 10))' > b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' > b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' > > Hitting the deprecation warning in single-source code would seem to be > a strong hint that you have a bug in one version or the other rather > than being intended behaviour. > > > bytes.zeros(n) sounds fine to me; I value similar interfaces for bytes > and > > bytearray pretty highly. > > With "bytearray(1000)" sticking around indefinitely, I'm less > concerned about adding a "zeros" constructor. > That's fine. > > I'm lukewarm on bytes.byte(c); but bytes([c]) does bother me because a > size > > one list is (or at least feels) more expensive to allocate than a size > one > > bytes object. So, okay. > > So, here's an interesting thing I hadn't previously registered: we > actually already have a fairly capable "bytesview" option, and have > done since Stefan implemented "memoryview.cast" in 3.3. The trick lies > in the 'c' format character for the struct module, which is parsed as > a length 1 bytes object rather than as an integer: > > >>> data = bytearray(b"Hello world") > >>> bytesview = memoryview(data).cast('c') > >>> list(bytesview) > [b'H', b'e', b'l', b'l', b'o', b' ', b'w', b'o', b'r', b'l', b'd'] > >>> b''.join(bytesview) > b'Hello world' > >>> bytesview[0:5] = memoryview(b"olleH").cast('c') > >>> list(bytesview) > [b'o', b'l', b'l', b'e', b'H', b' ', b'w', b'o', b'r', b'l', b'd'] > >>> b''.join(bytesview) > b'olleH world' > > For the read-only case, it covers everything (iteration, indexing, > slicing), for the writable view case, it doesn't cover changing the > shape of the target array, and it doesn't cover assigning arbitrary > buffer objects (you need to wrap them in a similar cast for memoryview > to allow the assignment). 
> > It's hardly the most *intuitive* spelling though - I was one of the > reviewers for Stefan's memoryview rewrite back in 3.3, and I only made > the connection today when looking to see how a view object like the > one we were discussing elsewhere in the thread might be implemented as > a facade over arbitrary memory buffers, rather than being specific to > bytes and bytearray. > Maybe the 'future
Re: [Python-Dev] Bytes path support
The official policy is that we want them to go away, but reality so far has not budged. We will continue to hold our breath though. :-) On Tue, Aug 19, 2014 at 1:37 AM, Serhiy Storchaka wrote: > Built-in open(), io classes, os and os.path functions and some other > functions in the stdlib support bytes paths as well as str paths. But many > functions don't. There are requests about adding this support ([1], [2]) > in some modules. It is easy (just call os.fsdecode() on the argument) but I'm > not sure it is worth doing. Pathlib doesn't support bytes paths and it looks > intentional. What is the general policy about support of bytes paths in the > stdlib? > > [1] http://bugs.python.org/issue19997 > [2] http://bugs.python.org/issue20797 > > ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/ guido%40python.org -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Bytes path support
I'm sorry my moment of levity was taken so seriously. With my serious hat on, I would like to claim that *conceptually* filenames are most definitely text. Due to various historical accidents the UNIX system calls often take encoded text as arguments, and we sometimes need to control that encoding. Hence the occasional need for bytes arguments. But most of the time you don't have to think about that, and forcing users to worry about it is mostly as counter-productive as forcing them to think about the encoding of every text file. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Bytes path support
On Tuesday, August 19, 2014, Stephen J. Turnbull wrote: > Greg Ewing writes: > > Stephen J. Turnbull wrote: > > > > > This case can be handled now using the surrogateescape > > > error handler, > > > > So maybe the way to make bytes paths go away is to always > > use surrogateescape for paths on unix? > > Backward compatibility rules that out, I think. I certainly would > recommend that for new code, but even for new code there are many > users who vehemently object to using Unicode as an intermediate > representation of things they think of as binary blobs. Not worth the > hassle to even seriously propose removing those APIs IMO. But maybe we don't have to add new ones? --Guido -- --Guido van Rossum (on iPad) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
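A small round trip showing the surrogateescape mechanism referred to here, via the os.fsencode()/os.fsdecode() helpers; the exact surrogate shown assumes a POSIX system with a UTF-8 locale::

    import os

    raw = b'caf\xe9.txt'          # a filename that is not valid UTF-8

    name = os.fsdecode(raw)       # undecodable byte smuggled as a surrogate
    print(repr(name))             # 'caf\udce9.txt' under a UTF-8 locale

    # fsencode() reverses the escape, so the original bytes survive the
    # trip through str-based APIs such as open() or os.stat().
    assert os.fsencode(name) == raw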
Re: [Python-Dev] Bytes path support
gingly accepts that there are Windows users out there that might > like to use their software. > > The third approach is the one we tried for a long time with Python 2, > and essentially found to be an "experts only" solution. Yes, you can > *make* it work, but the runtime isn't set up so it works *by default*. > > The Unicode changes in Python 3 are a result of the Python core > development team saying "it really shouldn't be this hard for > application developers to get cross-platform interoperability between > correctly configured systems when dealing solely with correctly > encoded data and metadata". The idea of Python 3 is that applications > should require additional complexity solely to deal with *incorrectly* > configured systems and improperly encoded data and metadata (and, > ideally, the detection of the need for such handling should be "Python > 3 threw an exception" rather than "something further down the line > detected corrupted data"). > > This is software rather than magic, though - these improvements only > happen through people actually knuckling down and solving the related > problems. When folks complain about Python 3's operating system > interface handling causing problems in some situations? They're almost > always referring to areas where we're still relying on the locale > system on POSIX or the code page system on Windows. Both of those > approaches are irredeemably broken - the answer is to stop relying on > them, but appropriately updating the affected subsystems generally > isn't a trivial task. A lot of the affected code runs before the > interpreter is fully initialised, which makes it really hard to test, > and a lot of it is incredibly convoluted due to various configuration > options and platform specific details, which makes it incredibly hard > to modify without breaking anything. > > One of those areas is the fact that we still use the old 8-bit APIs to > interact with the Windows console. Those are just as broken in a > multilingual world as the other Windows 8-bit APIs, so Drekin came up > with a project to expose the Windows console as a UTF-16-LE stream > that uses the 16-bit APIs instead: > https://pypi.python.org/pypi/win_unicode_console > > I personally hope we'll be able to get the issues Drekin references > there resolved for Python 3.5 - if other folks hope for the same > thing, then one of the best ways to help that happen is to try out the > win_unicode_console module and provide feedback on what does and > doesn't work. > > Another was getting exceptions attempting to write OS data to > sys.stdout when the locale settings had been scrubbed from the > environment. For Python 3.5, we better tolerate that situation by > setting "errors=surrogateescape" on sys.stdout when the environment > claims "ascii" as a suitable encoding for talking to the operating > system (this is our way of saying "we don't actually believe you, but > also don't have the data we need to overrule you completely"). > > While I was going to wait for more feedback from Fedora folks before > pushing the idea again, this thread also makes me think it would be > worth our while to add more tools for dealing with surrogate escapes > and latin-1 binary data smuggling just to help make those techniques > more discoverable and accessible: > http://bugs.python.org/issue18814#msg225791 > > These various discussions are also giving me plenty of motivation to > get back to working on PEP 432 (the rewrite of the interpreter startup > sequence) for Python 3.5. 
A lot of these things are just plain hard to > change because of the complexity of the current startup code. > Redesigning that to use a cleaner, multiphase startup sequence that > gets the core interpreter running *before* configuring the operating > system integration should give us several more options when it comes > to dealing with some of these challenges. > > Regards, > Nick. > > -- > Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
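For context, the "surrogateescape" smuggling technique referred to throughout this thread can be shown in a few lines of plain Python 3; undecodable bytes survive a decode/encode round trip as lone surrogates:

    # Bytes that are not valid UTF-8 (here, latin-1 encoded 'café').
    raw = b'caf\xe9'
    # The bad byte is smuggled into the str as the lone surrogate U+DCE9.
    text = raw.decode('utf-8', errors='surrogateescape')
    assert text == 'caf\udce9'
    # Re-encoding with the same error handler restores the original bytes.
    assert text.encode('utf-8', errors='surrogateescape') == raw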
Re: [Python-Dev] Bytes path related questions for Guido
Yes on #1 -- making the low-level functions more usable for edge cases by supporting bytes seems fine (as long as the support for strings, where it exists, is not compromised). The status of pathlib is a little unclear to me -- is there a plan to eventually support bytes or not? For #2 I think you should probably just work with the others you have mentioned. On Sat, Aug 23, 2014 at 9:44 PM, Nick Coghlan wrote: > At Guido's request, splitting out two specific questions from Serhiy's > thread where I believe we could do with an explicit "yes or no" from > him. > > 1. Should we accept patches adding support for the direct use of bytes > paths in lower level filesystem manipulation APIs? (i.e. everything > that isn't pathlib) > > This was Serhiy's original question (due to some open issues [1,2]). I > think the answer is yes, as we already do in some cases, and the > "pathlib doesn't support binary paths" design decision is a high level > platform independent API vs low level potentially platform dependent > API one rather than being about disallowing the use of bytes paths in > general. > > [1] http://bugs.python.org/issue19997 > [2] http://bugs.python.org/issue20797 > > 2. Should we add some additional helpers to the string module for > dealing with surrogate escaped bytes and other techniques for > smuggling arbitrary binary data as text? > > My proposal [3] is to add: > > * string.escaped_surrogates (constant with the 128 escaped code points) > * string.clean(s): replaces surrogates with '\ufffd' or another > specified code point > * string.redecode(s, encoding): encodes a string back to bytes and > then decodes it again using the specified encoding (the old encoding > defaults to 'latin-1' to match the assumptions in WSGI) > > "s != string.clean(s)" would then serve as a check for "does this > string contain any surrogate escaped bytes?" > > [3] http://bugs.python.org/issue18814#msg225791 > > Regards, > Nick. > > -- > Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia > _______ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
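For readers trying to picture the proposal in [3]: the helpers below are only a sketch of the *proposed* semantics (they are not part of the stdlib string module), with names and defaults taken from the message above:

    # Sketch of the helpers proposed in issue 18814 -- not stdlib APIs.
    def clean(s, replacement='\ufffd'):
        # Replace surrogate-escaped bytes (U+DC80..U+DCFF) with a marker,
        # so that "s != clean(s)" answers "does s contain escaped bytes?".
        return ''.join(replacement if '\udc80' <= ch <= '\udcff' else ch
                       for ch in s)

    def redecode(s, encoding, old_encoding='latin-1'):
        # Undo bytes-in-str smuggling: re-encode with the old encoding,
        # then decode again with the intended one (the latin-1 default
        # matches the WSGI assumption mentioned above).
        return s.encode(old_encoding, 'surrogateescape').decode(encoding)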
Re: [Python-Dev] PEP 476: Enabling certificate validation by default!
On Wed, Sep 3, 2014 at 8:58 AM, R. David Murray wrote: > I'm OK with letting go of this invalid-cert issue myself, given the lack > of negative feedback Twisted got. I'll just keep my fingers crossed. > I'm with this sentiment (cautiously +1) -- and not just because of Twisted's experience or Glyph's passion. Network security is much more important now than it was five years ago -- and yet Python 2.7 is at least that old. My own experience has changed a lot: five years ago (when I worked at Google!) it was common to find internal services that required SSL but had a misconfigured certificate, and the only way to access those services was to override the browser complaints. Today (working at Dropbox, a much smaller company!) I don't even remember the last time I had to deal with such a browser complaint -- internal services here all redirect to SSL, and not a browser that can find fault with their certs. If I did get a complaint about a certificate I would fire off an email to a sysadmin alerting them to the issue. Let's take the plunge on this issue for the next 2.7 release (3.5 being a done deal). Yes, some people will find that they have an old script accessing an old service which breaks. Surely some of the other changes in the same 2.7 bugfix release will also break some other scripts. People deal with it. Probably 90% of the time it's an annoyance (but no worse than any other minor-release upgrade -- you should test upgrades before committing to them, and if all else fails, roll it back). But at least some of the time it will be a wake-up call and an expired certificate will be replaced, resulting in more security for all. I don't want to start preaching security doom and gloom (the experts are doing enough of that :-), but the scale and sophistication of attacks (whether publicized or not) is constantly increasing, and routine maintenance checks on old software are just one of the small ways that we can help the internet become more secure. (And please let the PSF sysadmin team beef up *.python.org -- sooner or later some forgotten part of our infrastructure *will* come under attack.) -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 476: Enabling certificate validation by default!
Antoine, I think we are well past the point where arguments can sway positions. There clearly is no agreement on this issue. So please treat my post as a BDFL tie-breaker. I will just give you one thing to ponder -- those small/non-profit websites that can't afford proper certs are exactly the ones that will be hosting malware soon. Sorry for them, and the certificate vendors certainly aren't in it for charity, but they must fix their certificate issues (and probably improve many other sysadmin practices). --Guido On Wed, Sep 3, 2014 at 11:37 AM, Antoine Pitrou wrote: > On Wed, 3 Sep 2014 10:54:55 -0700 > Guido van Rossum wrote: > > > > Let's take the plunge on this issue for the next 2.7 release (3.5 being a > > done deal). > > I'm entirely against this. > > > Yes, some people will find that they have an old script > > accessing an old service which breaks. Surely some of the other changes > in > > the same 2.7 bugfix release will also break some other scripts. People > deal > > with it. Probably 90% of the time it's an annoyance (but no worse than > any > > other minor-release upgrade -- you should test upgrades before committing > > to them, and if all else fails, roll it back). > > Python is routinely updated to bugfix releases by Linux distributions > and other distribution channels, you usually have no say over what's > shipped in those updates. This is not like changing the major version > used for executing the script, which is normally a manual change. > > > Today (working at Dropbox, a much smaller company!) I don't > > even remember the last time I had to deal with such a browser > > complaint -- internal services here all redirect to SSL, and not a > > browser that can find fault with their certs. > > Good for you. I still sometimes get warnings about expired certificates > - and sometimes ones that don't exactly match the domain being > fetched (for example, the certificate wouldn't be valid for that > specific subdomain - note that CAs often charge a premium for multiple > subdomains, which why small or non-profit Web sites sometimes skimp on > them). > > You shouldn't assume that the experience of well-connected people in > the Silicon Valley is representative of what people over the world > encounter. Yes, where there's a lot of money and a lot of accumulated > domain competence, security procedures are updated and followed more > scrupulously... > > > But at least some of the > > time it will be a wake-up call and an expired certificate will be > replaced, > > resulting in more security for all. > > Only if you are actually the one managing that certificate and the > machine it's installed one... > > Regards > > Antoine. > > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 476: Enabling certificate validation by default!
OK, that changes my position for 2.7 (but not for 3.5). I had assumed there was a way to disable the cert check by changing one parameter to the urlopen() call. (And I had wanted to add that there should be a clear FAQ about the subject.) If this isn't possible that changes the situation. (But I still think that once we do have that simple change option we should do it, in a later 2.7 upgrade.) I apologize for speaking before I had read all facts, and I'll await what you and Nick come up with. --Guido On Wed, Sep 3, 2014 at 12:26 PM, Christian Heimes wrote: > On 03.09.2014 19:54, Guido van Rossum wrote: > > Let's take the plunge on this issue for the next 2.7 release (3.5 being > > a done deal). Yes, some people will find that they have an old script > > accessing an old service which breaks. Surely some of the other changes > > in the same 2.7 bugfix release will also break some other scripts. > > People deal with it. Probably 90% of the time it's an annoyance (but no > > worse than any other minor-release upgrade -- you should test upgrades > > before committing to them, and if all else fails, roll it back). But at > > least some of the time it will be a wake-up call and an expired > > certificate will be replaced, resulting in more security for all. > > I'm +1 for Python 3.5 but -1 for Python 2.7. > > The SSLContext backport will landed in Python 2.7.9 (to be released). No > Python 2 user is familiar with the feature yet. But more importantly: > None of the stdlib modules support the new feature, too. httplib, > imaplib ... they all don't take a SSLContext object as an argument. > PEP-466 does not include the backport for the network modules. Without > the context argument there is simply no clean way to configure the SSL > handshake properly. > > The default settings must stay until we decide to backport the context > argument and have a way to configure the default behavior. Nick and me > are planing a PEP. > > Christian > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Proposed schedule for 3.4.2
On Mon, Sep 8, 2014 at 5:08 AM, Nick Coghlan wrote: > > It would also be good to get Guido's official verdict on PEP 476 (the > switch to validating HTTPS by default) in time for 3.4.2. Based on the > previous discussion, Alex updated the PEP to suggest "just fix it" for > all of 3.5, 3.4.x and 2.7.x (including the httplib SSLContext support > backport in the latter case). > My opinion hasn't changed since the last time I opened my mouth prematurely. :-) I would very much like these to go in, but for 2.7 I am now worried about what we should tell people who have a script that uses an https URL to access a service that can only be accessed via SSL/TLS to a self-signed or otherwise mis-configured cert. I am not insisting on an environment variable to disable this (too easy) but I do think it must be possible to make a simple change to the code, on the order of tracking down the urlopen() call and adding a new keyword parameter. Such a band-aid needn't be backward compatible (we can introduce a new keyword parameter for this purpose) and it needn't be totally straightforward (we can assume some modicum of understanding of finding and editing .py files) but it should definitely not require a refactor of the script (e.g. swapping out urlopen and replacing it with httplib or requests would be too much of a burden). And we should have prominent documentation (perhaps in FAQ form?) with an example of how to do it. > I think that would be feasible with an rc on the 20th, but challenging > if the rc is this coming weekend. > > Note, as I stated in the previous thread, I'm now +1 on that PEP, > because I don't see any way to write an automated scan for a large > code base that ensures we're not relying on the default handling at > all. If the default behaviour is to validate HTTPS, the lack of such a > scanner isn't a problem - any failures to cope with self-signed or > invalid certs will be noisy, and we can either fix the certs, patch > the code to cope with them appropriately, or (for the self-signed cert > case) configure OpenSSL via environment variables. If dealing with > invalid certs is truly necessary, and the code can't be updated > either, then I'm OK with the answer being "keep using an older version > of Python, as that's going to be the least of your security concerns". > Yeah, I am not interested in helping out the case where the user is incapable (for whatever reason) of tracking down and changing a couple of lines of code. Such users are dependent on someone else with wizard powers anyway (who gave them the script?). -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Proposed schedule for 3.4.2
I will pronounce for 3.4 once you point me to the documentation that explains how to disable cert validation for an example program that currently pulls down an https URL using urlopen. Without adding package dependencies. On Mon, Sep 8, 2014 at 10:25 AM, Alex Gaynor wrote: > Guido van Rossum python.org> writes: > > > > > > > Would you be willing to officially pronounce on PEP-476 in the context of > 3.4.x, > so we can get it into the release, and then we can defer on officially > approving > it for 2.7.X until we figure out all the moving pieces? > > Cheers, > Alex > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Proposed schedule for 3.4.2
Well, get cracking then! :-) On Mon, Sep 8, 2014 at 10:44 AM, Alex Gaynor wrote: > *Shifts uncomfortably* it looks like presently there's not a good way to > change anything about the SSL configuration for urllib.request.urlopen. It > does not take a `context` argument, as the http.client API does: > https://docs.python.org/3/library/urllib.request.html#module-urllib.request > and instead takes the cafile, capath, cadefault args. > > This would need to be updated first, once it *did* take such an argument, > this would be accomplished by: > > context = ssl.create_default_context() > context.verify_mode = CERT_OPTIONACERT_NONE > context.verify_hostname = False > urllib.request.urlopen(" > https://something-i-apparently-dont-care-much-about";, context=context) > > Alex > > > On Mon, Sep 8, 2014 at 10:35 AM, Guido van Rossum > wrote: > >> I will pronounce for 3.4 once you point me to the documentation that >> explains how to disable cert validation for an example program that >> currently pulls down an https URL using urlopen. Without adding package >> dependencies. >> >> On Mon, Sep 8, 2014 at 10:25 AM, Alex Gaynor >> wrote: >> >>> Guido van Rossum python.org> writes: >>> >>> > >>> > >>> >>> Would you be willing to officially pronounce on PEP-476 in the context >>> of 3.4.x, >>> so we can get it into the release, and then we can defer on officially >>> approving >>> it for 2.7.X until we figure out all the moving pieces? >>> >>> Cheers, >>> Alex >>> >>> ___________ >>> Python-Dev mailing list >>> Python-Dev@python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: >>> https://mail.python.org/mailman/options/python-dev/guido%40python.org >>> >> >> >> >> -- >> --Guido van Rossum (python.org/~guido) >> > > > > -- > "I disapprove of what you say, but I will defend to the death your right > to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) > "The people's good is the highest law." -- Cicero > GPG Key fingerprint: 125F 5C67 DFE9 4084 > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
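For comparison with the sketch above: once urlopen() did grow a context parameter (Python 3.4.3 and later), the per-call opt-out looks roughly like this. Note that on a real SSLContext the attribute is check_hostname and the constant is ssl.CERT_NONE; the URL is just a placeholder:

    import ssl
    import urllib.request

    # Hand-built context with verification disabled (check_hostname must be
    # turned off before verify_mode can be set to CERT_NONE).
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    urllib.request.urlopen("https://something-i-apparently-dont-care-much-about",
                           context=context)

    # The shortcut that eventually shipped alongside PEP 476 does the same:
    urllib.request.urlopen("https://something-i-apparently-dont-care-much-about",
                           context=ssl._create_unverified_context())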
Re: [Python-Dev] Proposed schedule for 3.4.2
I still prefer having a parameter on urlopen (or thereabouts) -- it feels wrong to make it easier to change this globally than on a per-call basis, and if you don't understand monkey-patching, it's impossible to debug if you put the patch in the wrong place. For the poor soul who has a script with many urlopen("https"//") calls, well, they probably don't mind the busywork of editing each and every one of them. I'm fine with giving the actual keyword parameter a scary-sounding ugly name. On Mon, Sep 8, 2014 at 3:48 PM, Donald Stufft wrote: > > On Sep 8, 2014, at 6:43 PM, Nick Coghlan wrote: > > > On 9 Sep 2014 08:30, "Donald Stufft" wrote: > > > > If someone wants to do this, can’t they write their own 6 line function? > > Unfortunately not, as the domain knowledge required to know what those six > lines should look like is significant. > > Keeping the old unsafe behaviour around with a more obviously dangerous > name is much simpler than explaining to people "Here, copy this chunk of > code you don't understand". > > If we were starting with a blank slate there's no way we'd offer such a > thing, but as Jim pointed out, we do want to make it relatively easy for > Standard Operating Environment maintainers to hack around it if necessary. > > Cheers, > Nick. > > > > > import ssl > > import urllib.request > > _real_urlopen = urllib.request.urlopen > > def _unverified(*args, **kwargs): > > if not kwargs.keys() & {“context”, “cafile”, “capath”, “cadefault”}: > > ctx = ssl.create_default_context() > > ctx.verify_mode = CERT_NONE > > ctx.verify_hostname = False > > kwargs[“context”] = ctx > > return _real_urlopen(*args, **kwargs) > > > > --- > > Donald Stufft > > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > > > Why isn’t documentation with appropriate red warnings a suitable place if > we really must have it? That sounds like a much better solution that some > weird function people monkeypatch. It gives them more control over things > (maybe they have a valid certificate chain, but an invalid host name!), > it’ll work across all Python implementations, and most importantly, it > gives us a place where there is some long form location to be like “yea you > really probably don’t want to be doing this” in big red letters. > > Overall I’m -1 on either offering the function or documenting it at all, > but if we must do something then I think documentation is more than enough. > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Proposed schedule for 3.4.2
Replacing urllib.urlopen(url) with urllib._unsafe_urlopen_without_secure_https(url) would be fine too (actual name to be picked by whoever writes the code) but I don't see that it offers much more of a barrier against abuse of this compatibility feature compared to a keyword argument. Requiring a monkeypatch feels unnecessarily mean -- I see no reason why the code can't be in the standard library. It's a bit like the emergency hammer on a train -- what keeps riders from misusing it is convention (and the sign next to it), since locking it up would miss the point. Do note that there are a couple of different common patterns for how this is used in legacy code, e.g. urllib vs.urllib2, URLOpener vs FancyURLOpener, urlopen vs. urlretrieve; there are also some internal calls, e.g. in response to redirects. The ultimate form of the solution (keyword argument of alternate function or whatever) may depend on the needs of these various (ancient) architectures. Regarding 3.4 and 3.5, there's presumably much less legacy code for 3.4, but its expected lifetime is also much shorter than 2.7's (since we're already close to releasing 3.5). So I'm still a bit torn -- in the end one reason to do it in 3.4 is that 3.4 shouldn't have a weaker default than 2.7. Onwards, On Mon, Sep 8, 2014 at 7:46 PM, Glenn Linderman wrote: > Well, this thread seems to be top-posted so... > > Why not provide _urlopen_with_scary_keyword_parameter as the monkey-patch > option? > > So after the (global to the module) monkeypatch, they would _still_ have > to add the keyword parameter. > > > > On 9/8/2014 4:31 PM, Guido van Rossum wrote: > > I still prefer having a parameter on urlopen (or thereabouts) -- it feels > wrong to make it easier to change this globally than on a per-call basis, > and if you don't understand monkey-patching, it's impossible to debug if > you put the patch in the wrong place. > > For the poor soul who has a script with many urlopen("https"//") > calls, well, they probably don't mind the busywork of editing each and > every one of them. > > I'm fine with giving the actual keyword parameter a scary-sounding ugly > name. > > On Mon, Sep 8, 2014 at 3:48 PM, Donald Stufft wrote: > >> >> On Sep 8, 2014, at 6:43 PM, Nick Coghlan wrote: >> >> >> On 9 Sep 2014 08:30, "Donald Stufft" wrote: >> > >> > If someone wants to do this, can’t they write their own 6 line function? >> >> Unfortunately not, as the domain knowledge required to know what those >> six lines should look like is significant. >> >> Keeping the old unsafe behaviour around with a more obviously dangerous >> name is much simpler than explaining to people "Here, copy this chunk of >> code you don't understand". >> >> If we were starting with a blank slate there's no way we'd offer such a >> thing, but as Jim pointed out, we do want to make it relatively easy for >> Standard Operating Environment maintainers to hack around it if necessary. >> >> Cheers, >> Nick. >> >> > >> > import ssl >> > import urllib.request >> > _real_urlopen = urllib.request.urlopen >> > def _unverified(*args, **kwargs): >> > if not kwargs.keys() & {“context”, “cafile”, “capath”, “cadefault”}: >> > ctx = ssl.create_default_context() >> > ctx.verify_mode = CERT_NONE >> > ctx.verify_hostname = False >> > kwargs[“context”] = ctx >> > return _real_urlopen(*args, **kwargs) >> > >> > --- >> > Donald Stufft >> > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA >> > >> >> >> Why isn’t documentation with appropriate red warnings a suitable place >> if we really must have it? 
That sounds like a much better solution that >> some weird function people monkeypatch. It gives them more control over >> things (maybe they have a valid certificate chain, but an invalid host >> name!), it’ll work across all Python implementations, and most importantly, >> it gives us a place where there is some long form location to be like “yea >> you really probably don’t want to be doing this” in big red letters. >> >> Overall I’m -1 on either offering the function or documenting it at >> all, but if we must do something then I think documentation is more than >> enough. >> > > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Backwards compatibility after certificate autovalidation
The multiple threads going on are confusing (or maybe GMail makes them more confusing), but the architecture you are sketching here sounds good. I can't find get_default_context() in the repo, but perhaps I need to refresh, or perhaps you're talking about a design in a PEP. On Mon, Sep 8, 2014 at 8:03 PM, Nick Coghlan wrote: > > On 9 Sep 2014 10:48, "Jim J. Jewett" wrote: > > I assume that adding _unverified_urlopen or urlopen(context=...) do > > provide incremental improvements compatible with the eventual full > > opt-in. If so, adding them is probably reasonable, but I think the > > PEP should explicitly list all such approved half-measures as a guard > > against API feature creep. > > From Guido's and your feedback, I think we may need two things to approve > this for 3.4.2 (putting 2.7 aside for now): > > 1. "context" parameter support in urllib.request (to opt out on a per-call > basis) > 2. a documented way to restore the old behaviour via sitecustomize (which > may involve monkeypatching) > > The former change seems non-controversial. > > I think the more fine-grained solution for the latter can wait until 3.5 > (and will be an independent PEP), we just need an interim workaround for > 3.4 that could conceivably be backported to 2.7. > > On that front, ssl currently has two context factories: > get_default_context() and _get_stdlib_context. One possible option would be > to: > > 1. Rename "_get_stdlib_context" to "_get_unverified_context" > 2. Add "_get_https_context" as an alias for "get_default_context" > > Opting out on a per-call basis means passing an unverified context. > > Opting out globally would mean monkeypatching _get_https_context to refer > to _get_unverified_context instead. > > These would both be documented as part of transition, but with clear > security warnings. The use of the leading underscores in the names is > intended to emphasise "you probably don't want to be using this". > > Regards, > Nick. > > > > > > -jJ > > ___ > > Python-Dev mailing list > > Python-Dev@python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/ncoghlan%40gmail.com > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Suggested changes to verify HTTPS by default (was Re: Proposed schedule for 3.4.2)
I'm going to leave the design up to Nick and friends for a while. Let me know when there is a patch to review. On Tue, Sep 9, 2014 at 3:52 AM, Nick Coghlan wrote: > On 9 September 2014 03:44, Alex Gaynor wrote: > > *Shifts uncomfortably* it looks like presently there's not a good way to > > change anything about the SSL configuration for urllib.request.urlopen. > It > > does not take a `context` argument, as the http.client API does: > > > https://docs.python.org/3/library/urllib.request.html#module-urllib.request > > and instead takes the cafile, capath, cadefault args. > > > > This would need to be updated first, once it *did* take such an argument, > > this would be accomplished by: > > > > context = ssl.create_default_context() > > context.verify_mode = CERT_OPTIONACERT_NONE > > context.verify_hostname = False > > urllib.request.urlopen(" > https://something-i-apparently-dont-care-much-about";, > > context=context) > > I'd never needed to use the existing global configuration settings in > urllib.request before, but it actually *does* already support setting > the default opener for urllib.urlopen. > > To explicitly set it to use verified HTTPS by default: > > import ssl, urllib.request > https_handler = HTTPSHandler(context=ssl.create_default_context(), > check_hostname=True) > > urllib.request.install_opener(urllib.request.build_opener(https_handler) > > When the default changes, turning off verification by default for > urllib.request.urlopen would look like: > > import ssl, urllib.request > unverified_context = ssl.create_default_context() > unverified_context.verify_mode = CERT_OPTIONACERT_NONE > unverified_context.verify_hostname = False > unverified_handler = HTTPSHandler(context=unverified_context, > check_hostname=False) > > urllib.request.install_opener(urllib.request.build_opener(unverified_handler) > > However, even setting the opener like that still leaves > http.client.HTTPSConnection, urllib.request.URLOpener and > urllib.request.FancyURLOpener using unverified HTTPS with no easy way > to change their default behaviour. > > That means some other approach to global configuration is going to be > needed to cover the "coping with legacy corporate infrastructure" > case, and I still think a monkeypatching based hack is likely to be > our best bet. > > So, focusing on 3.4, but in a way that should be feasible to backport, > the changes that I now believe would be needed are: > > 1. Add "context" arguments to urlopen, URLOpener and FancyURLOpener > (the latter two have been deprecated since 3.3, but this would make > things easier for a subsequent 2.7 backport) > 2. Add a ssl._create_https_context() alias for ssl.create_default_context() > 3. Change urllib.request.urlopen() and http.client.HTTPSConnection to > call ssl_create_https_context() rather than > ssl._create_stdlib_context() > 4. 
Rename ssl._create_stdlib_context() to > ssl._create_unverified_context() (updating call sites accordingly) > > To revert any given call site to the old behaviour: > > http.client.HTTPSConnection(context=ssl._create_unverified_context()) > urllib.request.urlopen(context=ssl._create_unverified_context()) > urllib.request.URLOpener(context=ssl._create_unverified_context()) > urllib.request.FancyURLOpener(context=ssl._create_unverified_context()) > > And to revert to the old default behaviour globally: > > import ssl > ssl._create_https_context = ssl._create_unverified_context > > The backport to 2.7 would then be a matter of bringing urllib, > urllib2, httplib and ssl into line with their 3.4.2 counterparts. > > Regards, > Nick. > > -- > Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
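Filled out with the positional host argument that the per-call-site list above leaves off (the host name here is a placeholder, and ssl._create_unverified_context assumes 3.4.3 or later), one of those reversions would look like:

    import ssl
    import http.client

    # Talk HTTPS to a single host without certificate verification.
    conn = http.client.HTTPSConnection("self-signed.example",
                                       context=ssl._create_unverified_context())
    conn.request("GET", "/")
    print(conn.getresponse().status)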
Re: [Python-Dev] List insert at index that is well out of range - behaves like append
This functionality has existed since the earliest days of Python, and even if we all agreed it was wrong we couldn't change it -- it would just break too much existing code. I can't quite remember why I did it that way but it was definitely a conscious choice; probably some symmetry or edge case. (Note that it works this way at the other end too -- a.insert(-100, x) will insert x at the beginning of a, if a has fewer than 100 elements.) On Mon, Sep 15, 2014 at 3:29 PM, Mark Shannon wrote: > > > On 15/09/14 12:31, Tal Einat wrote: > >> On Mon, Sep 15, 2014 at 6:18 AM, Harish Tech >> wrote: >> >>> I had a list >>> >>> a = [1, 2, 3] >>> >>> when I did >>> >>> a.insert(100, 100) >>> >>> [1, 2, 3, 100] >>> >>> as list was originally of size 4 and I was trying to insert value at >>> index >>> 100 , it behaved like append instead of throwing any errors as I was >>> trying >>> to insert in an index that did not even existed . >>> >>> >>> Should it not throw >>> >>> >>> IndexError: list assignment index out of range >>> >>> >>> exception as it throws when I attempt doing >>> >>> >>> a[100] = 100 >>> >>> Question : 1. Any idea Why has it been designed to silently handle this >>> instead of informing the user with an exception ? >>> >>> >>> Personal Opinion : Lets see how other dynamic languages behave in such a >>> situation : Ruby : >>> >>> >>> > a = [1, 2] >>> >>> > a[100] = 100 >>> >>> > a >>> >>> => [1, 2, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, >>> nil, >>> nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, >>> nil, >>> nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, >>> nil, >>> nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, >>> nil, >>> nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, >>> nil, >>> nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, >>> nil, >>> nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, 100] >>> >>> The way ruby handles this is pretty clear and sounds meaningful (and >>> this is >>> how I expected to behave and it behaved as per my expectation) at least >>> to >>> me . So what I felt was either it should throw exception or do the way >>> ruby >>> handles it . >>> >>> >>> Is ruby way of handling not the obvious way ? >>> >>> I even raised it in stackoverflow >>> http://stackoverflow.com/questions/25840177/list- >>> insert-at-index-that-is-well-out-of-range-behaves-like-append >>> >>> and got some responses . >>> >> >> Hello Harish, >> >> The appropriate place to ask questions like this is python-list [1], >> or perhaps Stack Overflow. >> > > I think this is an OK forum for this question. > If someone isn't sure if something is a bug or not, then why not ask here > before reporting it on the bug tracker? > > This does seem strange behaviour, and the documentation for list.insert > gives no clue as to why this behaviour was chosen. > > Cheers, > Mark. > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] List insert at index that is well out of range - behaves like append
On Mon, Sep 15, 2014 at 3:46 PM, Mark Lawrence wrote: > > I assume it's based on the concepts of slicing. From the docs > "s.insert(i, x) - inserts x into s at the index given by i (same as s[i:i] > = [x])". Ah, right. It matches things like s[100:] which is the empty string if s is shorter than 100. > Although shouldn't that read s[i:i+1] = [x] ? > Should've stopped while you were ahead. :-) 'Nuff said. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
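A quick interactive session showing both the clamping behaviour and the slice equivalence quoted from the docs:

    >>> a = [1, 2, 3]
    >>> a.insert(100, 100)    # index clamped to the end, same as a[100:100] = [100]
    >>> a
    [1, 2, 3, 100]
    >>> a.insert(-100, 0)     # and clamped to the beginning
    >>> a
    [0, 1, 2, 3, 100]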
Re: [Python-Dev] PEP 394 - Clarification of what "python" command should invoke
"python" should always be the same as "python2". On Fri, Sep 19, 2014 at 8:03 AM, Steven D'Aprano wrote: > On Fri, Sep 19, 2014 at 10:41:58AM -0400, Barry Warsaw wrote: > > On Sep 19, 2014, at 10:23 AM, Donald Stufft wrote: > > > > >My biggest problem with ``python3``, is what happens after 3.9. > > > > FWIW, 3.9 by my rough calculation is 7 years away. > > That makes it 2021, one year after Python 2.7 free support ends, but two > years before Red Hat commercial support for it ends. > > > I seem to recall Guido saying that *if* there's a 4.0, it won't be a > major > > break like Python 3, whatever that says about the numbering scheme after > 3.9. > > > > Is 7 years enough to eradicate Python 2 the way we did for Python 1? > Then > > maybe Python 4 can reclaim /usr/bin/python. > > I expect not quite. Perhaps 10 years though. > > > > -- > Steven > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 394 - Clarification of what "python" command should invoke
On Sep 19, 2014 8:36 AM, "Antoine Pitrou" wrote: > > On Fri, 19 Sep 2014 08:20:48 -0700 > Guido van Rossum wrote: > > "python" should always be the same as "python2". > > "Always" as in "eternally"? Until I say so. Which will happen in the distant future. ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP476: Enabling certificate validation by default
The PEP doesn't specify any of the API changes for Python 2.7. I feel it is necessary for the PEP to show a few typical code snippets using urllib in Python 2.7 and how one would modify these in order to disable the cert checking. There are also a few typos; especially this paragraph puzzled me: This will be acheived by adding a new ``ssl._create_default_https_context`` function, which is the same as ``ssl.create_default``. ``http.client`` can then replace it's usage of ``ssl._create_stdlib_context`` with the new ``ssl._create_default_https_context``. (1) spelling: it's achieved, not acheived (2) method name: it's ssl.create_default_context, not ssl.create_default (3) There's not enough whitespace (in the rendered HTML on legacy.python.org) before http.client -- I kept reading it as "... which is the same as ssl.create_default.http.client ..." (4) There's no mention of the Python 2 equivalent of http.client. Finally, it's kind of non-obvious in the PEP that this affects Python 2.7.X (I guess the one after the next) as well as 3.4 and 3.5. On Fri, Sep 19, 2014 at 9:53 AM, Alex Gaynor wrote: > Hi all, > > I've just updated the PEP to reflect the API suggestions from Nick, and the > fact that the necessary changes to urllib were landed. > > I think this is ready for pronouncement, Guido? > > Cheers, > Alex > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
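The kind of before/after snippet being asked for here, written against the API that eventually shipped in 2.7.9, would look roughly like this (a sketch only, with a placeholder URL; urllib2.urlopen gained the context parameter as part of the backport):

    import ssl
    import urllib2

    # Before 2.7.9: no certificate verification happened by default.
    urllib2.urlopen("https://self-signed.example/")

    # 2.7.9 and later: verification is on by default; opting out for one call:
    urllib2.urlopen("https://self-signed.example/",
                    context=ssl._create_unverified_context())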
Re: [Python-Dev] PEP476: Enabling certificate validation by default
+1 on Nick's suggestion. (Might also mention that this is the reason why both functions should exist and have compatible signatures.) Also please, please, please add explicit mention of Python 2.7, 3.4 and 3.5 in the Abstract (for example in the 3rd paragraph of the abstract). On Fri, Sep 19, 2014 at 3:52 PM, Nick Coghlan wrote: > On 20 September 2014 08:34, Alex Gaynor wrote: > > Pushed a new version which I believe adresses all of these. I added an > > example of opting-out with urllib.urlopen, let me know if there's any > other > > APIs you think I should show an example with. > > It would be worth explicitly stating the process global monkeypatching > hack: > > import ssl > ssl._create_default_https_context = ssl._create_unverified_context > > Adding that hack to sitecustomize allows corporate sysadmins that can > update their standard operating environment more easily than they can > fix invalid certificate infrastructure to work around the problem on > behalf of their users. It also helps out users that will be able to > deal with such broken infrastructure without updating each and every > one of their scripts. > > It's deliberately ugly because it's a genuinely bad idea that folks > should want to avoid using, but as a matter of practical reality, > corporate IT departments are chronically understaffed, and often fully > committed to fighting the crisis du jour, without sufficient time > being available for regular infrastructure maintenance tasks. > > Regards, > Nick. > > -- > Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP476: Enabling certificate validation by default
Nice. I just realized the release candidate for 3.4.2 is really close (RC1 Monday, final Oct 6, see PEP 429). What's your schedule for 3.4? I see no date for 2.7.9 yet (but that could just be that PEP 373 hasn't been updated). What about the Apple and Microsoft issues Christian pointed out? Regarding the approval process, I want to get this into 2.7 and 3.4, but I want it done right, and I'm not convinced that the implementation is sufficiently worked out. I don't want you to feel rushed, and I don't want you to feel that you can't start coding until the PEP is approved, but I also feel that I want to see more working code and some beta testing before it goes live. Perhaps I should just approve the PEP but separately get to approve the code? (Others will have to review it for correctness -- but I want to understand and review the API.) On Sat, Sep 20, 2014 at 8:54 AM, Alex Gaynor wrote: > Done and done. > > Alex > > On Fri, Sep 19, 2014 at 4:13 PM, Guido van Rossum > wrote: > >> +1 on Nick's suggestion. (Might also mention that this is the reason why >> both functions should exist and have compatible signatures.) >> >> Also please, please, please add explicit mention of Python 2.7, 3.4 and >> 3.5 in the Abstract (for example in the 3rd paragraph of the abstract). >> >> On Fri, Sep 19, 2014 at 3:52 PM, Nick Coghlan wrote: >> >>> On 20 September 2014 08:34, Alex Gaynor wrote: >>> > Pushed a new version which I believe adresses all of these. I added an >>> > example of opting-out with urllib.urlopen, let me know if there's any >>> other >>> > APIs you think I should show an example with. >>> >>> It would be worth explicitly stating the process global monkeypatching >>> hack: >>> >>> import ssl >>> ssl._create_default_https_context = ssl._create_unverified_context >>> >>> Adding that hack to sitecustomize allows corporate sysadmins that can >>> update their standard operating environment more easily than they can >>> fix invalid certificate infrastructure to work around the problem on >>> behalf of their users. It also helps out users that will be able to >>> deal with such broken infrastructure without updating each and every >>> one of their scripts. >>> >>> It's deliberately ugly because it's a genuinely bad idea that folks >>> should want to avoid using, but as a matter of practical reality, >>> corporate IT departments are chronically understaffed, and often fully >>> committed to fighting the crisis du jour, without sufficient time >>> being available for regular infrastructure maintenance tasks. >>> >>> Regards, >>> Nick. >>> >>> -- >>> Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia >>> >> >> >> >> -- >> --Guido van Rossum (python.org/~guido) >> > > > > -- > "I disapprove of what you say, but I will defend to the death your right > to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) > "The people's good is the highest law." -- Cicero > GPG Key fingerprint: 125F 5C67 DFE9 4084 > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP476: Enabling certificate validation by default
Sounds good. Maybe we should put the specifically targeted releases in PEP 476? Nick, do Christian's issues need to be mentioned in the PEP or should we just keep those in the corresponding tracker items? On Sat, Sep 20, 2014 at 3:05 PM, Nick Coghlan wrote: > On 21 September 2014 03:05, Alex Gaynor wrote: > > That sounds reasonable to me -- at this point I don't expect this to > make it > > into 3.4.2; Nick has some working code on the ticket: > > http://bugs.python.org/issue22417 it's mostly missing documentation. > > I also think it's more sensible to target 2.7.9 & 3.4.3 for this > change, especially given the remaining rough edges in custom trust > database configuration on WIndows and Mac OS X that Christian pointed > out in http://bugs.python.org/issue22449 > > I don't believe Benjamin has picked a specific date for 2.7.9 yet, but > the regular maintenance release cadence (ignoring security releases) > would put it some time in November, which should be sufficient time to > get the remaining issues ironed out for 3.5 under the normal > development process, and then included under the banner of PEP 476 for > backporting to the maintenance branches. > > Regards, > Nick. > > -- > Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP476: Enabling certificate validation by default
OK, I'll hold off a bit on approving the PEP, but my intention is to approve it. Go Alex go! On Sat, Sep 20, 2014 at 4:03 PM, Nick Coghlan wrote: > On 21 September 2014 08:22, Guido van Rossum wrote: > > Sounds good. Maybe we should put the specifically targeted releases in > PEP > > 476? > > > > Nick, do Christian's issues need to be mentioned in the PEP or should we > > just keep those in the corresponding tracker items? > > They should be mentioned in the PEP, as they will impact the way the > proposed change interacts with the platform trust database - I didn't > realise the differences on Windows and Mac OS X myself until Christian > mentioned them. > > To be completely independent of the system trust database in a > reliable, cross-platform way, folks will need to use a custom SSL > context that doesn't enable the system trust store, rather than > relying on the OpenSSL config options - the latter will reliably *add* > certificates, but they won't reliably ignore the default ones provided > by the system. > > We may also need some clarification from Ned regarding the status of > OpenSSL and the potential impact switching from dynamic linking to > static linking of OpenSSL may have in terms of the > "OPENSSL_X509_TEA_DISABLE" setting. > > Regards, > Nick. > > -- > Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Backporting ensurepip to 2.7, Which commands to install?
That is copying the (alt)install targets of Python's own Makefile, and I think those are exactly right. On Oct 3, 2014 3:07 PM, "Donald Stufft" wrote: > I'm working on the backport of ensurepip to Python 2.7, and I realized that > I'm not sure which commands to install. Right now by default pip (outside > of > the context of ensurepip) will install pip, pip2, and pip2.7 if installed > in > Python 2.7. In Python 3's ensurepip we modified it so that it would install > pip3, and pip3.4, but *not* pip if it was an "install", and only pip3.4 if > it > was an "alt install". > > My question is, does this behavior make sense for ensurepip in 2.7? Or > should > it also install the "pip" command if it is an "install"? > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Backporting ensurepip to 2.7, Which commands to install?
That's not what I meant. Python 2.7 does install "python" unless you use altinstall. On Oct 3, 2014 5:33 PM, "Donald Stufft" wrote: > Ok, so neither Python 2.7 nor Python 3.x’s ensure pip command will install > a > ``pip`` binary by default without a flag. That's fine with me, just wanted > to > make sure it made sense for Python 2.x. Thanks! > > On Oct 3, 2014, at 8:31 PM, Guido van Rossum wrote: > > That is copying the (alt)install targets of Python's own Makefile, and I > think those are exactly right. > On Oct 3, 2014 3:07 PM, "Donald Stufft" wrote: > >> I'm working on the backport of ensurepip to Python 2.7, and I realized >> that >> I'm not sure which commands to install. Right now by default pip (outside >> of >> the context of ensurepip) will install pip, pip2, and pip2.7 if installed >> in >> Python 2.7. In Python 3's ensurepip we modified it so that it would >> install >> pip3, and pip3.4, but *not* pip if it was an "install", and only pip3.4 >> if it >> was an "alt install". >> >> My question is, does this behavior make sense for ensurepip in 2.7? Or >> should >> it also install the "pip" command if it is an "install"? >> >> --- >> Donald Stufft >> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA >> >> ___ >> Python-Dev mailing list >> Python-Dev@python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/guido%40python.org >> > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Backporting ensurepip to 2.7, Which commands to install?
Yes. On Friday, October 3, 2014, Donald Stufft wrote: > Whoops, I misred. > > So to be clear, you think: > > install -> pip, pip2, pip2.7 > altinstall -> pip2.7 > > On Oct 3, 2014, at 8:46 PM, Guido van Rossum > wrote: > > That's not what I meant. Python 2.7 does install "python" unless you use > altinstall. > On Oct 3, 2014 5:33 PM, "Donald Stufft" > wrote: > >> Ok, so neither Python 2.7 nor Python 3.x’s ensure pip command will >> install a >> ``pip`` binary by default without a flag. That's fine with me, just >> wanted to >> make sure it made sense for Python 2.x. Thanks! >> >> On Oct 3, 2014, at 8:31 PM, Guido van Rossum > > wrote: >> >> That is copying the (alt)install targets of Python's own Makefile, and I >> think those are exactly right. >> On Oct 3, 2014 3:07 PM, "Donald Stufft" > > wrote: >> >>> I'm working on the backport of ensurepip to Python 2.7, and I realized >>> that >>> I'm not sure which commands to install. Right now by default pip >>> (outside of >>> the context of ensurepip) will install pip, pip2, and pip2.7 if >>> installed in >>> Python 2.7. In Python 3's ensurepip we modified it so that it would >>> install >>> pip3, and pip3.4, but *not* pip if it was an "install", and only pip3.4 >>> if it >>> was an "alt install". >>> >>> My question is, does this behavior make sense for ensurepip in 2.7? Or >>> should >>> it also install the "pip" command if it is an "install"? >>> >>> --- >>> Donald Stufft >>> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA >>> >>> ___________ >>> Python-Dev mailing list >>> Python-Dev@python.org >>> >>> https://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: >>> https://mail.python.org/mailman/options/python-dev/guido%40python.org >>> >> >> --- >> Donald Stufft >> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA >> >> > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > -- --Guido van Rossum (on iPad) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
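In terms of the ensurepip module itself, that split presumably maps onto the existing bootstrap flags; a sketch, assuming the 2.7 backport keeps the 3.4 parameter names:

    import ensurepip

    # "install" behaviour: pip, pip2 and pip2.7 console scripts.
    ensurepip.bootstrap(default_pip=True)

    # "altinstall" behaviour: only the versioned pip2.7 script.
    ensurepip.bootstrap(altinstall=True)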
Re: [Python-Dev] PEP476: Enabling certificate validation by default
I see no reason to hold up this PEP's approval any longer, so I hereby approve PEP 476. It looks like a fair amount of work is still needed to backport this to Python 2.7 (and a smaller amount for 3.4) but I trust that this will all happen before the next releases of these two. Congrats Alex! On Fri, Oct 3, 2014 at 2:57 PM, Alex Gaynor wrote: > Guido van Rossum python.org> writes: > > > > > OK, I'll hold off a bit on approving the PEP, but my intention is to > approve > > it. Go Alex go! > > > > A patch for the environmental variable overrides on Windows has landed; > thanks > Benjamin! > > Alex > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] How io.IOBase.readline() should behave when used on non-blocking obj and no data available?
On Thu, Oct 16, 2014 at 4:34 AM, Antoine Pitrou wrote: > On Thu, 16 Oct 2014 03:54:32 +0300 > Paul Sokolovsky wrote: > > Hello, > > > > io.RawIOBase.read() is well specified for behavior in case it > > immediately gets a would-block condition: "If the object is in > > non-blocking mode and no bytes are available, None is returned." > > (https://docs.python.org/3/library/io.html#io.RawIOBase.read). > > > > However, nothing is said about such condition for io.IOBase.readline(), > > which is mixin method in a base class, default implementation of which > > thus would use io.RawIOBase.read(). Looking at 3.4.0 source, iobase.c: > > iobase_readline() has: > > > > b = _PyObject_CallMethodId(self, &PyId_read, "n", nreadahead); > > [...] > > if (!PyBytes_Check(b)) { > > PyErr_Format(PyExc_IOError, > > "read() should have returned a bytes object, " > > "not '%.200s'", Py_TYPE(b)->tp_name); > > > > I.e. it's not even ready to receive legitimate return value of None > > from read(). I didn't try to write a testcase though, so may be missing > > something. > > > > So, how readline() should behave in this case, and can that be > > specified in the Library Reference? > > Well, the problem is that it's not obvious how to implement such methods > in a non-blocking context. > > Let's says some data is received but there isn't a complete line. > Should readline() return just that data (an incomplete line)? That > breaks the API's contract. Should readline() buffer the incomplete line > and keep it for the next readline() call? But then the internal buffer > becomes unbounded: perhaps there is no new line in the next 4GB of > incoming data... > > And besides, raw I/O objects *shouldn't* have an internal buffer. That's > the role of the buffered I/O layer. > Well, occasionally this occurs, and I think it's reasonable for readline() to deal with it. The argument about a 4 GB buffer is irrelevant -- this can happen with a blocking underlying stream too. I think that at the point where the readline() function says to itself "I need more data" it should ask the underlying stream for data. If that returns an empty string, meaning EOF, readline() is satisfied and return whatever it has buffered (even if it's empty). If that returns some bytes containing a newline, readline() is satisfied, returns the data up to that point, and buffers the rest (if any). If the underlying stream returns None, I think it makes sense for readline() to return None too -- attempting to read more will just turn into a busy-wait loop, and that's the opposite of what should happen. You may argue that the caller of readline() doesn't expect this. Sure. But in the end, if the stream is unbuffered and the caller isn't prepared for that, the caller will always get in trouble. Maybe it'll treat the None as EOF. That's fine -- it would be the same if it was calling read() on the underlying stream and it got None (the EOF signalling is the same in both cases). At least, by being prepared for the None from the underlying read() in the readline() code, someone who knows what they are doing can use readline() on a non-blocking stream -- when they receive None they will have to ask their selector (or whatever they use) to wait for the underlying FD and then they can try again. 
(Alternatively, we could raise BlockingIOError, which is what the OS-level read() raises if there's no data immediately available on a non-blocking FD; but it seems that streams have already gotten a convention of returning None instead, so I think that should be propagated up the stack.) Oh, BTW, I tested this a little bit. Currently readline() returns an empty string (or empty bytes, depending on which level you use) when the stream is nonblocking. I think returning None makes much more sense. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
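As a rough sketch of the readline() behaviour described above -- illustrative Python only, not the io module's C implementation, which reads larger chunks and buffers anything past the newline:

    class LineReader:
        def __init__(self, raw):
            self.raw = raw              # raw stream; read() may return None (would block)
            self.buf = b''
        def readline(self):
            while True:
                chunk = self.raw.read(1)
                if chunk is None:       # would block: propagate None, keep the buffer
                    return None
                if chunk == b'':        # EOF: return whatever is buffered (maybe empty)
                    line, self.buf = self.buf, b''
                    return line
                self.buf += chunk
                if chunk.endswith(b'\n'):   # complete line
                    line, self.buf = self.buf, b''
                    return line

A caller that gets None would register the FD with its selector and call readline() again later.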
Re: [Python-Dev] isinstance() on old-style classes in Py 2.7
This is one of the unfortunate effects of the existence of "old-style" classes in Python 2. The old-style class hierarchy is distinct from the new-style class hierarchy, but instances of old-style classes are still objects (since in Python, *everything* is an object). For new code, and whenever you have an opportunity to refactor old code, you should use new-style classes, by inheriting your class from object (or from another class that inherits from object). On Tue, Oct 21, 2014 at 9:43 AM, Andreas Maier wrote: > > Hi. Today, I ran across this, in Python 2.7.6: > > >>> class C: > ... pass > ... > >>> issubclass(C,object) > False > >>> isinstance(C(),object) > True <-- ??? > > The description of isinstance() in Python 2.7 does not reveal this result > (to my reading). > > From a duck-typing perspective, one would also not guess that an instance > of C would be considered an instance of object: > > >>> dir(C()) > ['__doc__', '__module__'] > >>> dir(object()) > ['__class__', '__delattr__', '__doc__', '__format__', '__getattribute__', > '__hash__', '__init__', '__new__', '__reduce__ > ', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', > '__subclasshook__'] > > -> What is the motivation for isinstance(C,object) to return True in Python > 2.7? > > Andy > > Andreas Maier > IBM Senior Technical Staff Member, Systems Management Architecture & Design > IBM Research & Development Laboratory Boeblingen, Germany > mai...@de.ibm.com, +49-7031-16-3654 > > IBM Deutschland Research & Development GmbH > Vorsitzende des Aufsichtsrats: Martina Koederitz > Geschaeftsfuehrung: Dirk Wittkopp > Sitz der Gesellschaft: Boeblingen > Registergericht: Amtsgericht Stuttgart, HRB 243294 > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
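To make the quoted behaviour concrete, a minimal Python 2.7 illustration:

    class Old:                  # classic (old-style) class
        pass

    class New(object):          # new-style class, as recommended above
        pass

    issubclass(Old, object)     # False -- Old is outside the new-style hierarchy
    isinstance(Old(), object)   # True  -- but its instances are still objects
    issubclass(New, object)     # True
    isinstance(New(), object)   # True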
Re: [Python-Dev] isinstance() on old-style classes in Py 2.7
Hm. I've never been a fan of that. EIBTI and such... On Tue, Oct 21, 2014 at 10:53 AM, Barry Warsaw wrote: > On Oct 21, 2014, at 10:13 AM, Guido van Rossum wrote: > > >For new code, and whenever you have an opportunity to refactor old code, > >you should use new-style classes, by inheriting your class from object (or > >from another class that inherits from object). > > One nice way to do this module-globally is to set: > > __metaclass__ = type > > at the top of your file. Then when you're ready to drop Python 2, it's an > easy clean up. > > Cheers, > -Barry > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
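For reference, the module-global trick Barry mentions looks like this in Python 2 (it only affects classes defined with the classic syntax and no explicit bases):

    __metaclass__ = type        # module-wide default metaclass

    class C:                    # classic syntax, but C is now new-style
        pass

    issubclass(C, object)       # True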
Re: [Python-Dev] Cross compiling Python (for Android)
s_test fails to > >load because it needs sqrt(). I'll happily update the patch in > >21668. > > > >Is there any fundamental objection to adding the -lm flag to the > >link step where it is necessary? > > > > 3. What is ossaudiodev? It tries to include "sys/soundcard.h", which > >I don't have on my system. (The rule in setup.py is > >wrapped in a test for host of Linux/FreeBSD/Darwin, but Android x86 > >gets configured with --host=i686-linux-android so to turn it off > >requires an extra test for "and not cross_compiling".) > > > >Can I just turn off ossaudiodev for cross compiling or might > >someone want it in a different type of cross build? (In which case > >I think I'll have to write some kind autoconf rule for it, which I > >don't quite know how to do yet.) > > > > 4. Module _decimal is failing to compile. The problem is that it has > >a header called memory.h. Android's libc has the problem that > >/usr/include/stdlib.h includes . But the build system > >puts -I. on the include path before the system dirs (as it should) > >so when compiling _decimal, Modules/_decimal/libmpdec/memory.h gets > >found instead of /usr/include/memory.h. Shiz has a patch here: > > > https://github.com/rave-engine/python3-android/blob/master/mk/python/3.3.5/p\ > > ython-3.3.5-android-libmpdec.patch > >(which renames memory.h -> mpmemory.h) but I don't know > > > >a. Is there a tracker for this yet? and > >b. Is Shiz's fix the desired one or should I be looking for > >another approach? (Maybe modifying the -I flags for the build > >of just the build of _decimal or something?) > > > > 5. I'm not sure what test configure is actually doing for gethostby*() > >in a cross-compile environment. In any case Android has a bug > >where gethostbyaddr_r() is declared in the headers, but not > >actually implemented in libc. So I have to modify my pyconfig.h by > >hand to define HAVE_GETHOSTBYNAME and undef HAVE_GETHOSTBYNAME_R > >and HAVE_GETHOSTBYNAME_R_6_ARG. > > > >Is there a variable (like ac_cv_little_endian_double) that I can > >give to `configure` to make it set HAVE_GETHOSTBYNAME* the way I > >need? If so I've been unable to figure it out. > > > > 6. Android's header mysteriously leaves the pw_gecos field out > >of struct passwd. Is a fix like defining a new variable > >HAVE_BROKEN_GECOS_FIELD the appropriate way to go with this? (If > >this is an okay solution then the patch to Modules/pwdmodule.c is > >shown below, but I still have to figure out how to patch > >configure.ac to test for the condition and set the variable > >appropriately, so a pointer to a similar block of code in > >configure.ac would be appreciated.) > > > > Sorry for the TL;DR. I appreciate your having taken the time to read > > this far. 
> > > > Thanks, > > -Matt > > > > Proposed patch for pwdmodule.c: > > > > --- a/Modules/pwdmodule.c 2014-05-19 00:19:39.0 -0500 > > +++ b/Modules/pwdmodule.c 2014-10-21 18:00:35.676331205 -0500 > > @@ -57,6 +57,10 @@ > >} > > } > > > > +#if defined(HAVE_BROKEN_GECOS_FIELD) > > +static char fakePwGecos[256] = ""; > > +#endif > > + > > static PyObject * > > mkpwent(struct passwd *p) > > { > > @@ -72,7 +76,11 @@ > > SETS(setIndex++, p->pw_passwd); > > PyStructSequence_SET_ITEM(v, setIndex++, _PyLong_FromUid(p->pw_uid)); > > PyStructSequence_SET_ITEM(v, setIndex++, _PyLong_FromGid(p->pw_gid)); > > +#if !defined(HAVE_BROKEN_GECOS_FIELD) > > SETS(setIndex++, p->pw_gecos); > > +#else > > +SETS(setIndex++, fakePwGecos); > > +#endif > > SETS(setIndex++, p->pw_dir); > > SETS(setIndex++, p->pw_shell); > > > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Status of C compilers for Python on Windows
On Sat, Oct 25, 2014 at 1:10 PM, Ray Donnelly wrote: > On Sat, Oct 25, 2014 at 6:13 PM, Steve Dower > wrote: > > Building CPython for Windows is not something that needs solving. The > > culture on Windows is to redistribute binaries, not source, and both the > > core team and a number of redistributors have this figured out (and it > will > > only become easier with VC14 and Python 3.5). > > This is the second time you've used the vacuous "culture on Windows" > argument, now with an added appeal to (vague) authority. That may be > your opinion and that of some others, but there's a large number of > people who don't care for using non-Free tools. IMHO building CPython > on Windows using Open Source toolchains is very much something that > needs merging upstream and supporting by default. What is it that you > are afraid of if CPython can be compiled out of the box using > mingw/MinGW-w64? Why are you fighting so hard against having option. > If CPython wants to truly call itself an Open Source project then I > consider being able to compile and cross-compile it with capable Open > Source toolchains on all major platforms a requirement. > Please stop this ridiculous argument. There's no definition of "truly open source project" that has such a requirement, and if you took it to the extreme you should not be using Windows at all. I appreciate your concern that building Python for your favorite platform using your favorite toolchain doesn't work, and if you have patches (or even bug reports) those are appreciated. But please take your rhetoric about open source elsewhere. > > I'd rather see this effort thrown behind compiling extensions, including > > cross compilation. The ABI is well defined enough that any compiler > should > > be usable, especially once the new CRT is in use. However, there is work > > needed to update the various tool chains to link to VC14's CRT and we > need > > to figure out the inconsistencies between tools so we can document and > work > > through them. > > > > Having different builds of CPython out there will only fragment the > > community and hurt extension authors far more than it may seem to help. > Here's the crux of the matter. We want compiled extension modules distributed via PyPI to work with the binaries distributed from python.org. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] results of id() and weakref.getweakrefs() sometimes break on object resurrection
On Saturday, October 25, 2014, Stefan Richthofer wrote: > Okay, sorry, I was thinking too Jython-like. I fixed runGC() just to > see now that it does not even trigger resurrection, since under > CPython there are no finalizers executed in ref cycles (i.e. I find my > objects in gc.garbage). > So I realize, my xy_cyclic tests are pointless anyway since in cyclic > gc no resurrection can happen. > > > The second problem (with weakref) is different: weakrefs are cleared > > before __del__ is called, so resurrection doesn't affect the whole > > process. > It appears weakrefs are only cleared if this is done by gc (where no > resurrection can happen anyway). If a resurrection-performing-__del__ is > just called by ref-count-drop-to-0, weakrefs persist - a behavior that is > very difficult and inefficient to emulate in Jython, but I'll give it > some more thoughts... > > You shouldn't have to emulate that. The exact behavior of GC is allowed to vary between systems. > However thanks for the help! > > -Stefan > > > > Gesendet: Sonntag, 26. Oktober 2014 um 01:22 Uhr > > Von: "Antoine Pitrou" > > > An: python-dev@python.org > > Betreff: Re: [Python-Dev] results of id() and weakref.getweakrefs() > sometimes break on object resurrection > > > > > > Hello Stefan, > > > > On Sun, 26 Oct 2014 00:20:47 +0200 > > "Stefan Richthofer" > wrote: > > > Hello developers, > > > > > > I observed strange behaviour in CPython (tested in 2.7.5 and 3.3.3) > > > regarding object resurrection. > > > > Your runGC() function is buggy, it does not run the GC under CPython. > > Fix it and the first problem (with id()) disappears. > > > > The second problem (with weakref) is different: weakrefs are cleared > > before __del__ is called, so resurrection doesn't affect the whole > > process. Add a callback to the weakref and you'll see it is getting > > called. > > > > In other words, CPython behaves as expected. Your concern is > > appreciated, though. > > > > Regards > > > > Antoine. > > > > > > ___ > > Python-Dev mailing list > > Python-Dev@python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/stefan.richthofer%40gmx.de > > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (on iPad) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 475
I would like this to happen, but I'm afraid of breakage, and I don't have time. I would be okay if Antoine agrees to be the PEP-BDFL. On Tue, Oct 28, 2014 at 2:13 PM, Victor Stinner wrote: > Oh, I forgot the link to the PEP: > http://legacy.python.org/dev/peps/pep-0475/ > > Victor > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] The role of NotImplemented: What is it for and when should it be used?
Gotta be brief, but NotImplemented is for all binary ops. Power may be an exception because it's ternary? On Nov 3, 2014 8:08 AM, "Brett Cannon" wrote: > > > On Mon Nov 03 2014 at 5:31:21 AM Ethan Furman wrote: > >> Just to be clear, this is about NotImplemented, not NotImplementedError. >> >> tl;dr When a binary operation fails, should an exception be raised or >> NotImplemented returned? >> > > The docs for NotImplemented suggest it's only for rich comparison methods > and not all binary operators: > https://docs.python.org/3/library/constants.html#NotImplemented . But > then had I not read that I would have said all binary operator methods > should return NotImplemented when the types are incompatible. > > -Brett > > >> >> >> When a binary operation in Python is attempted, there are two >> possibilities: >> >>- it can work >>- it can't work >> >> The main reason [1] that it can't work is that the two operands are of >> different types, and the first type does not know >> how to deal with the second type. >> >> The question then becomes: how does the first type tell Python that it >> cannot perform the requested operation? The most >> obvious answer is to raise an exception, and TypeError is a good >> candidate. The problem with the exception raising >> approach is that once an exception is raised, Python doesn't try anything >> else to make the operation work. >> >> What's wrong with that? Well, the second type might know how to perform >> the operation, and in fact that is why we have >> the reflected special methods, such as __radd__ and __rmod__ -- but if >> the first type raises an exception the __rxxx__ >> methods will not be tried. >> >> Okay, how can the first type tell Python that it cannot do what is >> requested, but to go ahead and check with the second >> type to see if it does? That is where NotImplemented comes in -- if a >> special method (and only a special method) >> returns NotImplemented then Python will check to see if there is anything >> else it can do to make the operation succeed; >> if all attempts return NotImplemented, then Python itself will raise an >> appropriate exception [2]. >> >> In an effort to see how often NotImplemented is currently being returned >> I crafted a test script [3] to test the types >> bytes, bytearray, str, dict, list, tuple, Enum, Counter, defaultdict, >> deque, and OrderedDict with the operations for >> __add__, __and__, __floordiv__, __iadd__, __iand__, __ifloordiv__, >> __ilshift__, __imod__, __imul__, __ior__, __ipow__, >> __irshift__, __isub__, __itruediv__, __ixor__, __lshift__, __mod__, >> __mul__, __or__, __pow__, __rshift__, __sub__, >> __truediv__, and __xor__. >> >> Here are the results of the 275 tests: >> >> >> testing control... >> >> ipow -- Exception > and 'subtype'> raised >> errors in Control -- misunderstanding or bug? 
>> >> testing types against a foreign class >> >> iadd(Counter()) -- Exception <'SomeOtherClass' object has no attribute >> 'items'> raised instead of TypeError >> iand(Counter()) -- NotImplemented not returned, TypeError not raised >> ior(Counter()) -- Exception <'SomeOtherClass' object has no attribute >> 'items'> raised instead of TypeError >> isub(Counter()) -- Exception <'SomeOtherClass' object has no attribute >> 'items'> raised instead of TypeError >> >> >> testing types against a subclass >> >> mod(str()) -- NotImplemented not returned, TypeError not raised >> >> iadd(Counter()) -- Exception <'subtype' object has no attribute 'items'> >> raised (should have worked) >> iand(Counter()) -- NotImplemented not returned, TypeError not raised >> ior(Counter()) -- Exception <'subtype' object has no attribute 'items'> >> raised (should have worked) >> isub(Counter()) -- Exception <'subtype' object has no attribute 'items'> >> raised (should have worked) >> >> >> >> Two observations: >> >>- __ipow__ doesn't seem to behave properly in the 3.x line (that error >> doesn't show up when testing against 2.7) >> >>- Counter should be returning NotImplemented instead of raising an >> AttributeError, for three reasons [4]: >> - a TypeError is more appropriate >> - subclasses /cannot/ work with the current implementation >> - __iand__ is currently a silent failure if the Counter is empty, >> and the other operand should trigger a failure >> >> Back to the main point... >> >> So, if my understanding is correct: >> >>- NotImplemented is used to signal Python that the requested operation >> could not be performed >>- it should be used by the binary special methods to signal type >> mismatch failure, so any subclass gets a chance to work. >> >> Is my understanding correct? Is this already in the docs somewhere, and >> I just missed it? >> >> -- >> ~Ethan~ >> >> [1] at least, it's the main reason in my code >> [2] usually a T
Re: [Python-Dev] The role of NotImplemented: What is it for and when should it be used?
Not those. On Nov 3, 2014 8:56 AM, "Antoine Pitrou" wrote: > On Mon, 3 Nov 2014 08:48:07 -0800 > Guido van Rossum wrote: > > Gotta be brief, but NotImplemented is for all binary ops. > > Even in-place ops? > > Regards > > Antoine. > > > > Power may be an > > exception because it's ternary? > > On Nov 3, 2014 8:08 AM, "Brett Cannon" wrote: > > > > > > > > > > > On Mon Nov 03 2014 at 5:31:21 AM Ethan Furman > wrote: > > > > > >> Just to be clear, this is about NotImplemented, not > NotImplementedError. > > >> > > >> tl;dr When a binary operation fails, should an exception be raised or > > >> NotImplemented returned? > > >> > > > > > > The docs for NotImplemented suggest it's only for rich comparison > methods > > > and not all binary operators: > > > https://docs.python.org/3/library/constants.html#NotImplemented . But > > > then had I not read that I would have said all binary operator methods > > > should return NotImplemented when the types are incompatible. > > > > > > -Brett > > > > > > > > >> > > >> > > >> When a binary operation in Python is attempted, there are two > > >> possibilities: > > >> > > >>- it can work > > >>- it can't work > > >> > > >> The main reason [1] that it can't work is that the two operands are of > > >> different types, and the first type does not know > > >> how to deal with the second type. > > >> > > >> The question then becomes: how does the first type tell Python that it > > >> cannot perform the requested operation? The most > > >> obvious answer is to raise an exception, and TypeError is a good > > >> candidate. The problem with the exception raising > > >> approach is that once an exception is raised, Python doesn't try > anything > > >> else to make the operation work. > > >> > > >> What's wrong with that? Well, the second type might know how to > perform > > >> the operation, and in fact that is why we have > > >> the reflected special methods, such as __radd__ and __rmod__ -- but if > > >> the first type raises an exception the __rxxx__ > > >> methods will not be tried. > > >> > > >> Okay, how can the first type tell Python that it cannot do what is > > >> requested, but to go ahead and check with the second > > >> type to see if it does? That is where NotImplemented comes in -- if a > > >> special method (and only a special method) > > >> returns NotImplemented then Python will check to see if there is > anything > > >> else it can do to make the operation succeed; > > >> if all attempts return NotImplemented, then Python itself will raise > an > > >> appropriate exception [2]. > > >> > > >> In an effort to see how often NotImplemented is currently being > returned > > >> I crafted a test script [3] to test the types > > >> bytes, bytearray, str, dict, list, tuple, Enum, Counter, defaultdict, > > >> deque, and OrderedDict with the operations for > > >> __add__, __and__, __floordiv__, __iadd__, __iand__, __ifloordiv__, > > >> __ilshift__, __imod__, __imul__, __ior__, __ipow__, > > >> __irshift__, __isub__, __itruediv__, __ixor__, __lshift__, __mod__, > > >> __mul__, __or__, __pow__, __rshift__, __sub__, > > >> __truediv__, and __xor__. > > >> > > >> Here are the results of the 275 tests: > > >> > > >> > > >> testing control... > > >> > > >> ipow -- Exception 'Control' > > >> and 'subtype'> raised > > >> errors in Control -- misunderstanding or bug? 
> > >> > > >> testing types against a foreign class > > >> > > >> iadd(Counter()) -- Exception <'SomeOtherClass' object has no attribute > > >> 'items'> raised instead of TypeError > > >> iand(Counter()) -- NotImplemented not returned, TypeError not raised > > >> ior(Counter()) -- Exception <'SomeOtherClass' object has no attribute > > >> 'items'> raised instead of TypeError > > >> isub(Counter()) -- Exception <&
Re: [Python-Dev] The role of NotImplemented: What is it for and when should it be used?
Sorry, was too quick. For immutable types __iop__ may not exist and then the fallback machinery should work normally using NotImplemented. But if __iop__ exists it can choose not to allow __rop__, because the type would presumably change. This is probably more predictable. I don't even know if the byte code interpreter looks for Not implemented from __iop__. On Nov 3, 2014 9:00 AM, "Guido van Rossum" wrote: > Not those. > On Nov 3, 2014 8:56 AM, "Antoine Pitrou" wrote: > >> On Mon, 3 Nov 2014 08:48:07 -0800 >> Guido van Rossum wrote: >> > Gotta be brief, but NotImplemented is for all binary ops. >> >> Even in-place ops? >> >> Regards >> >> Antoine. >> >> >> > Power may be an >> > exception because it's ternary? >> > On Nov 3, 2014 8:08 AM, "Brett Cannon" wrote: >> > >> > > >> > > >> > > On Mon Nov 03 2014 at 5:31:21 AM Ethan Furman >> wrote: >> > > >> > >> Just to be clear, this is about NotImplemented, not >> NotImplementedError. >> > >> >> > >> tl;dr When a binary operation fails, should an exception be raised >> or >> > >> NotImplemented returned? >> > >> >> > > >> > > The docs for NotImplemented suggest it's only for rich comparison >> methods >> > > and not all binary operators: >> > > https://docs.python.org/3/library/constants.html#NotImplemented . But >> > > then had I not read that I would have said all binary operator methods >> > > should return NotImplemented when the types are incompatible. >> > > >> > > -Brett >> > > >> > > >> > >> >> > >> >> > >> When a binary operation in Python is attempted, there are two >> > >> possibilities: >> > >> >> > >>- it can work >> > >>- it can't work >> > >> >> > >> The main reason [1] that it can't work is that the two operands are >> of >> > >> different types, and the first type does not know >> > >> how to deal with the second type. >> > >> >> > >> The question then becomes: how does the first type tell Python that >> it >> > >> cannot perform the requested operation? The most >> > >> obvious answer is to raise an exception, and TypeError is a good >> > >> candidate. The problem with the exception raising >> > >> approach is that once an exception is raised, Python doesn't try >> anything >> > >> else to make the operation work. >> > >> >> > >> What's wrong with that? Well, the second type might know how to >> perform >> > >> the operation, and in fact that is why we have >> > >> the reflected special methods, such as __radd__ and __rmod__ -- but >> if >> > >> the first type raises an exception the __rxxx__ >> > >> methods will not be tried. >> > >> >> > >> Okay, how can the first type tell Python that it cannot do what is >> > >> requested, but to go ahead and check with the second >> > >> type to see if it does? That is where NotImplemented comes in -- if >> a >> > >> special method (and only a special method) >> > >> returns NotImplemented then Python will check to see if there is >> anything >> > >> else it can do to make the operation succeed; >> > >> if all attempts return NotImplemented, then Python itself will raise >> an >> > >> appropriate exception [2]. 
>> > >> >> > >> In an effort to see how often NotImplemented is currently being >> returned >> > >> I crafted a test script [3] to test the types >> > >> bytes, bytearray, str, dict, list, tuple, Enum, Counter, defaultdict, >> > >> deque, and OrderedDict with the operations for >> > >> __add__, __and__, __floordiv__, __iadd__, __iand__, __ifloordiv__, >> > >> __ilshift__, __imod__, __imul__, __ior__, __ipow__, >> > >> __irshift__, __isub__, __itruediv__, __ixor__, __lshift__, __mod__, >> > >> __mul__, __or__, __pow__, __rshift__, __sub__, >> > >> __truediv__, and __xor__. >> > >> >> > >> Here are the results of the 275 tests: >> > >> >>
Re: [Python-Dev] The role of NotImplemented: What is it for and when should it be used?
That must be so that an immutable type can still implement __iop__ as an optimization. On Mon, Nov 3, 2014 at 9:10 AM, Antoine Pitrou wrote: > On Mon, 3 Nov 2014 09:05:43 -0800 > Guido van Rossum wrote: > > Sorry, was too quick. For immutable types __iop__ may not exist and then > > the fallback machinery should work normally using NotImplemented. But if > > __iop__ exists it can choose not to allow __rop__, because the type would > > presumably change. This is probably more predictable. I don't even know > if > > the byte code interpreter looks for Not implemented from __iop__. > > Apparently it can tell it to fallback on __op__: > > >>> class C(list): > ... def __iadd__(self, other): > ... print("here") > ... return NotImplemented > ... > >>> c = C() > >>> c += [1] > here > >>> c > [1] > >>> type(c) > > > > Regards > > Antoine. > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] The role of NotImplemented: What is it for and when should it be used?
Sounds good! On Mon, Nov 3, 2014 at 11:33 AM, Ethan Furman wrote: > Summary: > > NotImplemented _should_ be used by the normal and reflected binary methods > (__lt__, __add__, __xor__, __rsub__, etc.) > > NotImplemented _may_ be used by the in-place binary methods (__iadd__, > __ixor__, etc.), but the in-place methods are also free to raise an > exception. > > Correct? > > -- > ~Ethan~ > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
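A small sketch of the convention just summarized, using a hypothetical Money class (not from the thread):

    class Money:
        def __init__(self, amount):
            self.amount = amount
        def __add__(self, other):
            if isinstance(other, Money):
                return Money(self.amount + other.amount)
            return NotImplemented       # give other.__radd__ a chance
        def __radd__(self, other):
            return self.__add__(other)

    Money(1) + Money(2)     # works
    # Money(1) + "x"        # neither side can handle it, so Python raises TypeError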
Re: [Python-Dev] Real-world use of Counter
On Thu, Nov 6, 2014 at 1:10 AM, Nick Coghlan wrote: > Right. Especially in a ducktyping context, AttributeError and TypeError > are often functionally equivalent - it usually isn't worthwhile adding code > specifically to turn one into the other. > Yeah, these are so often interchangeable that I wish they had a common ancestor. Then again when you are catching these you might as well be catching all exceptions. > The case that doesn't throw an exception at all seems a little strange, > but I haven't looked into the details. > It comes from a simple approach to creating an intersection; paraphrasing, the code does this:

    def intersection(a, b):
        result = set()
        for x in a:
            if x in b:
                result.add(x)
        return result

If a is empty this never looks at b. I think it's okay not to raise in this case (though it would also be okay if it *did* raise). -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
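As a usage note on that paraphrased helper: with an empty first argument the second operand is never consulted at all, which is exactly why no error surfaces (NotASet below is a hypothetical stand-in for an incompatible operand):

    class NotASet:                      # hypothetical: supports no set operations
        pass

    intersection(set(), NotASet())      # returns set() -- b is never touched
    # intersection({1, 2}, NotASet())   # TypeError from the "x in b" test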
Re: [Python-Dev] Static checker for common Python programming errors
Also, I should mention mypy (mypy-lang.org), which is a much more ambitious project that uses type annotations. I am trying to find time to work on a PEP that standardizes type annotations to match mypy's syntax (with probably some improvements and caveats). It's too early to post the PEP draft but if you're designing a type checker or IDE that could use help from type annotations, email me. On Mon, Nov 17, 2014 at 6:49 AM, Stefan Bucur wrote: > I'm developing a Python static analysis tool that flags common programming > errors in Python programs. The tool is meant to complement other tools like > Pylint (which perform checks at lexical and syntactic level) by going > deeper with the code analysis and keeping track of the possible control > flow paths in the program (path-sensitive analysis). > > For instance, a path-sensitive analysis detects that the following snippet > of code would raise an AttributeError exception: > > if object is None: # If the True branch is taken, we know the object is > None > object.doSomething() # ... so this statement would always fail > > I'm writing first to the Python developers themselves to ask, in their > experience, what common pitfalls in the language & its standard library > such a static checker should look for. For instance, here [1] is a list of > static checks for the C++ language, as part of the Clang static analyzer > project. > > My preliminary list of Python checks is quite rudimentary, but maybe could > serve as a discussion starter: > > * Proper Unicode handling (for 2.x) > - encode() is not called on str object > - decode() is not called on unicode object > * Check for integer division by zero > * Check for None object dereferences > > Thanks a lot, > Stefan Bucur > > [1] http://clang-analyzer.llvm.org/available_checks.html > > > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
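As a rough sketch of where annotations could help such a checker -- using plain Python 3 function annotations; the exact syntax the planned PEP would standardize was still an open question at this point:

    def lookup(d: dict, key: str) -> str:
        value = d.get(key)        # may be None if the key is missing
        return value.upper()      # a path-sensitive checker could flag this None dereference

    # lookup({"a": "x"}, "b")     # AttributeError at runtime on the missing-key path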
Re: [Python-Dev] Static checker for common Python programming errors
Please also check python-static-type-check...@googlegroups.com. On Nov 18, 2014 3:06 AM, "Stefan Bucur" wrote: > Thanks for the pointer! There seem indeed to be more formal analysis tools > for JavaScript than for Python (e.g., the most recent one for JS I know of > is the Jalangi framework [1]). I assume the main reason is that JavaScript > is standardized and somewhat simpler, so it's easier to construct formal > specs for all language features than it is for Python, which is also > evolving faster and relies on a lot of hard-to-model native functionality. > > That's why I'm planning to reuse as much as possible the "implicit specs" > of the interpreter implementation, instead of re-stating them in an > explicit model. > > We already have an execution engine that uses the interpreter to > automatically explore multiple paths through a piece of Python code (you > can read here [2] the academic paper, with case studies for Python and > Lua). In turn, we could use that engine to discover paths, while checking > program properties along each path. > > Guido's suggestion for a type checker raises some interesting applications > of this multi-path analysis. For instance, we could examine the type of the > objects assigned to a static variable across all discovered execution paths > and determine its consistency. This analysis could either start with no > type annotations and output suggested types, or take existing annotations > and check them against the actual types. > > Thanks again, > Stefan > > [1] https://github.com/SRA-SiliconValley/jalangi > [2] http://dslab.epfl.ch/pubs/chef.pdf > > On Mon Nov 17 2014 at 8:50:21 PM Francis Giraldeau < > francis.girald...@gmail.com> wrote: > >> If I may, there are prior work on JavaScript that may be worth >> investigating. Formal verification of dynamically typed software is a >> challenging endeavour, but it is very valuable to avoid errors at runtime, >> providing benefits from strongly type language without the rigidity. >> >> http://cs.au.dk/~amoeller/papers/tajs/ >> >> Good luck! >> >> Francis >> >> 2014-11-17 9:49 GMT-05:00 Stefan Bucur : >> >>> I'm developing a Python static analysis tool that flags common >>> programming errors in Python programs. The tool is meant to complement >>> other tools like Pylint (which perform checks at lexical and syntactic >>> level) by going deeper with the code analysis and keeping track of the >>> possible control flow paths in the program (path-sensitive analysis). >>> >>> For instance, a path-sensitive analysis detects that the following >>> snippet of code would raise an AttributeError exception: >>> >>> if object is None: # If the True branch is taken, we know the object is >>> None >>> object.doSomething() # ... so this statement would always fail >>> >>> I'm writing first to the Python developers themselves to ask, in their >>> experience, what common pitfalls in the language & its standard library >>> such a static checker should look for. For instance, here [1] is a list of >>> static checks for the C++ language, as part of the Clang static analyzer >>> project. 
>>> >>> My preliminary list of Python checks is quite rudimentary, but maybe >>> could serve as a discussion starter: >>> >>> * Proper Unicode handling (for 2.x) >>> - encode() is not called on str object >>> - decode() is not called on unicode object >>> * Check for integer division by zero >>> * Check for None object dereferences >>> >>> Thanks a lot, >>> Stefan Bucur >>> >>> [1] http://clang-analyzer.llvm.org/available_checks.html >>> >>> >>> ___ >>> Python-Dev mailing list >>> Python-Dev@python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> >> Unsubscribe: >>> https://mail.python.org/mailman/options/python-dev/francis.giraldeau%40gmail.com >>> >>> >> > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
[Python-Dev] PEP 479: Change StopIteration handling inside generators
There's a new PEP proposing to change how to treat StopIteration bubbling up out of a generator frame (not caused by a return from the frame). The proposal is to replace such a StopIteration with a RuntimeError (chained to the original StopIteration), so that only *returning* from a generator (or falling off the end) causes the iteration to terminate. The proposal unifies the behavior of list comprehensions and generator expressions along the lines I had originally in mind when they were introduced. It renders useless/illegal certain hacks that have crept into some folks' arsenal of obfuscated Python tools. In Python 3.5 the proposed change is conditional on: from __future__ import replace_stopiteration_in_generators This would affect all generators (including generator expressions) compiled under its influence. The feature would become standard in Python 3.6 or 3.7. The PEP is here: https://www.python.org/dev/peps/pep-0479/ To avoid a lot of requests for clarification you may also want to read up on the python-ideas discussion, e.g. here: https://groups.google.com/forum/#!topic/python-ideas/yJi1gRot9yY I am leaning towards approving this PEP, but not until we've had a review here at python-dev. I would like to thank Chris Angelico for writing the original PEP draft. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
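For readers new to the issue, a minimal illustration (mine, not taken from the PEP) of the behaviour being changed:

    def first_lines(filenames):
        for name in filenames:
            with open(name) as f:
                # An empty file makes next(f) raise StopIteration. Today that
                # silently ends the *outer* iteration; under the PEP it is
                # replaced by a RuntimeError chained to the StopIteration.
                yield next(f)

    # list(first_lines(['a.txt', 'empty.txt', 'b.txt']))
    # currently: stops quietly after a.txt; under the PEP: RuntimeError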
Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators
I've made some updates to the PEP: - added 19-Nov-2014 to Post-History - removed "implicitly-raised" from the abstract - changed the __future__ thing to generator_return - added a clarifying paragraph that Chris added to his own draft - added a link to http://bugs.python.org/issue22906 which has a proof-of-concept patch There's still a lively discussion on python-ideas; Steven D'Aprano has dug up quite a bit of evidence that StopIteration is used quite a bit in ways that will break under the new behavior, and there also seems to be quite a bit of third-party information that recommends StopIteration over return to terminate a generator early. However I don't see much evidence that the current behavior is *better* than the proposal -- I see the current behavior as a definite wart, and I have definitely seen people struggle to debug silent early loop termination due to an "escaped" StopIteration. That said, I think for most people the change won't matter, some people will have to apply one of a few simple fixes, and a rare few will have to rewrite their code in a non-trivial way (sometimes this will affect "clever" libraries). I wonder if the PEP needs a better transition plan, e.g. - right now, start an education campaign - with Python 3.5, introduce "from __future__ import generator_return", and silent deprecation warnings - with Python 3.6, start issuing non-silent deprecation warnings - with Python 3.7, make the new behavior the default (subject to some kind of review) It would also be useful if we could extend the PEP with some examples of the various categories of fixes that can be applied easily, e.g. a few examples of "raise StopIteration" directly in a generator that can be replaced with "return" (or omitted, if it's at the end); a few examples of situations where "yield from" can supply an elegant fix (and an alternative for code that needs to be backward compatible with Python 3.2 or 2.7); and finally (to be honest) an example of code that will require being made more complicated. Oh, and it would also be nice if the PEP included some suggested words that 3rd party educators can use to explain the relationship between StopIteration and generators in a healthier way (preferably a way that also applies to older versions). Chris, are you up to drafting these additions? On Thu, Nov 20, 2014 at 2:05 AM, Nick Coghlan wrote: > On 20 November 2014 06:15, Benjamin Peterson wrote: > >> >> On Wed, Nov 19, 2014, at 15:10, Guido van Rossum wrote: >> > There's a new PEP proposing to change how to treat StopIteration >> bubbling >> > up out of a generator frame (not caused by a return from the frame). The >> > proposal is to replace such a StopIteration with a RuntimeError (chained >> > to >> > the original StopIteration), so that only *returning* from a generator >> > (or >> > falling off the end) causes the iteration to terminate. >> > >> > The proposal unifies the behavior of list comprehensions and generator >> > expressions along the lines I had originally in mind when they were >> > introduced. It renders useless/illegal certain hacks that have crept >> into >> > some folks' arsenal of obfuscated Python tools. >> > >> > In Python 3.5 the proposed change is conditional on: >> > >> > from __future__ import replace_stopiteration_in_generators >> >> Drive-by comment: This seems like a terribly awkward name. Could a >> shorter and sweeter name not be found? >> > > I think my suggestion was something like "from __future__ import > generator_return". 
> > I saw that style as somewhat similar to "from __future__ import division" > - it just tells you what the change affects (in this case, returning from > generators), while requiring folks to look up the documentation to find out > the exact details of the old behaviour and the new behaviour. > > Cheers, > Nick. > > -- > Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
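Until the PEP gains those examples, here is a sketch of the first two fix categories (assumed examples, not drawn from the PEP). Both spellings work back to Python 2.7, since only "return <value>" was a syntax error in older generators:

    # 1. "raise StopIteration" used to exit a generator early: just return.
    def up_to(limit, iterable):
        for i, x in enumerate(iterable):
            if i == limit:
                return              # was: raise StopIteration
            yield x

    # 2. An unguarded next() call: catch StopIteration and return explicitly
    #    (or restructure with "yield from" where delegation fits).
    def pairwise(iterable):
        it = iter(iterable)
        while True:
            try:
                a, b = next(it), next(it)
            except StopIteration:
                return              # was: let the StopIteration escape
            yield a, b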
Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators
On Thu, Nov 20, 2014 at 12:13 PM, Serhiy Storchaka wrote: > On 20.11.14 21:58, Antoine Pitrou wrote: > >> To me "generator_return" sounds like the addition to generator syntax >> allowing for return statements (which was done as part of the "yield >> from" PEP). How about "generate_escape"? >> > > Or may be "generator_stop_iteration"? > Or just "generator_stop"? -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators
On Thu, Nov 20, 2014 at 3:13 PM, Antoine Pitrou wrote: > On Thu, 20 Nov 2014 14:04:24 -0800 > Guido van Rossum wrote: > > > On Thu, Nov 20, 2014 at 12:13 PM, Serhiy Storchaka > > wrote: > > > > > On 20.11.14 21:58, Antoine Pitrou wrote: > > > > > >> To me "generator_return" sounds like the addition to generator syntax > > >> allowing for return statements (which was done as part of the "yield > > >> from" PEP). How about "generate_escape"? > > >> > > > > > > Or may be "generator_stop_iteration"? > > > > > > > Or just "generator_stop"? > > That sounds good. > OK, updated the PEP. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators
On Fri, Nov 21, 2014 at 8:47 AM, Antoine Pitrou wrote: > On Fri, 21 Nov 2014 05:47:58 -0800 > Raymond Hettinger wrote: > > > > Another issue is that it breaks the way I and others have taught for > years that generators are a kind of iterator (an object implementing the > iterator protocol) and that a primary motivation for generators is to > provide a simpler and more direct way of creating iterators. However, > Chris explained that, "This proposal causes a separation of generators and > iterators, so it's no longer possible to pretend that they're the same > thing." That is a major and worrisome conceptual shift. > > I agree with Raymond on this point. > Pretending they're the same thing has always been fraught with subtle errors. From the *calling* side a generator implements the same protocol as any other iterator (though it also has a few others -- send(), throw(), close()). However *inside* they are not at all similar -- generators produce a value through "yield", while __next__() methods use return. Even if we end up rejecting the PEP we should campaign for better understanding of generators. Raymond may just have to fix some of his examples. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
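To illustrate the "same protocol outside, different mechanism inside" point, a small sketch:

    # From the calling side these are interchangeable iterators...
    def count_up_to(n):              # generator: ends by returning / falling off the end
        i = 0
        while i < n:
            yield i
            i += 1

    class CountUpTo:                 # hand-written iterator: ends by raising StopIteration
        def __init__(self, n):
            self.i, self.n = 0, n
        def __iter__(self):
            return self
        def __next__(self):
            if self.i >= self.n:
                raise StopIteration
            self.i += 1
            return self.i - 1

    list(count_up_to(3)), list(CountUpTo(3))    # ([0, 1, 2], [0, 1, 2])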
Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators
On Fri, Nov 21, 2014 at 9:18 AM, Steven D'Aprano wrote: > I fear that there is one specific corner case that will be impossible to > deal with in a backwards-compatible way supporting both Python 2 and 3 > in one code base: the use of `return value` in a generator. > > In Python 2.x through 3.1, `return value` is a syntax error inside > generators. Currently, the only way to handle this case in 2+3 code is > by using `raise StopIteration(value)` but if that changes in 3.6 or 3.7 > then there will be no (obvious?) way to deal with this case. Note that using StopIteration for this purpose is a recent invention (I believe I invented it for the Google App Engine NDB library). Before Python 3.3 this had to be essentially a private protocol implemented by the framework, and typically the framework defines a custom exception for this purpose -- either an alias for StopIteration, or a subclass of it, or a separate exception altogether. I did a little survey:

- ndb uses "return Return(v)" where Return is an alias for StopIteration.
- monocle uses "yield Return(v)", so it doesn't even use an exception.
- In Twisted you write "returnValue(v)" -- IMO even more primitive since it's not even a control flow statement.
- In Tornado you write "raise tornado.gen.Return(v)", where Return does not inherit from StopIteration. In Python 3.3 and later you can also write "return v", and the framework treats Return and StopIteration the same -- but there is no mention of "raise StopIteration(v)" in the docs and given that they have Return there should be no need for it, ever.
- In Trollius (the backport of asyncio) you write "raise Return(v)", where Return is currently a subclass of StopIteration -- but it doesn't really have to be, it could be a different exception (like in Tornado).

So I haven't found any framework that recommends "raise StopIteration(v)". Sure, some frameworks will have to be changed, but they have until Python 3.6 or 3.7, and the changes can be made to work all the way back to Python 2.7. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
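For reference, a sketch of the Python 3.3+ mechanism (PEP 380) that these frameworks wrap: a plain return statement, whose value rides on the StopIteration and is delivered by yield from:

    def task():
        yield 'step 1'
        return 42                    # Python 3.3+: carried as StopIteration(42)

    def driver():
        result = yield from task()   # result == 42
        yield result

    list(driver())                   # ['step 1', 42]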
Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?
Like it or not, github is easily winning this race. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?
On Fri, Nov 21, 2014 at 9:26 PM, Tshepang Lekhonkhobe wrote: > On Fri, Nov 21, 2014 at 8:46 PM, Guido van Rossum > wrote: > > Like it or not, github is easily winning this race. > > Are you considering moving CPython development to Github? > No, but I prefer it for new projects. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators
In order to save everyone's breath, I am *accepting* the proposal of PEP 479. The transition plan is:

- "from __future__ import generator_stop" in 3.5, and a silent deprecation if StopIteration is allowed to bubble out of a generator (i.e. no warning is printed unless you explicitly turn it on)
- non-silent deprecation in 3.6
- feature enabled by default in 3.7

The PEP hasn't been updated to include this and it also could use some more editing -- I'll try to get to that Monday. But the specification of the proposal is crystal-clear and I have no doubt that this is the right thing going forward. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
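A sketch of how the 3.5 opt-in is expected to look under this plan (details subject to the PEP's final wording):

    from __future__ import generator_stop

    def gen():
        yield 1
        raise StopIteration    # with the import: RuntimeError("generator raised StopIteration")
                               # without it in 3.5: silent deprecation, the generator just stops

    # list(gen())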
Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?
This thread seems to beg for a decision. I think Donald Stufft has it exactly right: we should move to GitHub, because it is the easiest to use and most contributors already know it (or are eager to learn it). Honestly, the time for core devs (or some other elite corps of dedicated volunteers) to sysadmin their own machines (virtual or not) is over. We've never been particularly good at this, and I don't see us getting better or more efficient. Moving the CPython code and docs is not a priority, but everything else (PEPs, HOWTOs etc.) can be moved easily and I am in favor of moving to GitHub. For PEPs I've noticed that for most PEPs these days (unless the primary author is a core dev) the author sets up a git repo first anyway, and the friction of moving between such repos and the "official" repo is a pain. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?
On Saturday, November 22, 2014, Nick Coghlan wrote: > On 23 November 2014 at 15:19, Guido van Rossum > wrote: > > This thread seems to beg for a decision. I think Donald Stufft has it > > exactly right: we should move to GitHub, because it is the easiest to use > > and most contributors already know it (or are eager to learn thee). > > Honestly, the time for core devs (or some other elite corps of dedicated > > volunteers) to sysadmin their own machines (virtual or not) is over. > We've > > never been particularly good at this, and I don't see us getting better > or > > more efficient. > > The learning curve on git is still awful - it offers no compelling > advantages over hg, and GitHub doesn't offer any huge benefits over > BitBucket for Sphinx based documentation (ReadTheDocs works just as > well with either service). Git may well have a learning curve, but ever since I "got" it I started preferring it over hg. Too bad for BitBucket, but most people who started contributing to open source in the past 5 years already have a GitHub account. > > > Moving the CPython code and docs is not a priority, but everything else > > (PEPs, HOWTOs etc.) can be moved easily and I am in favor of moving to > > GitHub. For PEPs I've noticed that for most PEPs these days (unless the > > primary author is a core dev) the author sets up a git repo first anyway, > > and the friction of moving between such repos and the "official" repo is > a > > pain. > > Note that if folks prefer Git, BitBucket supports both. I would object > strongly to unilaterally forcing existing contributors to switch from > Mercurial to git. > What about potential new contributors? And the hg-git bridges that git fans are always referred to work in the opposite direction too... :-) -- --Guido van Rossum (on iPad) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?
On Sat, Nov 22, 2014 at 10:49 PM, Nick Coghlan wrote: > More generally, I'm very, very disappointed to see folks so willing to > abandon fellow community members for the sake of following the crowd. > Perhaps we should all just abandon Python and learn Ruby or JavaScript > because they're better at getting press in Silicon Valley? That's a really low blow, Nick. I think these are the facts: - Hg/Git are equivalent in functionality (at least to the extent that the difference can't be used to force a decision), and ditto for BitBucket/GitHub, with one crucial exception (see below) - We're currently using Hg for most projects under the PSF umbrella (however, there's https://github.com/python/pythondotorg) - Moving from Hg to Git is a fair amount of one-time work (converting repos) and is inconvenient to core devs who aren't already used to Git (learning a new workflow) - Most newer third-party projects are already on GitHub - GitHub is way more popular than BitBucket and slated for long-term success But here's the kicker for me: **A DVCS repo is a social network, so it matters in a functional way what everyone else is using.** So I give you that if you want a quick move into the modern world, while keeping the older generation of core devs happy (not counting myself :-), BitBucket has the lowest cost of entry. But I strongly believe that if we want to do the right thing for the long term, we should switch to GitHub. I promise you that once the pain of the switch is over you will feel much better about it. I am also convinced that we'll get more contributions this way. Note: I am not (yet) proposing we switch CPython itself. Switching it would be a lot of work, and it is specifically out of scope for this discussion. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Please reconsider PEP 479.
n of generators and next() is just a special case of this. > > StopIteration is not a normal exception, indicating a problem, rather it > exists to signal exhaustion of an iterator. > However, next() raises StopIteration for an exhausted iterator, which > really is an error. > Any iterator code (generator or __next__ method) that calls next() treats > the StopIteration as a normal exception and propogates it. > The controlling loop then interprets StopIteration as a signal to stop and > thus stops. > *The problem is the implicit shift from signal to error and back to > signal.* > > 2. The proposed solution does not address this issue at all, but rather > legislates against generators raising StopIteration. > > 3. Generators and the iterator protocol were introduced in Python 2.2, 13 > years ago. > For all of that time the iterator protocol has been defined by the > __iter__(), next()/__next__() methods and the use of StopIteration to > terminate iteration. > > Generators are a way to write iterators without the clunkiness of explicit > __iter__() and next()/__next__() methods, but have always obeyed the same > protocol as all other iterators. This has allowed code to rewritten from > one form to the other whenever desired. > > Do not forget that despite the addition of the send() and throw() methods > and their secondary role as coroutines, generators have primarily always > been a clean and elegant way of writing iterators. > > 4. Porting from Python 2 to Python 3 seems to be hard enough already. > > 5. I think I've already covered this in the other points, but to reiterate > (excuse the pun): > Calling next() on an exhausted iterator is, I would suggest, a logical > error. > However, next() raises StopIteration which is really a signal to the > controlling loop. > The fault is with next() raising StopIteration. > Generators raising StopIteration is not the problem. > > It also worth noting that calling next() is the only place a StopIteration > exception is likely to occur outside of the iterator protocol. > > An example > -- > > Consider a function to return the value from a set with a single member. > def value_from_singleton(s): > if len(s) < 2: #Intentional error here (should be len(s) == 1) >return next(iter(s)) > raise ValueError("Not a singleton") > > Now suppose we pass an empty set to value_from_singleton(s), then we get a > StopIteration exception, which is a bit weird, but not too bad. > > However it is when we use it in a generator (or in the __next__ method of > an iterator) that we get a serious problem. > Currently the iterator appears to be exhausted early, which is wrong. > However, with the proposed change we get RuntimeError("generator raised > StopIteration") raised, which is also wrong, just in a different way. > > Solutions > - > My preferred "solution" is to do nothing except improving the > documentation of next(). Explain that it can raise StopIteration which, if > allowed to propogate can cause premature exhaustion of an iterator. > > If something must be done then I would suggest changing the behaviour of > next() for an exhausted iterator. > Rather than raise StopIteration it should raise ValueError (or > IndexError?). > > Also, it might be worth considering making StopIteration inherit from > BaseException, rather than Exception. > > > Cheers, > Mark. > > P.S. 5 days seems a rather short time to respond to a PEP. > Could we make it at least a couple of weeks in the future, > or better still specify a closing date for comments. 
> > > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
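A minimal, runnable sketch of the failure mode Mark describes above; the wrapping generator and its name are illustrative, not taken from the thread:

    def value_from_singleton(s):
        if len(s) < 2:  # intentional error (should be len(s) == 1)
            return next(iter(s))
        raise ValueError("Not a singleton")

    def squares_of_singletons(sets):
        for s in sets:
            yield value_from_singleton(s) ** 2

    print(list(squares_of_singletons([{3}, set(), {5}])))
    # Before PEP 479: prints [9] -- the empty set raises StopIteration inside
    # the generator, which silently truncates the outer iteration.
    # With PEP 479 (from __future__ import generator_stop; the default in 3.7+):
    # RuntimeError("generator raised StopIteration"), chained to the original.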
Re: [Python-Dev] Move selected documentation repos to PSF BitBucket account?
On Sunday, November 23, 2014, Skip Montanaro wrote: > > > git-push(1) is over 650 lines and it's nearly > > impossible to dig out the most important > > bits. > > I use git daily at work. I try to use it in the most simple way possible. > My frustration with the man pages got to the point where I basically use > Google to ask my questions, then bookmark the solutions I find (which often > turn out to be on stackoverflow). > Then there's this. http://git-man-page-generator.lokaltog.net/ -- --Guido van Rossum (on iPad) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Please reconsider PEP 479.
On Mon, Nov 24, 2014 at 8:14 AM, Isaac Schwabacher wrote: > On 11/23/14, Guido van Rossum wrote: > > > It wouldn't be so bad if we had the occasional generator author writing > "raise StopIteration" instead of "return" to exit from a generator. (We > could just add a recommendation against this to the style guide.) But the > problem is that an unguarded next() call also raises StopIteration. > Sometimes this is intentional (as in some itertools examples). But > sometimes an unguarded next() call occurs deep in the bowels of some code > called by the generator, and this situation is often hard to debug, since > there is no stack trace. > > I'll admit I've only skimmed the massive volume of correspondence this PEP > has generated, but it seems to me that this is the main argument for this > change. I can only assume that your support for this PEP is informed by > your experience building Tulip, but isn't this the kind of thing that can > be accomplished with a warning? Then you can get the same behavior without > even needing a __future__ import to protect code bases that expect > StopIteration to propagate (which seems like the more elegant and natural > thing to do, even if it is more error-prone). > Yes, this is my main reason for wanting the change -- but not just for tulip/asyncio. The issue can be just as baffling for anyone using unprotected next() calls in the context of a generator. But I'm not sure where to put the warning. Are you proposing to issue a warning under the same conditions the PEP says? But then the itertools examples would issue warnings -- and I bet the advice would typically be "disable warnings" rather than "fix the code, otherwise it will break hard in Python 3.7". -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Please reconsider PEP 479.
On Mon, Nov 24, 2014 at 1:32 PM, Isaac Schwabacher wrote: > On 11/24/14, Guido van Rossum wrote: > > On Mon, Nov 24, 2014 at 8:14 AM, Isaac Schwabacher < > ischwabac...@wisc.edu> wrote: > > > > > On 11/23/14, Guido van Rossum wrote: > > > > > > > It wouldn't be so bad if we had the occasional generator author > writing "raise StopIteration" instead of "return" to exit from a generator. > (We could just add a recommendation against this to the style guide.) But > the problem is that an unguarded next() call also raises StopIteration. > Sometimes this is intentional (as in some itertools examples). But > sometimes an unguarded next() call occurs deep in the bowels of some code > called by the generator, and this situation is often hard to debug, since > there is no stack trace. > > > > > > I'll admit I've only skimmed the massive volume of correspondence this > PEP has generated, but it seems to me that this is the main argument for > this change. I can only assume that your support for this PEP is informed > by your experience building Tulip, but isn't this the kind of thing that > can be accomplished with a warning? Then you can get the same behavior > without even needing a __future__ import to protect code bases that expect > StopIteration to propagate (which seems like the more elegant and natural > thing to do, even if it is more error-prone). > > > > Yes, this is my main reason for wanting the change -- but not just for > tulip/asyncio. The issue can be just as baffling for anyone using > unprotected next() calls in the context of a generator. But I'm not sure > where to put the warning. Are you proposing to issue a warning under the > same conditions the PEP says? > > Yes, I'm proposing issuing the warning at the point where the PEP raises, > so that the PEP's behavior can be obtained with a warning filter (and such > a filter could be installed by default around the asyncio main loop). > > > But then the itertools examples would issue warnings -- > > That's definitely problematic. They should either be fixed, or have the > warning silenced with a comment about how the bubbling-up case is expected. > So you agree with the problem that the PEP is trying to solve, you want people to fix their code in exactly the same way that the PEP is trying to get them to fix it, you want all new code that exhibits the problem to be flagged by a warning, and yet you do not support adding a __future__ statement and a transition plan that replaces the warnings with hard failures in Python 3.7 (whose release date is going to be at least about four years in the future)? That sounds like the most loyal opposition I can wish for! :-) > > and I bet the advice would typically be "disable warnings" rather than > "fix the code, otherwise it will break hard in Python 3.7". > > I don't think it's the language's responsibility to second guess a user > who decides to explicitly silence such a warning. And if this *is* > accomplished with a warning, then the user can just continue silencing it > in 3.7. In my experience, though, python's documentation, StackOverflow > presence, blogs, etc. have been absolutely stellar in terms of explaining > why things are the way they are and how one should write pythonic code. I > don't doubt the community's ability to educate users on this. > Python's philosophy for (runtime) warnings is pretty clear -- a warning should never be silenced indefinitely. 
Warnings mean something's wrong with your code that won't get better by ignoring it, and you should fix it at some point. Until then you can silence the warning. Silencing warnings is an important mechanism for users who have no control over the code that issues the warning, and for devs who have more pressing priorities. But they should not be used to permanently enable coding in an "alternate universe" where the language has different features. > I think the biggest stumbling block for this proposal is the fact that the > current warning machinery doesn't appear to be up to the task of silencing > a known-harmless warning in one generator without silencing meaningful > warnings in generators it calls. > You can get pretty darn specific with the warnings silencing machinery: up to the module and line number. It's intentional that you can't specify a class/method -- the latter would just encourage devs to silence a specific warning because they think they know better. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
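For reference, a short sketch of how specific the standard warnings filters already are (down to module and line number); the module name and line number below are purely illustrative:

    import warnings

    # Silence DeprecationWarning only when it is issued from module "mymodule"
    # at line 42; warnings from everywhere else are unaffected.
    warnings.filterwarnings(
        "ignore",
        category=DeprecationWarning,
        module=r"mymodule",  # regex matched against the issuing module's name
        lineno=42,
    )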
Re: [Python-Dev] Please reconsider PEP 479.
On Mon, Nov 24, 2014 at 3:07 PM, Isaac Schwabacher wrote: > On 11/24/14, Guido van Rossum wrote: > > On Mon, Nov 24, 2014 at 1:32 PM, Isaac Schwabacher < > ischwabac...@wisc.edu > ischwabac...@wisc.edu> wrote: > > > > > On 11/24/14, Guido van Rossum wrote: > > > > On Mon, Nov 24, 2014 at 8:14 AM, Isaac Schwabacher < > ischwabac...@wisc.edu> > wrote: > > > > > > > > > On 11/23/14, Guido van Rossum wrote: > > > > > > > > > > > It wouldn't be so bad if we had the occasional generator author > writing "raise StopIteration" instead of "return" to exit from a generator. > (We could just add a recommendation against this to the style guide.) But > the problem is that an unguarded next() call also raises StopIteration. > Sometimes this is intentional (as in some itertools examples). But > sometimes an unguarded next() call occurs deep in the bowels of some code > called by the generator, and this situation is often hard to debug, since > there is no stack trace. > > > > > > > > > > I'll admit I've only skimmed the massive volume of correspondence > this PEP has generated, but it seems to me that this is the main argument > for this change. I can only assume that your support for this PEP is > informed by your experience building Tulip, but isn't this the kind of > thing that can be accomplished with a warning? Then you can get the same > behavior without even needing a __future__ import to protect code bases > that expect StopIteration to propagate (which seems like the more elegant > and natural thing to do, even if it is more error-prone). > > > > > > > > Yes, this is my main reason for wanting the change -- but not just > for tulip/asyncio. The issue can be just as baffling for anyone using > unprotected next() calls in the context of a generator. But I'm not sure > where to put the warning. Are you proposing to issue a warning under the > same conditions the PEP says? > > > > > > Yes, I'm proposing issuing the warning at the point where the PEP > raises, so that the PEP's behavior can be obtained with a warning filter > (and such a filter could be installed by default around the asyncio main > loop). > > > > > > > But then the itertools examples would issue warnings -- > > > > > > That's definitely problematic. They should either be fixed, or have > the warning silenced with a comment about how the bubbling-up case is > expected. > > > > So you agree with the problem that the PEP is trying to solve, you want > people to fix their code in exactly the same way that the PEP is trying to > get them to fix it, you want all new code that exhibits the problem to be > flagged by a warning, and yet you do not support adding a __future__ > statement and a transition plan that replaces the warnings with hard > failures in Python 3.7 (whose release date is going to be at least about > four years in the future)? > > > > That sounds like the most loyal opposition I can wish for! :-) > > I agree with you that escaping StopIteration should be easier to notice, > but with the opposition that allowing StopIteration to escape on purpose is > a useful technique. But when you put it that way... > > > > > and I bet the advice would typically be "disable warnings" rather > than "fix the code, otherwise it will break hard in Python 3.7". > > > > > > I don't think it's the language's responsibility to second guess a > user who decides to explicitly silence such a warning. 
And if this *is* > accomplished with a warning, then the user can just continue silencing it > in 3.7. In my experience, though, python's documentation, StackOverflow > presence, blogs, etc. have been absolutely stellar in terms of explaining > why things are the way they are and how one should write pythonic code. I > don't doubt the community's ability to educate users on this. > > > > Python's philosophy for (runtime) warnings is pretty clear -- a warning > should never be silenced indefinitely. Warnings mean something's wrong with > your code that won't get better by ignoring it, and you should fix it at > some point. Until then you can silence the warning. Silencing warnings is > an important mechanism for users who have no control over the code that > issues the warning, and for devs who have more pressing priorities. But > they should not be used to permanently enable coding in an "alternate > universe" where the language has different features.
Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators
On Mon, Nov 24, 2014 at 4:21 PM, Alexander Belopolsky < alexander.belopol...@gmail.com> wrote: > > On Wed, Nov 19, 2014 at 3:10 PM, Guido van Rossum > wrote: > >> There's a new PEP proposing to change how to treat StopIteration bubbling >> up out of a generator frame (not caused by a return from the frame). The >> proposal is to replace such a StopIteration with a RuntimeError (chained to >> the original StopIteration), so that only *returning* from a generator (or >> falling off the end) causes the iteration to terminate. > > > I think the PEP should also specify what will happen if the generator's > __next__() method is called again after RuntimeError is handled. The two > choices are: > > 1. Raise StopIteration (current behavior for all exceptions). > 2. Raise RuntimeError (may be impossible without gi_frame). > > I think choice 1 is implied by the PEP. > Good catch. It has to be #1 because the generator object doesn't retain exception state. I am behind with updating the PEP but I promise I won't mark it as Accepted without adding this, the transition plan, and a discussion of some of the objections that were raised. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
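A small sketch of choice 1 under PEP 479 semantics (the __future__ import in 3.5/3.6, the default behaviour from 3.7 on); the generator is illustrative:

    from __future__ import generator_stop

    def g():
        raise StopIteration  # escapes the frame -> replaced with RuntimeError
        yield  # unreachable, but makes this a generator function

    it = g()
    try:
        next(it)
    except RuntimeError:
        pass  # chained from the original StopIteration

    # The generator has terminated and retains no exception state, so:
    next(it)  # raises a plain StopIteration (choice 1)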
Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators
On Tue, Nov 25, 2014 at 9:49 AM, Chris Angelico wrote: > On Wed, Nov 26, 2014 at 4:45 AM, Isaac Schwabacher > wrote: > > Yield can also raise StopIteration, if it's thrown in. The current > interaction of generator.throw(StopIteration) with yield from can't be > emulated under the PEP's behavior, though it's not clear that that's a > problem. > > > > Hrm. I have *absolutely* no idea when you would use that, and how > you'd go about reworking it to fit this proposal. Do you have any > example code (production or synthetic) which throws StopIteration into > a generator? > Sounds like a good one for the obfuscated Python contest. :-) Unless the generator has a try/except surrounding the yield point into which the exception is thrown, it will bubble right out, and PEP 479 will turn this into a RuntimeError. I'll clarify this in the PEP (even though it logically follows from the proposal) -- I don't think there's anything to worry about. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
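A sketch of both cases described here, with an illustrative generator:

    def gen():
        try:
            yield 1
        except StopIteration:
            yield "caught"  # the thrown-in exception never leaves the frame

    g = gen()
    next(g)                        # advance to the yield
    print(g.throw(StopIteration))  # prints "caught"

    # Without the try/except around the yield, the thrown StopIteration bubbles
    # straight out of the frame, and under PEP 479 the caller of throw() sees a
    # RuntimeError instead.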
Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators
On Tue, Nov 25, 2014 at 10:12 AM, Isaac Schwabacher wrote: > On 11/25/14, Guido van Rossum wrote: > > On Tue, Nov 25, 2014 at 9:49 AM, Chris Angelico ros...@gmail.com> wrote: > > > > > On Wed, Nov 26, 2014 at 4:45 AM, Isaac Schwabacher > > > ischwabac...@wisc.edu> wrote: > > > > Yield can also raise StopIteration, if it's thrown in. The current > interaction of generator.throw(StopIteration) with yield from can't be > emulated under the PEP's behavior, though it's not clear that that's a problem. > > > > > > Hrm. I have *absolutely* no idea when you would use that, > To close the innermost generator in a yield-from chain. No, I don't know > why you'd want to do that, either. For that purpose you should call the generator's close() method. This throws a GeneratorExit into the generator to give the generator a chance of cleanup (typically using try/finally). Various reasonable things happen if the generator misbehaves at this point -- if you want to learn what, read the code or experiment a bit on the command line (that's what I usually do). -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
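A sketch of the close() protocol being described, with an illustrative generator:

    def reader():
        try:
            while True:
                yield "record"
        finally:
            print("cleaning up")  # runs when GeneratorExit is thrown in

    r = reader()
    next(r)
    r.close()  # throws GeneratorExit at the paused yield; prints "cleaning up"
    # A well-behaved generator lets GeneratorExit propagate or returns;
    # yielding another value in response makes close() raise RuntimeError.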
Re: [Python-Dev] Please reconsider PEP 479.
On Tue, Nov 25, 2014 at 4:58 PM, Greg wrote: > I'm not particularly opposed to PEP 479, but the Abstract and > Rationale could do with considerable clarification. I know. > They currently > appear to promise things that are in disagreement with what the PEP > actually delivers. > > The Abstract claims that the proposal will "unify the behaviour of > list comprehensions and generator expressions", but it doesn't do > that. What it actually does is provide special protection against > escaped StopIteration exceptions in one particular context (the > body of a generator). It doesn't prevent StopIteration from > escaping anywhere else, including from list comprehensions, so if > anything it actually *increases* the difference between generators > and comprehensions. > Hm, that sounds like you're either being contrarian or Chris and I have explained it even worse than I thought. Currently, there are cases where list(x for x in xs if P(x)) works while [x for x in xs if P(x)] fails (when P(x) raises StopIteration). With the PEP, both cases will raise some exception -- though you (and several others who've pointed this out) are right that the exception raised is different (RuntimeError vs. StopIteration) and if this occurs inside a __next__() method (not a generator) the StopIteration will cause the outer iteration to terminate silently. > There may be merit in preventing rogue StopIterations escaping > from generators, but the PEP should sell the idea on that basis, not > on what sounds like a false promise that it will make comprehensions > and generators behave identically. > I will weaken that language. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
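To make the comparison concrete, a sketch of the pre-PEP-479 divergence, using an illustrative predicate:

    def P(x):
        if x == 3:
            raise StopIteration  # stand-in for an unguarded next() call
        return True

    xs = [1, 2, 3, 4]
    print(list(x for x in xs if P(x)))  # pre-PEP-479: [1, 2], silently truncated
    print([x for x in xs if P(x)])      # StopIteration escapes to the caller

    # With PEP 479 both forms fail loudly, although with different exceptions:
    # the genexp raises RuntimeError, the list comprehension still lets the
    # StopIteration escape.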
Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators
On Wed, Nov 26, 2014 at 8:54 AM, Paul Moore wrote: > On 26 November 2014 at 16:24, Isaac Schwabacher > wrote: > > This actually leads to a good example of why the PEP is necessary: > [...] > > Oh! If that's the current behaviour, then it probably needs to go into > the PEP as a motivating example. It's far more convincing than most of > the other arguments I've seen. Just one proviso - is it fixable in > contextlib *without* a language change? If so, then it loses a lot of > its value. > It's hard to use as an example because the behavior of contextlib is an integral part of it -- currently for me the example boils down to "there is a bug in contextlib". Maybe it would have been caught earlier with the change in the PEP, but when using it as a motivating example you have to show the code containing the bug, not just a demonstration. If you want to try though, I'm happy to entertain a pull request for the PEP. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Please reconsider PEP 479.
On Wed, Nov 26, 2014 at 3:24 AM, Nick Coghlan wrote: > On 26 November 2014 at 18:30, Greg Ewing > wrote: > > Guido van Rossum wrote: > >> > >> Hm, that sounds like you're either being contrarian or Chris and I have > >> explained it even worse than I thought. > > > > I'm not trying to be contrary, I just think the PEP could > > explain more clearly what you're trying to achieve. The > > rationale is too vague and waffly at the moment. > > > >> Currently, there are cases where list(x for x in xs if P(x)) works while > >> [x for x in xs if P(x)] fails (when P(x) raises StopIteration). With the > >> PEP, both cases will raise some exception > > > > That's a better explanation, I think. > It's now in the PEP. > The other key aspect is that it changes the answer to the question > "How do I gracefully terminate a generator function?". The existing > behaviour has an "or" in the answer: "return from the generator frame, > OR raise StopIteration from the generator frame". That then leads to > the follow on question: "When should I use one over the other?". > > The "from __future__ import generator_stop" answer drops the "or", so > it's just: "return from the generator frame". > That's now also in the PEP. > Raising *any* exception inside the generator, including StopIteration, > then counts as non-graceful termination, bringing generators into line > with the PEP 343 philosophy that "hiding flow control in macros makes > your code inscrutable", where here, the hidden flow control is relying > on the fact that a called function raising StopIteration will > currently always gracefully terminate generator execution. > Right. > The key downside is that it means relatively idiomatic code like: > > def my_generator(): > ... > yield next(it) > ... > I probably considered this an upside of generators when they were introduced. :-( > Now needs to be written out explicitly as: > > def my_generator(): > ... >try: > yield next(it) > except StopIteration > return > ... > > That's not especially easy to read, and it's also going to be very > slow when working with generator based producer/consumer pipelines. > I want to consider this performance argument seriously. Chris did a little benchmark but I don't think he compared the right things -- he showed that "yield from" becomes 5% slower with his patch and that a while loop is twice as slow as "yield from" with or without his patch. I have no idea why his patch would slow down "yield from" but I doubt it's directly related -- his change only adds some extra code when a generator frame is left with an exception, but his "yield from" example code ( https://github.com/Rosuav/GenStopIter/blob/485d1/perftest.py) never raises (unless I really don't understand how the implementation of "yield from" actually works :-). I guess what we *should* benchmark is this: def g(depth): if depth > 0: it = g(depth-1) yield next(it) else: yield 42 vs. the PEP-479-ly corrected version: def g(depth): if depth > 0: it = g(depth-1) try: yield next(it) except StopIteration: pass else: yield 42 This sets up "depth" generators each with a try/except, and then at the very bottom yields a single value (42) which pops up all the way to the top, never raising StopIteration. I wrote the benchmark and here are the code and results: https://gist.github.com/gvanrossum/1adb5bee99400ce615a5 It's clear that the extra try/except setup aren't free, even if the except is never triggered. 
My summary of the results is that the try/except setup costs 100-200 nsec, while the rest of the code executed in the frame takes about 600-800 nsec. (Hardware: MacBook Pro with 2.8 GHz Intel Core i7.) Incidentally, the try/except cost has come down greatly from Python 2.7, where it's over a microsecond! I also tried a variation where the bottommost generator doesn't yield a value. The conclusion is about the same -- the try/except version is 150 nsec slower. So now we have a number to worry about (150 nsec for a try/except) and I have to think about whether or not that's likely to have a noticeable effect in realistic situations. One recommendation follows: if you have a loop inside your generator, and there's a next() call in the loop, put the try/except around the loop, so you pay the setup cost only once (unless the loop is most likely to have zero iterations, and unlikely to h
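A sketch of that recommendation with an illustrative generator: pay the try/except setup cost once per loop instead of once per item.

    def pairwise_sums(items):
        it = iter(items)
        try:
            while True:
                a = next(it)   # unguarded next() calls live inside the loop...
                b = next(it)
                yield a + b
        except StopIteration:
            return             # ...with a single handler wrapped around it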
Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators
Can you summarize that in a self-contained form for inclusion in the PEP? (That was a rhetorical question. :-) On Wed, Nov 26, 2014 at 12:17 PM, Isaac Schwabacher wrote: > On 14-11-26, Guido van Rossum wrote: > > On Wed, Nov 26, 2014 at 8:54 AM, Paul Moore wrote: > > > > > On 26 November 2014 at 16:24, Isaac Schwabacher wrote: > > > > This actually leads to a good example of why the PEP is necessary: > > > [...] > > > > > > Oh! If that's the current behaviour, then it probably needs to go into > > > the PEP as a motivating example. It's far more convincing than most of > > > the other arguments I've seen. Just one proviso - is it fixable in > > > contextlib *without* a language change? If so, then it loses a lot of > > > its value. > > > > It's hard to use as an example because the behavior of contextlib is an > integral part of it -- currently for me the example boils down to "there is > a bug in contextlib". Maybe it would have been caught earlier with the > change in the PEP, but when using it as a motivating example you have to > show the code containing the bug, not just a demonstration. > > How is this a bug in contextlib? The example behaves the way it does > because gen.throw(StopIteration) behaves differently depending on whether > gen is paused at a yield or a yield from. What *should* > contextlib.contextmanager do in this instance? It has faithfully forwarded > the StopIteration raised in the protected block to the generator, and the > generator has forwarded this to the subgenerator, which has elected to fail > and report success. The bug is in the subgenerator, because it fails to > treat StopIteration as an error. But the subgenerator can't in general be > converted to treat StopIteration as an error, because clearly it's used in > other places than as a nested context manager (otherwise, it would itself > be decorated with @contextlib.contextmanager and accessed as such, instead > of yielded from). And in those places, perhaps it needs to simply allow > StopIteration to bubble up. And can we factor out the error checking so > that we don't have to duplicate subgenerator? Well... yes, but it's tricky > because we'll introduce an extra yield from in the process, so we have to > put the handling in the subgenerator itself and wrap the > *non*-context-manager uses. > > ijs > > > If you want to try though, I'm happy to entertain a pull request for the > PEP. > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators
You can use the README here: https://github.com/Rosuav/GenStopIter On Wed, Nov 26, 2014 at 1:57 PM, Isaac Schwabacher wrote: > > Can you summarize that in a self-contained form for inclusion in the PEP? > > > > (That was a rhetorical question. :-) > > Sure. Is it on GitHub? ;D > > ijs > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators
On Wed, Nov 26, 2014 at 3:15 PM, Nick Coghlan wrote: > > On 27 Nov 2014 03:58, "Paul Moore" wrote: > > > > On 26 November 2014 at 17:19, Guido van Rossum wrote: > > > It's hard to use as an example because the behavior of contextlib is an > > > integral part of it -- currently for me the example boils down to > "there is > > > a bug in contextlib" > > > > Hmm, fair point. I was assuming that the bug in contextlib can't be > > fixed with the current language behaviour (and I'd personally be OK > > with the example simply adding a comment "this can't be fixed without > > changing Python as proposed in the PEP"). But I'm not sure how true > > that is, so maybe it's not quite as compelling as it seemed to me at > > first. > > The "contextlib only" change would be to map StopIteration in the body of > the with statement to gen.close() on the underlying generator rather than > gen.throw(StopIteration). (That's backwards incompatible in its own way, > since it means you *can't* suppress StopIteration via a generator based > context manager any more) > > This is actually the second iteration of this bug: the original > implementation *always* suppressed StopIteration. PJE caught that one > before Python 2.5 was released, but we didn't notice that 3.3 had brought > it back in a new, more subtle form :( > > It's worth noting that my "allow_implicit_stop" idea in the other thread > wouldn't affect subgenerators - those would still convert StopIteration to > RuntimeError unless explicitly silenced. > You've lost me in this subthread. Am I right to conclude that the PEP change doesn't cause problems for contextlib(*), but that the PEP change also probably wouldn't have helped diagnose any contextlib bugs? (*) Except perhaps that some old 3rd party copy of contextlib may eventually break if it's not updated. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Please reconsider PEP 479.
The design just copies the code object with one flag set differently. Code objects are immutable but they can be copied (though the interface to do that is kind of hidden). On Wed, Nov 26, 2014 at 4:03 PM, Chris Angelico wrote: > On Thu, Nov 27, 2014 at 9:53 AM, Nick Coghlan wrote: > > The implicit stop decorator would then check the flags on the code object > > attached to the passed in function. If GENERATOR wasn't set, that would > be > > an immediate ValueError, while if EXPLICIT_STOP wasn't set, the generator > > function would be passed through unmodified. However, if EXPLICIT_STOP > *was* > > set, the generator function would be replaced by a *new* generator > function > > with a *new* code object, where the only change was to clear the > > EXPLICIT_STOP flag. > > Is it possible to replace the code object without replacing the > function? Imagine if you have multiple decorators, one of which > retains a reference to the function and then this one which replaces > it - the order of decoration would be critical. OTOH, I don't know > that anyone would retain references to __code__. > > ChrisA > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Please reconsider PEP 479.
No, that was a figure of speech. The proposed decorator returns a new function object that references a new code object. The original function and code object are unchanged. On Wed, Nov 26, 2014 at 4:38 PM, Chris Angelico wrote: > On Thu, Nov 27, 2014 at 11:33 AM, Guido van Rossum > wrote: > > The design just copies the code object with one flag set differently. > Code > > objects are immutable but they can be copied (though the interface to do > > that is kind of hidden). > > Yes, but the proposal as written spoke of replacing the generator > *function*, which has broader consequences. If it's simply replacing > the __code__ attribute of that function, it ought to be safe, I think? > > ChrisA > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
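A rough sketch of the mechanics being discussed, not the real implementation: the flag value is a placeholder, and CodeType.replace() only arrived in Python 3.8 (earlier versions have to spell out the full types.CodeType constructor instead):

    import types

    HYPOTHETICAL_FLAG = 0x1000000  # placeholder for a flag like EXPLICIT_STOP

    def without_flag(func):
        new_code = func.__code__.replace(
            co_flags=func.__code__.co_flags & ~HYPOTHETICAL_FLAG
        )
        new_func = types.FunctionType(
            new_code, func.__globals__, func.__name__,
            func.__defaults__, func.__closure__,
        )
        new_func.__dict__.update(func.__dict__)
        return new_func  # the original function and code object are untouched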
Re: [Python-Dev] Please reconsider PEP 479.
Well, that's just a general problem with decorator ordering. On Wed, Nov 26, 2014 at 4:57 PM, Chris Angelico wrote: > On Thu, Nov 27, 2014 at 11:50 AM, Guido van Rossum > wrote: > > No, that was a figure of speech. The proposed decorator returns a new > > function object that references a new code object. The original function > and > > code object are unchanged. > > Then it has a potentially-confusing interaction with decorators like > Flask's app.route(), which return the original function unchanged, but > also save a reference to it elsewhere. The order of decoration > determines the effect of the @hettinger decorator; there will be two > functions around which are almost, but not entirely, identical, and > it'd be very easy to not notice that you decorated in the wrong order. > > ChrisA > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Please reconsider PEP 479.
A decorator with a side effect *elsewhere* (like the route registrations) is acceptable; one with a side effect *on the decorated function* is questionable, and instead the decorator should behave "functionally", i.e. return a new object instead. On Wed, Nov 26, 2014 at 5:07 PM, Chris Angelico wrote: > On Thu, Nov 27, 2014 at 12:01 PM, Guido van Rossum > wrote: > > Well, that's just a general problem with decorator ordering. > > Indeed. I was hoping it could be avoided in this instance by just > altering __code__ on an existing function, but if that's not possible, > we fall back to what is, after all, a known and documented concern. > > ChrisA > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
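A tiny sketch of that distinction, with illustrative names:

    import functools

    _routes = {}

    def route(path):              # side effect *elsewhere* (a registry): fine
        def deco(func):
            _routes[path] = func  # the registry keeps a reference...
            return func           # ...but the function itself is untouched
        return deco

    def altered_behaviour(func):  # behaviour change: hand back a *new* object
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        return wrapper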
Re: [Python-Dev] Please reconsider PEP 479.
On Wed, Nov 26, 2014 at 2:53 PM, Nick Coghlan wrote: > On 27 Nov 2014 06:35, "Guido van Rossum" wrote: > [...] > > > I think we can put a number to "much faster" now -- 150 nsec per > try/except. > > > > I have serious misgivings about that decorator though -- I'm not sure > how viable it is to pass a flag from the function object to the execution > (which takes the code object, which is immutable) and how other Python > implementations would do that. But I'm sure it can be done through sheer > willpower. I'd call it the @hettinger decorator in honor of the PEP's most > eloquent detractor. :-) > > I agree with everything you wrote in your reply, so I'll just elaborate a > bit on my proposed implementation for the decorator idea. > This remark is ambiguous -- how strongly do you feel that this decorator should be provided? (If so, it should be in the PEP.) (I'm snipping the rest of what you said, since I understand it: the flag on the code object even has a name in the PEP, it's REPLACE_STOPITERATION -- although I could imagine renaming it to GENERATOR_STOP to match the __future__.) -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Please reconsider PEP 479.
On Thu, Nov 27, 2014 at 3:04 AM, Nick Coghlan wrote: > On 27 November 2014 at 11:15, Guido van Rossum wrote: > > On Wed, Nov 26, 2014 at 2:53 PM, Nick Coghlan > wrote: > >> > >> On 27 Nov 2014 06:35, "Guido van Rossum" wrote: > >> > >> [...] > >> > >> > I think we can put a number to "much faster" now -- 150 nsec per > >> > try/except. > >> > > >> > I have serious misgivings about that decorator though -- I'm not sure > >> > how viable it is to pass a flag from the function object to the > execution > >> > (which takes the code object, which is immutable) and how other Python > >> > implementations would do that. But I'm sure it can be done through > sheer > >> > willpower. I'd call it the @hettinger decorator in honor of the PEP's > most > >> > eloquent detractor. :-) > >> > >> I agree with everything you wrote in your reply, so I'll just elaborate > a > >> bit on my proposed implementation for the decorator idea. > > > > This remark is ambiguous -- how strongly do you feel that this decorator > > should be provided? (If so, it should be in the PEP.) > > I think it makes sense to standardise it, but something like > "itertools.allow_implicit_stop" would probably be better than having > it as a builtin. (The only reason I suggested a builtin initially is > because putting it in itertools didn't occur to me until later) > > Including the decorator provides a straightforward way to immediately > start writing forward compatible code that's explicit about the fact > it relies on the current StopIteration handling, without being > excessively noisy relative to the status quo: > > # In a module with a generator that relies on the current behaviour > from itertools import allow_implicit_stop > > @allow_implicit_stop > def my_generator(): > ... > yield next(itr) > ... > > In terms of code diffs to ensure forward compatibility, it's 1 import > statement per affected module, and 1 decorator line per affected > generator, rather than at least 3 lines (for try/except/return) plus > indentation changes for each affected generator. That's a useful > benefit when it comes to minimising the impact on version control code > annotation, etc. > > If compatibility with older Python versions is needed, then you could > put something like the following in a compatibility module: > > try: > from itertools import allow_implicit_stop > except ImportError: > # Allowing implicit stops is the default in older versions > def allow_implicit_stop(g): > return g > I understand that @allow_implicit_stop represents a compromise, an attempt at calming the waves that PEP 479 has caused. But I still want to push back pretty hard on this idea. - It means we're forever stuck with two possible semantics for StopIteration raised in generators. - It complicates the implementation, because (presumably) a generator marked with @allow_implicit_stop should not cause a warning when a StopIteration bubbles out -- so we actually need another flag to silence the warning. - I don't actually know whether other Python implementations have the ability to copy code objects to change flags. - It actually introduces a new incompatibility that has to be solved in every module that wants to use it (as you show above), whereas just putting try/except around unguarded next() calls is fully backwards compatible. - Its existence encourages people to use the decorator in favor of fixing their code properly. - The decorator is so subtle that it probably needs to be explained to everyone who encounters it (and wasn't involved in this PEP discussion). 
Because of this I would strongly advise against using it to "fix" the itertools examples in the docs; it's just too magical. (IIRC only 2 examples actually depend on this.) Let me also present another (minor) argument for PEP 479. Sometimes you want to take a piece of code presented as a generator and turn it into something else. You can usually do this pretty easily by e.g. replacing every "yield" by a call to print() or list.append(). But if there are any bare next() calls in the code you have to beware of those. If the code was originally written without relying on bare next(), the transformation would have been easier. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
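A sketch of that last argument, with an illustrative generator: the mechanical yield-to-append rewrite only stays faithful when the code does not lean on a bare next() call to stop.

    def pairs_gen(items):      # relies on a bare next() to terminate
        it = iter(items)
        while True:
            yield (next(it), next(it))  # pre-PEP-479: StopIteration ends the generator

    def pairs_list(items):     # the "obvious" mechanical rewrite
        result = []
        it = iter(items)
        while True:
            result.append((next(it), next(it)))
        return result          # never reached: the StopIteration now escapes to
                               # the caller instead of stopping the loop cleanly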
Re: [Python-Dev] PEP 479 and asyncio
On Thu, Nov 27, 2014 at 10:08 AM, Victor Stinner wrote: > I'm trying to follow the discussion about PEP 479 (Change > StopIteration handling inside generators), but it's hard to read all > messages. I'm concerned by trollius and asyncio which heavily rely on > StopIteration. > > Trollius currently supports running asyncio coroutines: a trollius > coroutine can execute an asyncio coroutine, and an asyncio coroutine > can execute a trollius coroutine. > > I modified the Return class of Trollius to not inherit from > StopIteration. All trollius tests pass on Python 3.3 except one > (which makes me happy, the test suite is wide enough to detect bugs > ;-)): test_trollius_in_asyncio. > > This specific test executes an asyncio coroutine which executes a trollius coroutine. > > https://bitbucket.org/enovance/trollius/src/873d21ac0badec36835ed24d13e2aeda24f2dc64/tests/test_asyncio.py?at=trollius#cl-60 > > The problem is that an asyncio coroutine cannot execute a Trollius > coroutine anymore: "yield from coro" raises a Return exception instead > of simply "stopping" the generator and returning the result (the value passed > to Return). > > I don't see how an asyncio coroutine calling "yield from > trollius_coroutine" can handle the Return exception if it doesn't > inherit from StopIteration. Does it mean that I have to drop this feature > in Python 3.5 (or later, when PEP 479 becomes effective)? > > I'm talking about the current behaviour of Python 3.3; I didn't try > PEP 479 (I don't know if an exception exists). > The issue here is that asyncio only interprets StopIteration as returning from the generator (with a possible value), while a Trollius coroutine must use "raise Return()" to specify a return value; this works as long as Return is a subclass of StopIteration, but PEP 479 will break this by replacing the StopIteration with RuntimeError. It's an interesting puzzle. The only way out I can think of is to have asyncio special-case the Return exception -- we could do that by defining a new exception (e.g. AlternateReturn) in asyncio that gets treated the same way as StopIteration, so that Trollius can inherit from AlternateReturn (if it exists). What do you think? -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
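A rough sketch of the idea floated here; AlternateReturn is a hypothetical name and the stepping logic is only indicated in comments, it is not asyncio's actual implementation:

    class AlternateReturn(Exception):  # would live in asyncio
        def __init__(self, value=None):
            super().__init__(value)
            self.value = value

    class Return(AlternateReturn):     # Trollius could then inherit from this
        pass

    # Inside the coroutine-stepping code (sketch only):
    #     try:
    #         result = coro.send(value)
    #     except (StopIteration, AlternateReturn) as exc:
    #         future.set_result(getattr(exc, "value", None))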
Re: [Python-Dev] PEP 479 and asyncio
@Victor: I'm glad you found a work-around. Maybe you can let your users control it with a flag? It is often true that straddling code pays a performance cost. Hopefully the slight performance dip might be an incentive for people to start thinking about porting to asyncio. @Olemis: You never showed examples of how your code would be used, so it's hard to understand what you're trying to do and how PEP 479 affects you. On Fri, Nov 28, 2014 at 7:21 AM, Olemis Lang wrote: > correction ... > > On 11/28/14, Olemis Lang wrote: > > > > try: > >... > > except RuntimeError: > >return > > > > ... should be > > {{{#!py > > # inside generator function body > > try: >... > except StopIteration: >return > }}} > > [...] > > -- > Regards, > > Olemis - @olemislc > > Apache(tm) Bloodhound contributor > http://issues.apache.org/bloodhound > http://blood-hound.net > > Blog ES: http://simelo-es.blogspot.com/ > Blog EN: http://simelo-en.blogspot.com/ > > Featured article: > ___ > Python-Dev mailing list > Python-Dev@python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com