Re: [Distutils] Announcement: Pip 10 is coming, and will move all internal APIs
While I understand that pip itself has to be very careful about edge cases and all the pathological things you can do in setup.py, as a higher-level tooling author my priorities are on the happy-path UX, and speed is a big factor there. So yes, using PackageFinder is potentially inaccurate, but it's also _usually_ accurate :) Anyway, if there is true concern that finder-based approaches are too risky, probably don't offer it in the pip list output. --Noah > On Oct 20, 2017, at 11:43 AM, xoviat <xov...@gmail.com> wrote: > > A correct dry-run implementation will do about the same amount of work as > installing to a temporary directory right now. In the future, that could be > optimized, but any patch to the finder doesn't actually detect the > requirements correctly (as they're not necessarily known until after the > wheels are built). > > 2017-10-20 13:41 GMT-05:00 Noah Kantrowitz <n...@coderanger.net>: > Installing to a temp dir is really not an option for automated tooling (if > nothing else, it takes way too long). `pip list --outdated` does already get > fairly close to this (and doesn't install anything; I suspect you can actually > get a lot closer than you think) but it calculates for all packages (read: is > slow) and doesn't give a good way to restrict things (hence that hack-y > script which is a modified version of the pip list code). This is 100% a hard > requirement for config management systems and if not fixed in pip, will > require continued use of internal APIs. I would recommend just making pip > list take a set of install-compatible names/version patterns and apply that > as a filter in a similar way to what I've done there. > > --Noah > > > On Oct 20, 2017, at 11:35 AM, xoviat <xov...@gmail.com> wrote: > > > > There's no dry-run functionality that I know of so far. 
However, you could > > use the following: > > > > pip install --prefix=tmpdir > > > > This command is actually about the same speed as a proper implementation, > > because we can't actually know what we're installing until we build the > > requirements. > > > > 2017-10-20 12:42 GMT-05:00 Noah Kantrowitz <n...@coderanger.net>: > > So as someone on the tooling side, is there any kind of install dry-run > > yet? I've got > > https://github.com/poise/poise-python/blob/master/lib/poise_python/resources/python_package.rb#L34-L78 > > which touches a ton of internals. Basically I need a way to know > > exactly what versions `pip install` would have used in a given situation > > without actually changing the system. Happy for a better solution! > > > > --Noah > > > > > On Oct 20, 2017, at 6:22 AM, Paul Moore <p.f.mo...@gmail.com> wrote: > > > > > > We're in the process of starting to plan for a release of pip (the > > > long-awaited pip 10). We're likely still a month or two away from a > > > release, but now is the time for people to start ensuring that > > > everything works for them. One key change in the new version will be > > > that all of the internal APIs of pip will no longer be available, so > > > any code that currently calls functions in the "pip" namespace will > > > break. Calling pip's internal APIs has never been supported, and > > > always carried a risk of such breakage, so projects doing so should, > > > in theory, be prepared for such things. However, reality is not always > > > that simple, and we are aware that people will need time to deal with > > > the implications. > > > > > > Just in case it's not clear, simply finding where the internal APIs > > > have moved to and calling them under the new names is *not* what > > > people should do. 
We can't stop people calling the internal APIs, > > > obviously, but the idea of this change is to give people the incentive > > > to find a supported approach, not just to annoy people who are doing > > > things we don't want them to ;-) > > > > > > So please - if you're calling pip's internals in your code, take the > > > opportunity *now* to check out the in-development version of pip, and > > > ensure your project will still work when pip 10 is released. > > > > > > And many thanks to anyone else who helps by testing out the new > > > version, as well :-) > > > > > > Thanks, > > > Paul > > > ___ > > > Distutils-SIG maillist - Distutils-SIG@python.org > > > https://mail.python.org/mailman/listinfo/distutils-sig > > > > ___ > > Distutils-SIG maillist - Distutils-SIG@python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > > ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
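Since this thread, `pip list` grew a machine-readable output mode, so the name filtering Noah asks for can be approximated outside pip without touching internals. A rough sketch (assumes pip >= 9 for `--format=json`; the helper names here are mine, not pip API):

```python
import json
import subprocess


def outdated_report():
    """Shell out to pip for its outdated-package report.

    This hits the index and checks every installed package, so it is
    slow -- exactly the complaint in the thread.
    """
    raw = subprocess.run(
        ["pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(raw)


def filter_outdated(report, wanted):
    """Restrict the report to the package names in `wanted`
    (case-insensitive) -- the filter Noah suggests pip list should grow.
    """
    wanted = {name.lower() for name in wanted}
    return [pkg for pkg in report if pkg["name"].lower() in wanted]


# Usage (slow, needs network):
#   for pkg in filter_outdated(outdated_report(), {"requests", "six"}):
#       print(pkg["name"], pkg["version"], "->", pkg["latest_version"])
```

The subprocess call still pays the full "calculates for all packages" cost; only the presentation is filtered.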
Re: [Distutils] Announcement: Pip 10 is coming, and will move all internal APIs
Installing to a temp dir is really not an option for automated tooling (if nothing else, it takes way too long). `pip list --outdated` does already get fairly close to this (and doesn't install anything; I suspect you can actually get a lot closer than you think) but it calculates for all packages (read: is slow) and doesn't give a good way to restrict things (hence that hack-y script which is a modified version of the pip list code). This is 100% a hard requirement for config management systems and if not fixed in pip, will require continued use of internal APIs. I would recommend just making pip list take a set of install-compatible names/version patterns and apply that as a filter in a similar way to what I've done there. --Noah > On Oct 20, 2017, at 11:35 AM, xoviat <xov...@gmail.com> wrote: > > There's no dry-run functionality that I know of so far. However, you could > use the following: > > pip install --prefix=tmpdir > > This command is actually about the same speed as a proper implementation, > because we can't actually know what we're installing until we build the > requirements. > > 2017-10-20 12:42 GMT-05:00 Noah Kantrowitz <n...@coderanger.net>: > So as someone on the tooling side, is there any kind of install dry-run yet? > I've got > https://github.com/poise/poise-python/blob/master/lib/poise_python/resources/python_package.rb#L34-L78 > which touches a ton of internals. Basically I need a way to know exactly > what versions `pip install` would have used in a given situation without > actually changing the system. Happy for a better solution! > > --Noah > > > On Oct 20, 2017, at 6:22 AM, Paul Moore <p.f.mo...@gmail.com> wrote: > > > > We're in the process of starting to plan for a release of pip (the > > long-awaited pip 10). We're likely still a month or two away from a > > release, but now is the time for people to start ensuring that > > everything works for them. 
One key change in the new version will be > > that all of the internal APIs of pip will no longer be available, so > > any code that currently calls functions in the "pip" namespace will > > break. Calling pip's internal APIs has never been supported, and > > always carried a risk of such breakage, so projects doing so should, > > in theory, be prepared for such things. However, reality is not always > > that simple, and we are aware that people will need time to deal with > > the implications. > > > > Just in case it's not clear, simply finding where the internal APIs > > have moved to and calling them under the new names is *not* what > > people should do. We can't stop people calling the internal APIs, > > obviously, but the idea of this change is to give people the incentive > > to find a supported approach, not just to annoy people who are doing > > things we don't want them to ;-) > > > > So please - if you're calling pip's internals in your code, take the > > opportunity *now* to check out the in-development version of pip, and > > ensure your project will still work when pip 10 is released. > > > > And many thanks to anyone else who helps by testing out the new > > version, as well :-) > > > > Thanks, > > Paul > > ___ > > Distutils-SIG maillist - Distutils-SIG@python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > ___ > Distutils-SIG maillist - Distutils-SIG@python.org > https://mail.python.org/mailman/listinfo/distutils-sig > ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
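For what it's worth, pip did eventually grow exactly the dry-run this thread is asking for: `pip install --dry-run --report` (pip 22.2+) resolves without changing the system and emits a JSON installation report. A hedged sketch of consuming it (helper names are mine; check your pip version before relying on the flags):

```python
import json
import subprocess


def parse_install_report(report):
    """Extract {name: version} from a pip installation-report dict
    (the JSON structure emitted by `pip install --report`)."""
    return {
        item["metadata"]["name"]: item["metadata"]["version"]
        for item in report.get("install", [])
    }


def resolve_without_installing(requirements):
    """Ask pip what `pip install` *would* do, without touching the
    system. Requires pip >= 22.2 and network access; `--report -`
    writes the report to stdout, so pair it with --quiet."""
    result = subprocess.run(
        ["pip", "install", "--dry-run", "--quiet", "--report", "-",
         *requirements],
        capture_output=True, text=True, check=True,
    )
    return parse_install_report(json.loads(result.stdout))


# Usage (slow; hits the index):
#   print(resolve_without_installing(["requests>=2"]))
```

Note this still does real dependency resolution (and may build sdists), which is why the thread's point stands: a correct dry run costs nearly as much as an install.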
Re: [Distutils] Announcement: Pip 10 is coming, and will move all internal APIs
So as someone on the tooling side, is there any kind of install dry-run yet? I've got https://github.com/poise/poise-python/blob/master/lib/poise_python/resources/python_package.rb#L34-L78 which touches a ton of internals. Basically I need a way to know exactly what versions `pip install` would have used in a given situation without actually changing the system. Happy for a better solution! --Noah > On Oct 20, 2017, at 6:22 AM, Paul Moore wrote: > > We're in the process of starting to plan for a release of pip (the > long-awaited pip 10). We're likely still a month or two away from a > release, but now is the time for people to start ensuring that > everything works for them. One key change in the new version will be > that all of the internal APIs of pip will no longer be available, so > any code that currently calls functions in the "pip" namespace will > break. Calling pip's internal APIs has never been supported, and > always carried a risk of such breakage, so projects doing so should, > in theory, be prepared for such things. However, reality is not always > that simple, and we are aware that people will need time to deal with > the implications. > > Just in case it's not clear, simply finding where the internal APIs > have moved to and calling them under the new names is *not* what > people should do. We can't stop people calling the internal APIs, > obviously, but the idea of this change is to give people the incentive > to find a supported approach, not just to annoy people who are doing > things we don't want them to ;-) > > So please - if you're calling pip's internals in your code, take the > opportunity *now* to check out the in-development version of pip, and > ensure your project will still work when pip 10 is released. 
> > And many thanks to anyone else who helps by testing out the new > version, as well :-) > > Thanks, > Paul > ___ > Distutils-SIG maillist - Distutils-SIG@python.org > https://mail.python.org/mailman/listinfo/distutils-sig ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] pythonhosted.org doc upload no longer works
This was discussed several years ago in https://mail.python.org/pipermail/distutils-sig/2015-May/026381.html and a few other threads. The final phases went out earlier this year. I don't think there is any plan to re-enable uploads to pythonhosted at this time. If you want a one-off redirect change, or just to have the old files removed, we can probably do that, but I very much defer to Donald and others on that :) --Noah > On Sep 4, 2017, at 9:43 PM, Giampaolo Rodola' wrote: > > I think it's wise to revert that commit. It seems pythonhosted only suggested > to migrate to RTD but there never was an official shutdown date or warning > (either via direct email or message on the web page). > > On Tue, Sep 5, 2017 at 12:19 PM, Berker Peksağ > wrote: > On Mon, Sep 4, 2017 at 4:56 PM, Nick Coghlan wrote: > > On 2 September 2017 at 15:34, Giampaolo Rodola' wrote: > >> I know it was deprecated long ago in favor of readthedocs but I kept > >> postponing it and my doc is still hosted on > >> https://pythonhosted.org/psutil/. > > > > While we've talked about deprecating it, it *hasn't* been deprecated. > > Looking at https://github.com/pypa/pypi-legacy/commits/production, I'm > > not seeing anything obvious that would have caused problems with docs > > management, but that's probably still the best issue tracker to use to > > report the bug. > > See the 'doc_upload' handler at > https://github.com/pypa/pypi-legacy/commit/1598e6ea0f7fb0393891f6c6bcbf84c191834a0e#diff-19fadc30e1b17100568adbd8c6c3cc13R2804 > I've collected all information I found at > https://github.com/pypa/pypi-legacy/issues/672#issuecomment-316125918 > Please correct me if I missed anything. 
> > --Berker > ___ > Distutils-SIG maillist - Distutils-SIG@python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > > > -- > Giampaolo - http://grodola.blogspot.com > > ___ > Distutils-SIG maillist - Distutils-SIG@python.org > https://mail.python.org/mailman/listinfo/distutils-sig ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] PyPi’s predictable download url
> On Jul 25, 2017, at 3:06 PM, Tres Seaver <tsea...@palladion.com> wrote: > > On 07/25/2017 05:25 PM, Noah Kantrowitz wrote: >> >>> On Jul 25, 2017, at 2:15 PM, Wes Turner <wes.tur...@gmail.com> wrote: >>> >>> >>> >>> On Tuesday, July 25, 2017, Alexander Belopolsky >>> <alexander.belopol...@gmail.com> wrote: >>> On Tue, Jul 25, 2017 at 4:18 PM, Nick Timkovich <prometheus...@gmail.com> >>> wrote: >>> .. >>>> That's because curl is kinda annoying and doesn't follow redirects by >>>> default: >>>> >>>> $ curl -i http://pypi.python.org/pypi/virtualenv/json >>>> HTTP/1.1 301 Moved Permanently >>>> ... >>> >>> Well, http://pypi.org/.. which is presumably the home of the latest >>> PyPI returns 403: >>> >>> $ curl -i http://pypi.org/pypi/virtualenv/json >>> HTTP/1.1 403 SSL is required >>> ... >>> >>> This suggests that redirects are considered to be legacy and may not >>> be supported in the future. >>> >>> Here are the warehouse routes: >>> https://github.com/pypa/warehouse/blob/master/warehouse/routes.py >>> >>> Why do you need an http to https redirect? >> >> To explain this: pypi.org is on the HSTS preload list so all major >> browsers will automatically use HTTPS for it no matter what. cURL does >> not support this feature. > Seems like having an unconditional HTTP->HTTPS redirect in place would be a > "good neighbor" kind of thing (and belt-and-suspenders, as well). Those redirects lead to a false sense of security. As pypi.org is new and we know there are no legacy links to it out there, it does not make sense to allow http://pypi.org as a thing. There is no such website as http://pypi.org. --Noah ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] PyPi’s predictable download url
> On Jul 25, 2017, at 2:15 PM, Wes Turner wrote: > > > > On Tuesday, July 25, 2017, Alexander Belopolsky > wrote: > On Tue, Jul 25, 2017 at 4:18 PM, Nick Timkovich > wrote: > .. > > That's because curl is kinda annoying and doesn't follow redirects by > > default: > > > > $ curl -i http://pypi.python.org/pypi/virtualenv/json > > HTTP/1.1 301 Moved Permanently > > ... > > Well, http://pypi.org/.. which is presumably the home of the latest > PyPI returns 403: > > $ curl -i http://pypi.org/pypi/virtualenv/json > HTTP/1.1 403 SSL is required > ... > > This suggests that redirects are considered to be legacy and may not > be supported in the future. > > Here are the warehouse routes: > https://github.com/pypa/warehouse/blob/master/warehouse/routes.py > > Why do you need an http to https redirect? To explain this: pypi.org is on the HSTS preload list so all major browsers will automatically use HTTPS for it no matter what. cURL does not support this feature. --Noah ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
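The HSTS preload behavior being described can be sketched in a few lines: a preload-aware client rewrites http:// to https:// for listed hosts before any request goes on the wire, which is why browsers never see the 403 but curl does. (Illustrative only; the real preload list ships compiled into the browsers, and `enforce_hsts` is a hypothetical helper.)

```python
from urllib.parse import urlsplit, urlunsplit

# Illustrative subset of the HSTS preload list, not the real thing.
HSTS_PRELOADED = {"pypi.org", "pypi.python.org"}


def enforce_hsts(url):
    """Do what a preload-aware client (a browser, but not curl) does:
    upgrade http:// to https:// for preloaded hosts before connecting."""
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in HSTS_PRELOADED:
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)
```

With this in place the 403 never happens, because the plain-HTTP request is never sent in the first place.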
Re: [Distutils] Malicious packages on PyPI
> On Jun 1, 2017, at 4:00 PM, Nick Timkovich wrote: > > This issue was also brought up in January at > https://github.com/pypa/pypi-legacy/issues/585 then just as after the initial > "typosquatting PyPI" report (June 2016) it's met with resounding silence. > Attacking the messenger doesn't seem like a winning move from a security > standpoint. > > Can we come up with a plan to address the underlying issue and protect users? If you have a systemic solution I'm sure we would love to hear it :) --Noah ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] license for setuptools
Hi there, this list is for the discussion of Python's core packaging tools like distutils. We have no control over the packages made or distributed with them. You would have to contact the matplotlib authors, not us. --Noah > On Aug 12, 2016, at 7:51 AM, Marinier, Claude> wrote: > > Good afternoon (well it’s afternoon here in the EDT zone), > > I am in the process of requesting the installation of Python 3 with > matplotlib. The company needs to approve licenses but I cannot find the > license for setuptools. The description here says it uses an MIT license but > I cannot confirm this. On github, the file setup.py says the same thing. > > License :: OSI Approved :: MIT License > > Could the maintainer please add an explicit license file. > > I would really like to use matplotlib but will not get approval unless we can > confirm the license. > > Thank you. > > -- > Claude Marinier > > > ___ > Distutils-SIG maillist - Distutils-SIG@python.org > https://mail.python.org/mailman/listinfo/distutils-sig ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] Proposal: using /etc/os-release in the "platform tag" definition for wheel files
> On Jun 22, 2016, at 2:42 PM, Donald Stufft wrote: > > >> On Jun 22, 2016, at 5:38 PM, Glyph wrote: >> >> >>> On Jun 22, 2016, at 12:21, Nathaniel Smith wrote: >>> There are still use cases for distro-specific wheels, though -- some >>> examples include Raspbian wheels (manylinux1 is x86/x86-64 only), Alpine >>> Linux wheels (manylinux1 is glibc only), internal deploys that want to >>> build on Ubuntu 14.04 and deploy on 14.04 and don't need the hassle of >>> generating manylinux-style binaries but would like a more meaningful >>> platform tag than "linux", and for everyone who wants to extend wheel >>> metadata to allow dependencies on external distro packages then having >>> distro-specific wheels is probably a necessary first step. >>> >> If we want to treat distros as first-class deployment targets I think being >> able to use their platform features in a way that's compatible with PyPI is >> an important next step. However, wheel tags might be insufficient here; the >> main appeal of creating distro-specific wheels is being able to use >> distro-specific features, but those typically come along with specific >> package dependencies as well, and we don't have a way to express those yet. > > I don’t think these two things need to be bound together. People can already > today depend on platform specific things just by not publishing wheels. > Adding these tags naturally follows that, where people would need to manually > install items from the OS before they used them. Adding some mechanism for > automating this would be a good, further addition, but I think they are > separately useful (and even more useful and powerful when combined). I could see an argument for maybe building support into Pip but disallowing them on PyPI until we feel comfortable with the UX. That doesn't add much over existing private index support though. 
--Noah ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] Proposal: using /etc/os-release in the "platform tag" definition for wheel files
Manylinux has mostly replaced it, as that covers the platforms 99% of people worry about. The tooling for manylinux is more complex than this would have been, but sunk cost etc etc, and now that we have it we might as well save everyone some headache. --Noah > On Jun 22, 2016, at 8:51 AM, Nathaniel Smith wrote: > > I believe the status is that there's general consensus that something like > this would be useful, but there's no one who is currently actively working on > it. > > On Jun 22, 2016 5:53 AM, "Vitaly Kruglikov" wrote: > There have been no updates in over a year. Has this effort died, or > transitioned to another medium? Thx > > ___ > Distutils-SIG maillist - Distutils-SIG@python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > ___ > Distutils-SIG maillist - Distutils-SIG@python.org > https://mail.python.org/mailman/listinfo/distutils-sig ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] Switch PyPA from IRC to Gitter or similar
Chef is in the process of navigating an IRC->Slack migration. https://github.com/chef/chef-rfc/blob/master/rfc074-community-slack.md is the document I wrote up on the pros and cons of various options. Gitter has a better UX for new users compared to Slack because it was built to be for public use from the start, but their actual chat UI/UX isn't as polished as Slack. --Noah > On Jun 10, 2016, at 6:22 AM, Jason R. Coombs wrote: > > In #pypa-dev, I raised the possibility of moving our PyPA support channels > from IRC to another hosted solution that enables persistence. Although IRC > has served us well, there are systems now with clear feature advantages, > which are crucial to my continuous participation: > > - always-on experience; even if one’s device is suspended or otherwise > offline. > - mobile support — the in-cloud experience is essential for low power and > intermittently connected devices. > - push notifications allow a project leader to remain largely inactive in a > channel, but attention raised promptly when users make a relevant mention. > - continuous, integrated logging for catching up on the conversation. > > Both Gitter and Slack offer the experience I’m after, with Gitter feeling > like a better fit for open-source projects (or groups of them). > > I’ve tried using IRCCloud, and it provides a similar, suitable experience on > the same IRC infrastructure, with one big difference. While Gitter and Slack > offer the above features for free, IRCCloud requires a $5/user/month > subscription (otherwise, connections are dropped after two hours). I did > reach out to them to see if they could offer some professional consideration > for contributors, but I haven’t heard from them. Furthermore, IRCCloud > requires an additional account on top of the account required for Freenode. 
> > In addition to the critical features above, Gitter and Slack offer other > advantages: > > - For Gitter, single-sign on using the same Github account for authentication > and authorization means no extra accounts. Slack requires one new account. > - An elegant web-based interface as a first-class feature, a lower barrier of > entry for users. > - Zero-install or config. > - Integration with source code and other systems. > > It’s because of the limitations of these systems that I find myself rarely in > IRC, only joining when I have a specific issue, even though I’d like to be > permanently present. > > Donald has offered to run an IRC bouncer for me, but such a bouncer is only a > half-solution, not providing the push notifications, mobile apps (IRC apps > exist, but just get disconnected, and often fail to connect on mobile > provider networks), or integrated logging. > > I note that both Gitter and Slack offer IRC interfaces, so those users who > prefer their IRC workflow can continue to use that if they so choose. > > I know there are other alternatives, like self-hosted solutions, but I’d like > to avoid adding the burden of administering such a system. If someone wanted > to take on that role, I’d be open to that alternative. > > I’d like to propose we move #pypa-dev to /pypa/dev and #pypa to /pypa/support > in gitter. > > Personally, the downsides to moving to Gitter (other than enacting the move > itself) seem negligible. What do you think? What downsides am I missing? > ___ > Distutils-SIG maillist - Distutils-SIG@python.org > https://mail.python.org/mailman/listinfo/distutils-sig ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] If you want wheel to be successful, provide a build server.
> On May 25, 2016, at 8:22 AM, Thomas Güttler> wrote: > > > > On 25.05.2016 at 15:55, Paul Moore wrote: >> On 25 May 2016 at 14:42, Thomas Güttler wrote: >>> On 25.05.2016 at 09:57, Alex Grönholm wrote: Amen to that, but who will pay for it? I imagine a great deal of processing power would be required for this. How do implementors of other languages handle this? >>> >>> >>> I talked with someone who is a member of the python software foundation, and >>> he said that >>> money for projects like this is available. Of course this was no official >>> statement. >> >> The other aspect of this is who has sufficient time/expertise to set >> something like this up? Are you volunteering to do this? > > I am volunteering for doing coordination work: > - communication > - layout of datastructures > - interchange of datastructures. > - no coding > > But we need at least ten people who say "I'm willing to help" Short answer: this is not how anything works. Long answer: This is not a question of getting some number of people to help. If you can clone us a small army of Donalds, Nicks, and Richards then we _might_ be able to pull this off. The money isn't the problem per se, it's the human cost in upkeep for a system designed explicitly to run hostile code safely. --Noah ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] If you want wheel to be successful, provide a build server.
> On May 25, 2016, at 12:13 AM, Thomas Güttler> wrote: > > If you want wheel to be successful, **provide a build server**. > > Quoting the author of psutil: > > https://github.com/giampaolo/psutil/issues/824#issuecomment-221359292 > > {{{ > On Linux / Unix the only way you have to install psutil right now is via > source / tarball. I don't want to provide wheels for Linux (or other UNIX > platforms). I would have to cover all supported python versions (7) both 32 > and 64 bits, meaning 14 extra packages to compile and upload on PYPI on every > release. I do that for Windows because installing VS is an order of magnitude > more difficult than installing gcc on Linux/UNIX but again: not willing to do > extra work on that front (sorry). > What you could do is create a wheel yourself with python setup.py build > bdist_wheel by using the same python/arch version you have on the server, > upload it on the server and install it with pip. > }}} > > What do you think? The problems haven't really changed every time someone brings this up. Running untrusted code from the internet isn't impossible (eg. Travis, Heroku, Lambda) but it requires serious care and feeding at a scale we don't currently have the resources for. Until something in that equation changes, the best we can do is try to piggyback on an existing sandbox environment like Travis. --Noah ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] Two ways to download python packages - I prefer one
The correct way to do that these days is `pip install -e .` AFAIK. Setuptools should be considered an implementation detail of installs at best, not really used directly anymore (though entry points are still used by some projects, so this isn't really a strict dichotomy). --Noah > On May 2, 2016, at 12:03 AM, Thomas Güttler> wrote: > > I was told this: > > > `python setup.py develop` uses urllib2 to download distributions whereas > > pip uses requests > > Source: http://stackoverflow.com/a/36958874/633961 > > This can create confusing situations and I want to avoid this. > > Is there a way to use only **one** way to install python packages? > > Do wheels help here? > > Or is there a way to use npm for python packages? > > Regards, > Thomas Güttler > > -- > Thomas Guettler http://www.thomas-guettler.de/ > ___ > Distutils-SIG maillist - Distutils-SIG@python.org > https://mail.python.org/mailman/listinfo/distutils-sig ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] What's up with PyPi maintenance?
> On Mar 18, 2016, at 1:29 PM, Donald Stufft wrote: > > >> On Mar 17, 2016, at 8:51 PM, Chris Barker - NOAA Federal >> wrote: >> >> When do we expect that? A lot of people rely on the current system, we >> really need to find a way to maintain it 'till it's replaced. > > Practically speaking I’m the only person maintaining the actual code behind > legacy PyPI right now. Due to some personal stuff I haven’t had the mental > bandwidth to dig through debugging the ball of twine that is legacy PyPI. > It’s been sort of falling apart a bit and fixing some of these issues require > a substantial amount of effort. I don’t know what’s wrong with search, it’s > probably not one of those kinds of issues, but the hope is that we can just > squash a bunch of these at once with the new code base. > > As far as “When do we expect that”, the answer is basically “when it’s > ready”. You can see what’s done so far at warehouse.python.org (backed by the > same live database) and the current work is being done at > github.com/pypa/warehouse. Just to make it explicit, if anyone wants to submit a patch with a cogent explanation of what was wrong I'm sure it would get fixed ASAP. Such is the eternal rallying cry of FOSS :-) --Noah ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] Don't Use `sudo pip install` (was Re: [final version?] PEP 513…)
> On Feb 17, 2016, at 5:58 AM, Glyph Lefkowitz <gl...@twistedmatrix.com> wrote: > > >> On Feb 16, 2016, at 6:22 PM, Noah Kantrowitz <n...@coderanger.net> wrote: >> >> I'm not concerned with if the module is importable specifically, but I am >> concerned with where the files will live overall. When building generic ops >> tooling, being unsurprising is almost always the right move and I would be >> surprised if supervisor installed to a custom virtualenv. > > Would you not be surprised if installing supervisord upgraded e.g. `six` or > `setuptools` and broke apport? or lsb_release? or dnf? This type of version > conflict is of course rare, but it is always possible, and every 'pip > install' takes the system from a supported / supportable state to "???" > depending on the dependencies of every other tool which may have been > installed (and pip doesn't have a constraint solver for its dependencies, so > you don't even know if the system gets formally broken by two explicitly > conflicting requirements). > >> It's a weird side effect of Python not having a great solution for >> "application packaging" I guess? We've got standards for web-ish >> applications, but not much for system services. I'm not saying I think >> creating an isolated "global-ish" environment would be worse, I'm saying >> nothing does that right now and I personally don't want to be the first >> because that brings a lot of pain with it :-) > > What makes the web-ish stuff "standard" is just that a lot of people are > doing it. So a lot of people should start doing this, and then it will also > be a standard :-). > > I can tell you that on systems where I've done this sort of thing, it has > surprised no-one that I'm aware of and I have not had any issues to speak of. > So I think you might be overestimating the risk.
> > In fairness though I've never written a clear explanation anywhere of why > this is desirable; it strikes me as obvious but it is clearly not the present > best-practice, which means somebody needs to do some thought-leadering. So I > owe you a blog post. Saying it's a good idea and we should move towards it is fine and I agree, but that isn't grounds to remove the ability to do things the current way. So you can warn people off from global installs but until there is at least some community awareness of this other way to do things we can't remove support entirely. It's going to be a very slow deprecation process. --Noah signature.asc Description: Message signed with OpenPGP using GPGMail ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] Don't Use `sudo pip install` (was Re: [final version?] PEP 513…)
> On Feb 16, 2016, at 6:12 PM, Glyph Lefkowitz <gl...@twistedmatrix.com> wrote: > >> >> On Feb 16, 2016, at 5:00 PM, Noah Kantrowitz <n...@coderanger.net> wrote: >> >> >>> On Feb 16, 2016, at 4:46 PM, Glyph Lefkowitz <gl...@twistedmatrix.com> >>> wrote: >>> >>>> >>>> On Feb 16, 2016, at 4:33 PM, Noah Kantrowitz <n...@coderanger.net> wrote: >>>> >>>> >>>>> On Feb 16, 2016, at 4:27 PM, Glyph Lefkowitz <gl...@twistedmatrix.com> >>>>> wrote: >>>>> >>>>> >>>>>> On Feb 16, 2016, at 4:13 PM, Noah Kantrowitz <n...@coderanger.net> wrote: >>>>>> >>>>>> As someone that handles the tooling side, I don't care how it works as >>>>>> long as there is an override for tooling a la Chef/Puppet. For stuff >>>>>> like Supervisord, it is usually the least broken path to install the >>>>>> code globally. >>>>> >>>>> I don't know if this is the right venue for this discussion, but I do >>>>> think it would be super valuable to hash this out for good. >>>>> >>>>> Why does supervisord need to be installed in the global Python >>>>> environment? >>>> >>>> Where else would it go? I wouldn't want to assume virtualenv is installed >>>> unless absolutely needed. >>> >>> This I can understand, but: in this case, it is needed ;). >>> >>>> Virtualenv is a project-centric view of the world which breaks down for >>>> stuff that is actually global like system command line tools. >>> >>> [citation needed]. In what way does it "break down"? >>> https://pypi.python.org/pypi/pipsi is a nice proof-of-concept that >>> dedicated virtualenvs are a better model for tooling than a big-ball-of-mud >>> integrated system environment that may have multiple conflicting >>> requirements. 
Unfortunately it doesn't directly address this use-case >>> because it assumes that it is doing per-user installations and not a >>> system-global one, but the same principle holds: what version of >>> `ipaddress` that supervisord wants to use is irrelevant to the tools that >>> came with your operating system, and similarly irrelevant to your >>> application. >>> >>> To be clear, what I'm proposing here is not "shove supervisord into a venv >>> with the rest of your application", but rather, "each application should >>> have its own venv". In supervisord's case, "python" is an implementation >>> detail, and therefore the public interface is /usr/bin/supervisord and >>> /usr/bin/supervisorctl, not 'import supervisord'; those should just be >>> symlinks into /usr/lib/supervisord/environment/bin/ >> >> That isn't a thing that exists currently, I would have to make it myself and >> I wouldn't expect users to assume that is how I made it work. Given the >> various flavors of user expectations and standards that exist for deploying >> Python code, global does the least harm right now. > > I don't think users who install supervisord necessarily think they ought to > be able to import supervisord. If they do expect that, they should probably > revise their expectations. > > Here, I'll make it for you. Assuming virtualenv is installed: > > python -m virtualenv /usr/lib/supervisord/environment > /usr/lib/supervisord/environment/bin/pip install supervisord > ln -vs /usr/lib/supervisord/environment/bin/supervisor* /usr/bin > > More tooling around this idiom would of course be nifty, but this is really > all it takes. > >>> In fact, given that it is security-sensitive code that runs as root, it is >>> extra important to isolate supervisord from your system environment for >>> defense in depth, so that, for example, if, due to a bug, it can be coerced >>> into importing an arbitrarily-named module, it has a restricted set and >>> won't just load anything off the system.
>> >> Sounds cute but the threats that actually helps with seem really minor. If a >> user can install stuff as root, they can probably do whatever they want >> thanks to .pth files and other terrible things. > > Once malicious code is installed in a root-executable location it's game > over; I didn't mean to imply otherwise. I'm saying that since supervisord > might potentially import anything in its site-packages dir, this
Re: [Distutils] Don't Use `sudo pip install` (was Re: [final version?] PEP 513…)
> On Feb 16, 2016, at 4:46 PM, Glyph Lefkowitz <gl...@twistedmatrix.com> wrote: > >> >> On Feb 16, 2016, at 4:33 PM, Noah Kantrowitz <n...@coderanger.net> wrote: >> >> >>> On Feb 16, 2016, at 4:27 PM, Glyph Lefkowitz <gl...@twistedmatrix.com> >>> wrote: >>> >>> >>>> On Feb 16, 2016, at 4:13 PM, Noah Kantrowitz <n...@coderanger.net> wrote: >>>> >>>> As someone that handles the tooling side, I don't care how it works as >>>> long as there is an override for tooling a la Chef/Puppet. For stuff like >>>> Supervisord, it is usually the least broken path to install the code >>>> globally. >>> >>> I don't know if this is the right venue for this discussion, but I do think >>> it would be super valuable to hash this out for good. >>> >>> Why does supervisord need to be installed in the global Python environment? >> >> Where else would it go? I wouldn't want to assume virtualenv is installed >> unless absolutely needed. > > This I can understand, but: in this case, it is needed ;). > >> Virtualenv is a project-centric view of the world which breaks down for >> stuff that is actually global like system command line tools. > > [citation needed]. In what way does it "break down"? > https://pypi.python.org/pypi/pipsi is a nice proof-of-concept that dedicated > virtualenvs are a better model for tooling than a big-ball-of-mud integrated > system environment that may have multiple conflicting requirements. > Unfortunately it doesn't directly address this use-case because it assumes > that it is doing per-user installations and not a system-global one, but the > same principle holds: what version of `ipaddress` that supervisord wants to > use is irrelevant to the tools that came with your operating system, and > similarly irrelevant to your application. > > To be clear, what I'm proposing here is not "shove supervisord into a venv > with the rest of your application", but rather, "each application should have > its own venv".
In supervisord's case, "python" is an implementation detail, > and therefore the public interface is /usr/bin/supervisord and > /usr/bin/supervisorctl, not 'import supervisord'; those should just be > symlinks into /usr/lib/supervisord/environment/bin/ That isn't a thing that exists currently, I would have to make it myself and I wouldn't expect users to assume that is how I made it work. Given the various flavors of user expectations and standards that exist for deploying Python code, global does the least harm right now. > In fact, given that it is security-sensitive code that runs as root, it is > extra important to isolate supervisord from your system environment for > defense in depth, so that, for example, if, due to a bug, it can be coerced > into importing an arbitrarily-named module, it has a restricted set and won't > just load anything off the system. Sounds cute but the threats that actually helps with seem really minor. If a user can install stuff as root, they can probably do whatever they want thanks to .pth files and other terrible things. >> Compare with `npm install -g grunt-cli`. > > npm is different because npm doesn't create top-level script binaries unless > you pass the -g option, so you need to install global tooling stuff with -g. > virtualenv is different (and, at least in this case, better). Pip also doesn't generate binstubs in /usr/bin unless you install globally so pretty much same difference. --Noah signature.asc Description: Message signed with OpenPGP using GPGMail ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
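[Editor's note] The per-application environment idiom discussed above (virtualenv plus symlinked console scripts) can be sketched with only the standard library. This is an illustrative sketch, not a supported tool: the use of a temporary directory is just so it runs anywhere, and `with_pip=False` skips the pip bootstrap for speed — a real deployment would pass `with_pip=True`, run `{env_dir}/bin/pip install supervisor`, and symlink only the console scripts into /usr/bin.

```python
import os
import tempfile
import venv

# Illustrative location; the thread's real-world example was
# /usr/lib/supervisord/environment.
env_dir = os.path.join(tempfile.mkdtemp(), "environment")

# Create an isolated environment with its own interpreter. In practice use
# with_pip=True so the tool can be installed into it.
venv.EnvBuilder(with_pip=False).create(env_dir)

# Only the console scripts would be exposed globally, e.g.:
#   ln -s {env_dir}/bin/supervisord /usr/bin/supervisord
bin_dir = os.path.join(env_dir, "bin")  # "Scripts" on Windows
print(sorted(os.listdir(bin_dir)))
```

The point of the design is that `python` stays an implementation detail: nothing in the system site-packages can shadow or break the tool's dependencies, and vice versa.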
Re: [Distutils] Don't Use `sudo pip install` (was Re: [final version?] PEP 513…)
> On Feb 16, 2016, at 4:27 PM, Glyph Lefkowitz <gl...@twistedmatrix.com> wrote: > > >> On Feb 16, 2016, at 4:13 PM, Noah Kantrowitz <n...@coderanger.net> wrote: >> >> As someone that handles the tooling side, I don't care how it works as long >> as there is an override for tooling a la Chef/Puppet. For stuff like >> Supervisord, it is usually the least broken path to install the code >> globally. > > I don't know if this is the right venue for this discussion, but I do think > it would be super valuable to hash this out for good. > > Why does supervisord need to be installed in the global Python environment? Where else would it go? I wouldn't want to assume virtualenv is installed unless absolutely needed. Virtualenv is a project-centric view of the world which breaks down for stuff that is actually global like system command line tools. Compare with `npm install -g grunt-cli`. --Noah signature.asc Description: Message signed with OpenPGP using GPGMail ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions
> On Feb 16, 2016, at 4:10 PM, Glyph Lefkowitz wrote: > >> >> On Feb 16, 2016, at 3:05 AM, Matthias Klose wrote: >> >> On 02.02.2016 02:35, Glyph Lefkowitz wrote: >>> On Feb 1, 2016, at 3:37 PM, Matthias Klose wrote: On 30.01.2016 00:29, Nathaniel Smith wrote: > Hi all, > > I think this is ready for pronouncement now -- thanks to everyone for > all their feedback over the last few weeks! I don't think so. I am biased because I'm the maintainer for Python in Debian/Ubuntu. So I would like to have some feedback from maintainers of Python in other Linux distributions (Nick, no, you're not one of these). >>> >>> Possibly, but it would be very helpful for such maintainers to limit their >>> critique to "in what scenarios will this fail for users" and not have the >>> whole peanut gallery chiming in with "well on _my_ platform we would have >>> done it _this_ way". >>> >>> I respect what you've done for Debian and Ubuntu, Matthias, and I use the >>> heck out of that work, but honestly this whole message just comes across as >>> sour grapes that someone didn't pick a super-old Debian instead of a >>> super-old Red Hat. I don't think it's promoting any progress. >> >> You may call this sour grapes, but in the light of people installing >> these wheels to replace/upgrade system installed eggs, it becomes an issue. >> It's fine to use such wheels in a virtual environment, however people tell >> users to use these wheels to replace system installed packages, distros will >> have a problem identifying issues. > > I am 100% on board with telling people "don't use `sudo pip install`". > Frankly I have been telling the pip developers to just break this for years > (see https://pip2014.com, which, much to my chagrin, still exists); `sudo pip > install` should just exit immediately with an error; to the extent that > packagers need it, the only invocation that should work should be `sudo pip > install --i-am-building-an-operating-system`.
As someone that handles the tooling side, I don't care how it works as long as there is an override for tooling a la Chef/Puppet. For stuff like Supervisord, it is usually the least broken path to install the code globally. --Noah signature.asc Description: Message signed with OpenPGP using GPGMail ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] New Design Landed in Warehouse
> On Nov 20, 2015, at 11:46 PM, Antoine Pitrou wrote: > > On Fri, 20 Nov 2015 11:40:23 -0500 > Randy Syring > wrote: > >> I'm glad to see progress being made. Thanks for the time and effort >> that is being put into this. >> >> After taking a look, the one thing that really stuck out to me in a >> negative way was how much screen space the header is using up. I've >> created an issue for discussion here: > > Agreed. The look of the average project page is a bit depressing: > https://warehouse.python.org/project/six/ > > Useful content starts only 2/3 down the first page. The large "pip > install six" snippet probably doesn't deserve being that prominent > (or being there at all), and is ironically redundant with the "how do > I install this?" link just below. I think you have a highly specialized view of what is "useful content" compared to the average user. --Noah signature.asc Description: Message signed with OpenPGP using GPGMail ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] PyPI is a sick sick hoarder
On May 15, 2015, at 9:19 PM, Donald Stufft don...@stufft.io wrote: On May 15, 2015, at 2:57 PM, Robert Collins robe...@robertcollins.net wrote: So, I am working on pip issue 988: pip doesn't resolve packages at all. This is O(packages^alternatives_per_package): if you are resolving 10 packages with 10 versions each, there are approximately 10^10 or 10G combinations. 10 packages with 100 versions each - 10^100. So - it's going to depend pretty heavily on some good heuristics in whatever final algorithm makes its way in, but the problem is exacerbated by PyPI's nature. Most Linux (all that I'm aware of) distributions have at most 5 versions of a package to consider at any time - installed (might be None), current release, current release security updates, new release being upgraded to, new release being upgraded to's security updates. And their common worst case is actually 2 versions: installed==current release and one new release present. They map alternatives out into separate packages (e.g. when an older soname is deliberately kept across an ABI incompatibility, you end up with 2 packages, not 2 versions of one package). So when comparing pip's challenge to apt's: apt has ~20-30K packages, with alternatives ~= 2, or pip has ~60K packages, with alternatives ~= 5.7 (I asked dstufft) Scaling the number of packages is relatively easy; scaling the number of alternatives is harder. Even 300 packages (the dependency tree for openstack) is ~2.4T combinations to probe. I wonder if it makes sense to give some back-pressure to people, or at the very least encourage them to remove distributions that: - they don't support anymore - have security holes If folk consider PyPI a sort of historical archive then perhaps we could have a feature to select 'supported' versions by the author, and allow a query parameter to ask for all the versions. There have been a handful of projects which would only keep the latest N versions uploaded to PyPI.
I know this primarily because it has caused people a decent amount of pain over time. It’s common for deployments people have to use a requirements.txt file like ``foo==1.0`` and to just continue to pull from PyPI. Deleting the old files breaks anyone doing that, so it would require either having people bundle their deps in their repositories or some way to get at those old versions. Personally I think that we shouldn’t go deleting the old versions or encouraging people to do that. +1 for this. While I appreciate why Linux distros purge old versions, it is absolutely hellish for reproducibility. If you are looking for prior art, check out the Molinillo project (https://github.com/CocoaPods/Molinillo) used by Bundler and CocoaPods. It is not as complex as the Solve gem used in Chef but offers a good balance of performance in satisfying constraints and false negatives on solution failures. --Noah signature.asc Description: Message signed with OpenPGP using GPGMail ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
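[Editor's note] The combinatorics Robert describes are easy to demonstrate with a toy enumeration. This is not pip's resolver (nor Molinillo's backtracking algorithm); the index, version numbers, and constraints below are invented purely to show why the naive search space grows as alternatives ** packages.

```python
from itertools import product

# Invented toy index: package name -> available versions, newest first.
index = {"a": [3, 2, 1], "b": [3, 2, 1], "c": [3, 2, 1]}

# Invented constraints gathered from requirements: allowed version sets.
constraints = {"a": {2, 3}, "b": {1}}

def brute_force_resolve(index, constraints):
    """Try every combination of candidate versions until one satisfies
    all constraints -- the naive search whose size is the product of the
    per-package version counts."""
    names = sorted(index)
    for combo in product(*(index[name] for name in names)):
        picked = dict(zip(names, combo))
        if all(picked[n] in ok for n, ok in constraints.items()):
            return picked
    return None

print(brute_force_resolve(index, constraints))  # {'a': 3, 'b': 1, 'c': 3}
# 10 packages with 10 versions each already means 10**10 combinations to
# probe in the worst case:
print(10 ** 10)  # 10000000000
```

Good heuristics (version ordering, early pruning, backtracking only on conflict) matter precisely because the worst case is hopeless to enumerate.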
Re: [Distutils] Immutable Files on PyPI
On Sep 28, 2014, at 12:31 PM, Donald Stufft donald.stu...@rackspace.com wrote: Hello All! I'd like to discuss the idea of moving PyPI to having immutable files. This would mean that once you publish a particular file you can never reupload that file again with different contents. This would still allow deleting the file or reuploading it if the checksums match what was there prior. +1. Would vastly simplify the infra side! --Noah signature.asc Description: Message signed with OpenPGP using GPGMail ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
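[Editor's note] The rule Donald proposes — a published filename may only be re-uploaded if its checksum matches what was there before — is a one-function invariant. This is an illustrative sketch, not Warehouse's actual implementation; the function name and digest choice (SHA-256) are assumptions for the example.

```python
import hashlib

def may_upload(published_sha256, new_bytes):
    """Immutable-file invariant: a filename may be (re)uploaded only if it
    was never published before, or the new contents hash identically to
    the previously published file."""
    if published_sha256 is None:
        return True  # first upload of this filename
    return hashlib.sha256(new_bytes).hexdigest() == published_sha256

original = b"wheel contents v1"
digest = hashlib.sha256(original).hexdigest()
print(may_upload(None, original))       # True: first upload
print(may_upload(digest, original))     # True: byte-identical re-upload
print(may_upload(digest, b"tampered"))  # False: contents changed
```

Deletion stays allowed under the proposal; only silent replacement of published bytes is ruled out, which is what simplifies mirroring and caching on the infrastructure side.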
Re: [Distutils] PEP draft on PyPI/pip package signing
To be clear, this adds literally no security. It adds some developer experience improvement in that you could do releases securely even while PyPI is unreachable. This has been repeatedly stated as cute but not a huge priority for PyPI. Our uptime has been good enough for the last several years that this doesn't address enough of a niche to be worth the _massive_ increase in complexity. Any solution like this that involves both online keys and an RBAC/trust list distributed from PyPI will share these properties. Adding offline keys puts you back in the realm of TUF, and does potentially add some security benefits, though they are way way down the long tail. Overall strong -1. --Noah On Jul 28, 2014, at 8:01 AM, Giovanni Bajo ra...@develer.com wrote: Hello, on March 2013, on the now-closed catalog-sig mailing-list, I submitted a proposal for fixing several security problems in PyPI, pip and distutils[1]. Some of my proposals were obvious things like downloading packages through SSL, which was already in progress of being designed and implemented. Others, like GPG package signing, were discussed for several days/weeks, but ended up in discussion paralysis because of the upcoming TUF framework. 16 months later, we still don’t have a deployed solution for letting people install signed packages. I see that TUF is evolving, and there is now a GitHub project with documentation, but I am very worried about the implementation timeline.
I was also pointed to PEP458, which I tried to read and found it very confusing; the PEP assumes that the reader must be familiar with the TUF academic paper (which I always found quite convoluted per-se), and goes with an analysis of integration of TUF with PyPI; to the best of my understanding, the PEP does not provide a clear answer to practical questions like: * what a maintainer is supposed to do to submit a new signed package * how can different maintainers signal that they both maintain the same package * how the user interface of PyPI will change * what security maintenance will need to be regularly performed by the PyPI ops I’m not saying that the TUF team has no answers to these questions (in fact, I’m 100% sure of the opposite); I’m saying that the PEP doesn’t clearly provide such answers. I think the PEP is very complicated to read as it goes into integration details between the TUF architecture and PyPI, and thus it is very complicated to review and accept. I would love the PEP to be updated to provide an overview on the *practical* effects of the integration of TUF within PyPI/pip, that must be fully readable to somebody with zero previous knowledge of TUF. As suggested by Richard Jones during EuroPython, I isolated the package signing sections from my original document, evolved them a little bit, and rewrote them in PEP format: https://gist.github.com/rasky/bd91cf01f72bcc931000 To the best of my recollection, in the previous review round, there were no critical issues found in the design. It might well be that TUF provides more security in some of the described attack scenarios; on the other hand, my proposal: * is in line with the security of (e.g.)
existing Linux distros * is very simple to review, analyze and discuss for anybody with even a basic understanding of security * is much simpler than TUF * is a clear step forward from the current situation * cover areas not covered by PEP458 (e.g.: increasing security of account management on PyPI) * can be executed in 2-3 months (to the alpha / pre-review stage), and I volunteer for the execution. I thus solicit a second round of review of my proposal; if you want me to upload to Google Docs for easier of commenting, I can do that as well. I would love to get the PEP to its final form and then ask for a pronouncement. I apologize in advance if I made technical mistakes in the PEP format/structure; it is my first PEP. [1] See here: https://docs.google.com/a/develer.com/document/d/1DgQdDCZY5LiTY5mvfxVVE4MTWiaqIGccK3QCUI8np4k/edit# -- Giovanni Bajo :: ra...@develer.com Develer S.r.l. :: http://www.develer.com My Blog: http://giovanni.bajo.it ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig signature.asc Description: Message signed with OpenPGP using GPGMail ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] PEP draft on PyPI/pip package signing
The critical path on the current system is you request the package index or package file itself from https://pypi.python.org and assert that it is correct because the certificate verifies. In the proposed system the critical path is you request the trust file from https://pypi.python.org and assert that it is correct because the certificate verifies. As you might note, these are functionally equivalent. If you can break one, you can break the other. --Noah On Jul 28, 2014, at 12:26 PM, Paul Moore p.f.mo...@gmail.com wrote: On 28 July 2014 20:19, Noah Kantrowitz n...@coderanger.net wrote: To be clear, this adds literally no security. Really? For my education, could you clarify? Is this because we can assume (with https) that every step between the developer uploading to PyPI and the user downloading to his local PC is secured? Paul signature.asc Description: Message signed with OpenPGP using GPGMail ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] PyPI lost IPv6 support?
Both, supporting IPv6 is not a priority and so no extra work will be done for it. This is true across the board for all PSF services. --Noah On Jun 10, 2014, at 2:40 AM, Wichert Akkerman wich...@wiggy.net wrote: I just noticed that my uploads to PyPI are now using IPv4 instead of IPv6. Looking closer it looks like PyPI is not reachable over IPv6 at all anymore, which is somewhat disappointing. Was dropping IPv6 a deliberate choice, or an unfortunate side-effect of switching to Fastly’s CDN? Regards, Wichert. ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig signature.asc Description: Message signed with OpenPGP using GPGMail ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] pypi suggestion
Step one, define popular in numeric terms. --Noah On Jun 2, 2014, at 2:37 PM, John Smith pronghornpar...@yahoo.com.dmarc.invalid wrote: pypi really needs a way to sort packages by popularity. Sorting by other factors, such as author, code size, pure python/compiled, etc would be a bonus, But something to put most important/popular, etc. packages first would be great. I apologize if it's there and I just don't see it :) ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig signature.asc Description: Message signed with OpenPGP using GPGMail ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] Support for multiple PyPI publishing identities is rather convoluted
On Jun 1, 2014, at 8:02 AM, Paul Sokolovsky pmis...@gmail.com wrote: Hello, My usecase is: I work on different projects in parallel, with different roles. For example, I work on community project and publish packages on behalf of it, and I publish personal packages too. Obviously, I want to have 2 separate PyPI publishing accounts for those roles. Also, I don't want to cleanup after dumb mistakes, so want to explicitly specify an identity to use for each publishing operation, and get an error if I don't. PyPI has an ACL system to make this unnecessary. You can use a single account, and for the community project just grant multiple people access. --Noah signature.asc Description: Message signed with OpenPGP using GPGMail ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] Support for multiple PyPI publishing identities is rather convoluted
On Jun 1, 2014, at 12:30 PM, Paul Sokolovsky pmis...@gmail.com wrote: Hello, On Sun, 1 Jun 2014 12:10:01 -0700 Noah Kantrowitz n...@coderanger.net wrote: On Jun 1, 2014, at 8:02 AM, Paul Sokolovsky pmis...@gmail.com wrote: Hello, My usecase is: I work on different projects in parallel, with different roles. For example, I work on community project and publish packages on behalf of it, and I publish personal packages too. Obviously, I want to have 2 separate PyPI publishing accounts for those roles. Also, I don't want to cleanup after dumb mistakes, so want to explicitly specify an identity to use for each publishing operation, and get an error if I don't. PyPI has an ACL system to make this unnecessary. You can use a single account, and for the community project just grant multiple people access. Unnecessary what exactly? On my packages' PyPI pages, I want to have Package Index Owner: pfalcon, and on other packages' pages, I don't want to have pfalcon (and want to have another specific username). Having it otherwise would be misrepresentation of package origin. If single account can do that (that would be a surprise), I'd appreciate a link to materials I can read up on the matter. If you didn't want to show up as the owner you would need to use the other account once to register it, but after that just grant your normal user access and use that for day-to-day releases. --Noah signature.asc Description: Message signed with OpenPGP using GPGMail ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] Need for respect (was: PEP 438, pip and --allow-external)
On May 14, 2014, at 12:44 PM, M.-A. Lemburg m...@egenix.com wrote: PyPI is still mainly the Python registry for mapping package names to URLs and descriptions. Sorry, going to have to stop you here. This, and all your conclusions based on this assumption, are flat out incorrect. You are far far far in the minority of people that think this is what PyPI is. It was this at one point, but few old-timers are still around to remember those days and new users have very different expectations, driven by Linux package servers/systems as well as tools like rubygems and cpan. --Noah signature.asc Description: Message signed with OpenPGP using GPGMail ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] Need for respect (was: PEP 438, pip and --allow-external)
On May 14, 2014, at 1:26 PM, M.-A. Lemburg m...@egenix.com wrote: On 14.05.2014 21:48, Noah Kantrowitz wrote: On May 14, 2014, at 12:44 PM, M.-A. Lemburg m...@egenix.com wrote: PyPI is still mainly the Python registry for mapping package names to URLs and descriptions. Sorry, going to have to stop you here. This, and all your conclusions based on this assumption, are flat out incorrect. You are far far far in the minority of people that think this is what PyPI is. It was this at one point, but few old-timers are still around to remember those days and new users have very different expectations driven by the cites linux package servers/systems as well as tools like rubygems and cpan. Noah, please reread the subject line and the message that started this thread. If we want to have a useful discussion, calling someone's conclusion incorrect is not helpful. I think it is helpful, as you are working under different initial conditions than most others, and as such it is very hard to compare conclusions. If you'd read my reply to the end, you'd have noticed that my main point is that the users want easy installation of packages and don't care where these are hosted. This is why installers are attractive to users and this is also why so many people enjoy using them. However, such a requirement does not imply that all packages have to be hosted in a single place, with all the implications that arise from such a setup. Coming back to PyPI: Its main purpose is having a central place to register, search for and find packages. It doesn't matter where the distribution files are hosted, as long as the installers can find them. I understand you think that is the purpose of PyPI, but I'm trying to tell you that the people that work on PyPI and pip do not share this opinion, and as such it can be considered incorrect. I would urge you to please rebase your goals on what that actual development plans for PyPI are, which very much include package hosting. 
--Noah signature.asc Description: Message signed with OpenPGP using GPGMail ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] pip install -e . vs. python setup.py develop
On Apr 11, 2014, at 1:29 PM, Chris Withers ch...@simplistix.co.uk wrote: On 07/04/2014 04:05, Noah Kantrowitz wrote: You should recommend using pip for it, mostly because as you said that will work even with packages that don't use setuptools :-) It also is required when doing a develop install with extras, though that requires a slightly more verbose syntax due to a bug in pip. What's the syntax? pip install -e file:///path/to/thing[one,two] --Noah signature.asc Description: Message signed with OpenPGP using GPGMail ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] pip install -e . vs. python setup.py develop
You should recommend using pip for it, mostly because, as you said, that will work even with packages that don't use setuptools :-) It is also required when doing a develop install with extras, though that requires a slightly more verbose syntax due to a bug in pip.

--Noah

On Apr 6, 2014, at 8:01 PM, Asheesh Laroia li...@asheesh.org wrote:
> Hi lovely distutils people,
>
> I have a question, as I prepare for my "Python packaging simplified, for end users, app developers, and open source contributors" talk. I'm sure I'll have more; I'll probably end up making a few threads about them, since they'll come to me at random times.
>
> For years, I've been recommending:
>
>     $ python setup.py develop
>
> as a standard way to make something hackable and available in a virtualenv. I notice that python setup.py develop --user exists, which is great, as it means that you don't even need to bother with the virtualenv. Having said that, I also notice that:
>
>     $ pip install -e .
>
> does the same thing. Should I be recommending one over the other? I'm going to lean toward pip install -e ., even though I haven't been using it much personally, as it makes the talk more consistent -- I would then be able to say, "Always use pip for doing your installing." But I thought I'd ask about this. It seems that pip install -e . is the same as python setup.py develop, except that pip runs setup.py with setuptools available, which addresses a problem where if the maintainer of a package's setup.py file doesn't `from setuptools import setup`, then python setup.py develop won't work, whereas pip install -e . will always work. Unless I'm mistaken.
>
> So the question is -- can someone sanity-check the above? I'm hoping to pretend to be an outsider for the purpose of empathizing with the audience, and yet be enough of an insider to ask people on this list if what I'm saying is consistent with Modern PyPA Doctrine (which generally I'm happy to promote).
>
> -- Asheesh.
Re: [Distutils] setup.py should use if __name__ == '__main__', right?
setup.py is not intended to be importable, so it has no import time. Pretty sure I've never seen this pattern used in a setup.py, nor would I think it has much semantic utility.

--Noah

On Apr 6, 2014, at 8:04 PM, Asheesh Laroia li...@asheesh.org wrote:
> Hi nice distutils/PyPA people,
>
> I had a question that probably is worth showing up in the archives. Namely: it seems to me like bizarre bad form for the setup.py file to execute what amounts to a main() function at import time. I presume this is just some kind of historical accident, and if the early authors of the distutils docs were careful, they'd have recommended:
>
>     from distutils.core import setup
>
>     if __name__ == '__main__':
>         setup()  # FIXME args
>
> rather than just:
>
>     from distutils.core import setup
>
>     setup()  # FIXME args
>
> Is that an accurate assessment? If so, that's great, because I plan to remark on this bemusedly in my talk. If there is a reason it is the way it is, however, then I will avoid making a joke that is wrong in an important way.
>
> -- Asheesh.
Re: [Distutils] Pycon
On Mar 28, 2014, at 12:06 PM, Daniel Holth dho...@gmail.com wrote:
> Who is going to pycon?

I will be there. Attending and presenting a talk that can be tl;dr'd as a summary of the last 18 months of this list.

--Noah
Re: [Distutils] PyPI Rate Limiting
On Feb 10, 2014, at 1:48 AM, Chris Jerdonek chris.jerdo...@gmail.com wrote:
> On Sun, Feb 9, 2014 at 12:16 PM, Noah Kantrowitz n...@coderanger.net wrote:
>> On Feb 9, 2014, at 1:13 AM, Robert Collins robe...@robertcollins.net wrote:
>>> On 9 February 2014 19:28, Noah Kantrowitz n...@coderanger.net wrote:
>>>> On Feb 8, 2014, at 6:25 PM, Robert Collins robe...@robertcollins.net wrote:
>>>>> 5/s sounds really low - if the RPCs take less than 200ms to answer (and I sure hope they do), a single-threaded mirroring client (with low latency to PyPI's servers // pipelined requests) can easily exceed it. Most folk I know writing API servers aim for response times in the single to low tens of milliseconds. What is the 95th percentile for PyPI to answer these problematic APIs?
>>>> If you are making lots of sequential requests, you should be putting a sleep in there. "As fast as possible" isn't a design goal; good service for all clients is.
>>> As fast as possible (on the server side) and good service for all clients are very tightly correlated (and some would say there is a causative relationship, in fact). On the client side, I totally support limiting concurrency, but I've yet to see a convincing explanation for rate limiting already-serialised requests that doesn't boil down to "assume the server is badly written". Note: I'm not assuming - or implying - that about PyPI.
>> I'm not sure what point you are trying to make. The server wouldn't artificially slow down requests; it (well, nginx) would just track requests and send 503s if limits are exceeded. Requests still complete as fast as possible, and we can ensure one client doesn't hog all the server resources.
> I think he's saying that, given that the problems are being caused by clients configured for high parallelism, why not choose a rate-limiting method that won't impact clients accessing it in a single-threaded fashion? It's a reasonable question. Also, if the server isn't artificially slowing down requests, what does "Client requests up to the burst limit [of 10 requests] will be delayed to maintain a 5 req/s maximum" mean?

Any requests beyond the rate limits will get an HTTP 503 with an empty body.

--Noah
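The nginx-side mechanism Noah describes — track per-IP request rates, reject excess with an error rather than queueing — can be sketched roughly like this (zone name, sizes, and upstream are illustrative, not the actual PyPI configuration):

```nginx
# Hypothetical sketch: allow 5 req/s per client IP with a burst of 10;
# anything beyond that is answered immediately with a 503.
limit_req_zone $binary_remote_addr zone=pypi_limit:10m rate=5r/s;

server {
    listen 80;

    location / {
        # nodelay: requests within the burst are served at full speed rather
        # than being smoothed out; requests beyond the burst are rejected.
        limit_req zone=pypi_limit burst=10 nodelay;
        limit_req_status 503;
        proxy_pass http://pypi_backend;
    }
}
```

Without `nodelay`, nginx would instead delay burst requests to hold the 5 req/s average — which is the "delayed to maintain a 5 req/s maximum" wording Chris quotes above.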
Re: [Distutils] PyPI Rate Limiting
On Feb 9, 2014, at 1:13 AM, Robert Collins robe...@robertcollins.net wrote:
> On 9 February 2014 19:28, Noah Kantrowitz n...@coderanger.net wrote:
>> On Feb 8, 2014, at 6:25 PM, Robert Collins robe...@robertcollins.net wrote:
>>> 5/s sounds really low - if the RPCs take less than 200ms to answer (and I sure hope they do), a single-threaded mirroring client (with low latency to PyPI's servers // pipelined requests) can easily exceed it. Most folk I know writing API servers aim for response times in the single to low tens of milliseconds. What is the 95th percentile for PyPI to answer these problematic APIs?
>> If you are making lots of sequential requests, you should be putting a sleep in there. "As fast as possible" isn't a design goal; good service for all clients is.
> As fast as possible (on the server side) and good service for all clients are very tightly correlated (and some would say there is a causative relationship, in fact). On the client side, I totally support limiting concurrency, but I've yet to see a convincing explanation for rate limiting already-serialised requests that doesn't boil down to "assume the server is badly written". Note: I'm not assuming - or implying - that about PyPI.

I'm not sure what point you are trying to make. The server wouldn't artificially slow down requests; it (well, nginx) would just track requests and send 503s if limits are exceeded. Requests still complete as fast as possible, and we can ensure one client doesn't hog all the server resources.

--Noah
Re: [Distutils] PyPI Rate Limiting
On Feb 8, 2014, at 6:25 PM, Robert Collins robe...@robertcollins.net wrote:
> On 9 February 2014 11:15, Ernest W. Durbin III ewdur...@gmail.com wrote:
>> Since the launch of the new infrastructure for PyPI two weeks ago, I've been monitoring overall performance and reliability of PyPI for browsers, uploads, installers, and mirrors. The initial rates will be limited to 5 req/s per IP with bursts of 10 requests allowed. Client requests up to the burst limit will be delayed to maintain a 5 req/s maximum. Any requests past the 10-request burst will receive an HTTP 429 response code per RFC 6585.
> 5/s sounds really low - if the RPCs take less than 200ms to answer (and I sure hope they do), a single-threaded mirroring client (with low latency to PyPI's servers // pipelined requests) can easily exceed it. Most folk I know writing API servers aim for response times in the single to low tens of milliseconds. What is the 95th percentile for PyPI to answer these problematic APIs?

If you are making lots of sequential requests, you should be putting a sleep in there. "As fast as possible" isn't a design goal; good service for all clients is.

--Noah
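Noah's "put a sleep in there" advice amounts to client-side pacing plus backoff when the server answers 429. A minimal sketch (the `fetch` callable and its `(status, body)` return shape are hypothetical, not real mirror-client code):

```python
import time

def paced_fetch(fetch, urls, interval=0.2, backoff=1.0, max_retries=3):
    """Call fetch(url) -> (status, body) for each url, pacing requests to at
    most one per `interval` seconds and backing off exponentially on HTTP 429.
    interval=0.2 corresponds to the 5 req/s per-IP limit discussed above."""
    results = []
    last_request = 0.0
    for url in urls:
        for attempt in range(max_retries + 1):
            # Sleep just enough to respect the rate limit between requests.
            wait = interval - (time.monotonic() - last_request)
            if wait > 0:
                time.sleep(wait)
            last_request = time.monotonic()
            status, body = fetch(url)
            if status != 429:
                results.append(body)
                break
            # Rate limited: back off exponentially, then retry the same URL.
            time.sleep(backoff * 2 ** attempt)
    return results
```

A single-threaded client built this way never exceeds the limit in steady state, and degrades gracefully if the server tightens its policy.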
Re: [Distutils] PEX at Twitter (re: PEX - Twitter's multi-platform executable archive format for Python)
On Feb 1, 2014, at 12:43 AM, Nick Coghlan ncogh...@gmail.com wrote:
> On 1 February 2014 18:23, Vinay Sajip vinay_sa...@yahoo.co.uk wrote:
>> On Fri, 31/1/14, Brian Wickman wick...@gmail.com wrote:
>>> There are myriad other practical reasons. Here are some:
>> Thanks for taking the time to respond with the details - they are good data points to think about!
>>> Lastly, there are social reasons. It's just hard to convince most engineers to use things like pkg_resources or pkgutil to manipulate resources when for them the status quo is just using __file__. Bizarrely, the social challenges are just as hard as the abovementioned technical challenges.
>> I agree it's bizarre, but sadly it's not surprising. People get used to certain ways of doing things, and a certain kind of collective myopia develops when it comes to looking at different ways of doing things. Having worked with fairly diverse systems in my time, ISTM that sections of the Python community have this myopia too - for example, the Java hatred and PEP 8 zealotry that you see here and there. One of the things that's puzzled me, for example, is why people think it's reasonable or even necessary to have copies of pip and setuptools in every virtual environment - often the same people who will tell you that your code isn't DRY enough! It's certainly not a technical requirement, yet one of the reasons why PEP 405 venvs aren't that popular is that pip and setuptools aren't automatically put in there. It's a social issue - it's been decided that rather than exploring a technical approach to addressing any issue with installing into venvs, it's better to bundle pip and setuptools with Python 3.4, since that will seemingly be easier for people to swallow :-)
> FWIW, installing into a venv from outside it works fine (that's how ensurepip works in 3.4). However, it's substantially *harder* to explain to people how to use it correctly that way. In theory you could change activation so that it also affected the default install locations, but the advantage of just having them installed per venv is that you're relying more on the builtin Python path machinery rather than adding something new. So while it's wasteful of disk space and means needing to upgrade them in every virtualenv, it does actually categorically eliminate many potential sources of bugs. Doing things the way pip and virtualenv do them also meant there was a whole pile of design work that *didn't need to be done* to get a functional system up and running. Avoiding work by leveraging existing capabilities is a time-honoured engineering tradition, even when the simple way isn't the most elegant way. Consider also the fact that we had full virtual machines long before we had usable Linux containers: full isolation is actually *easier* than partial isolation, because there are fewer places for things to go wrong, and less integration work to do in the first place.
>
> That said, something I mentioned to the OpenStack folks a while ago (and I think on this list, but potentially not) is that I have now realised the much-reviled (for good reason) *.pth files actually have a legitimate use case in allowing API-compatible versions of packages to be shared between multiple virtual environments - you can trade reduced isolation for easier upgrades on systems containing multiple virtual environments by adding a suitable *.pth file to the venv rather than the package itself. While there's currently no convenient tooling around that, they're a feature CPython has supported for as long as I can remember, so tools built on that idea would comfortably work on all commonly supported Python versions.

In all but a tiny number of cases, you could use a symlink for this.
Much less magic :-)

--Noah
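The *.pth mechanism Nick refers to is plain stdlib machinery, and can be demonstrated self-contained. In this sketch, temporary directories stand in for a shared package checkout and a venv's site-packages directory (the names are illustrative only):

```python
import os
import site
import sys
import tempfile

# Stand-ins: `shared` is a directory of packages shared across environments,
# `sitedir` plays the role of one venv's site-packages directory.
shared = tempfile.mkdtemp(prefix="shared-pkgs-")
sitedir = tempfile.mkdtemp(prefix="venv-site-")

# A .pth file lists one directory per line; Python appends each listed
# directory to sys.path when the containing site directory is processed.
with open(os.path.join(sitedir, "shared.pth"), "w") as f:
    f.write(shared + "\n")

# At interpreter startup, a venv's site-packages is processed like this:
site.addsitedir(sitedir)

print(shared in sys.path)  # True
```

Swapping the target of the shared directory (or, as Noah suggests, a symlink) upgrades every environment pointing at it in one step, at the cost of isolation.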
Re: [Distutils] PEX at Twitter (re: PEX - Twitter's multi-platform executable archive format for Python)
On Feb 1, 2014, at 1:36 AM, Vinay Sajip vinay_sa...@yahoo.co.uk wrote:
> On Sat, 1/2/14, Noah Kantrowitz n...@coderanger.net wrote:
>> In all but a tiny number of cases, you could use a symlink for this. Much less magic :-)
> That's "POSIX is all there is" myopia, right there. While recent versions of Windows have symlinks more like POSIX symlinks, XP only has a stunted version called reparse points or junction points which are not really fit for purpose. I think you'll find that XP environments are found in rather more than a tiny number of cases, and even though Microsoft has end-of-lifed XP in terms of support, I fear it'll be around for a while yet.

Junctions on Windows are actually more flexible than POSIX symlinks, and I have in fact used them for Python packages before when doing Django dev on Windows XP.

--Noah
Re: [Distutils] Using Wheel with zipimport
On Jan 30, 2014, at 1:09 AM, Vinay Sajip vinay_sa...@yahoo.co.uk wrote:
> On Thu, 30/1/14, Ralf Gommers ralf.gomm...@gmail.com wrote:
>> Also end user. If, as a user, I want to use in-place builds and PYTHONPATH instead of virtualenvs for whatever reason, that should be supported. Setuptools inserting stuff into sys.path that comes before PYTHONPATH entries is quite annoying.
> If tool developers want to offer end users the option to control how they work with sys.path, that's up to them. For example, once the details are worked out, the distil tool will probably get a --mountable option for the package command, which will write metadata into the built wheel indicating whether the wheel is addable to sys.path or not (based on the builder's knowledge of the wheel's contents). Distlib, when asked to mount a wheel (add it to sys.path), will check the mountability metadata and honour the wheel publisher's intent.

For everyone following along, the PEP has been updated: http://hg.python.org/peps/rev/26983acc9c11

If anyone has comments on the new text, you can find it at http://www.python.org/dev/peps/pep-0427/#is-it-possible-to-import-python-code-directly-from-a-wheel-file

I hope we can discuss further changes as a group before they are pushed live.

--Noah
Re: [Distutils] wheels on sys.path clarification (reboot)
On Jan 29, 2014, at 2:59 PM, Nick Coghlan ncogh...@gmail.com wrote:
> But that's what I'm saying: there are only three ways to break this behaviour:
>
> 1. Changing the wheel format in such a way that we drop support for being able to install simple wheel files without a specialised installer
> 2. Breaking zipimport itself to explicitly disallow wheel files
> 3. Switching to a zipimport-incompatible compression scheme
>
> The first two aren't going to happen, which leaves only the third. You appear to be saying that you would like to reserve the right to switch to a zipimport-incompatible compression format in future versions of the wheel spec. If you're *not* saying that, then what independent design decision is there to be discussed that makes the new FAQ anything other than a clarification of the status quo? The rest of the behaviour is inherent in the "no specialised installer needed" feature. People saying "I didn't realise that the current design implied zipimport compatibility" is *why* I added the clarification, so it's not a compelling argument in convincing me that the clarification wasn't needed or is inappropriate.

If you are going to document this, and it is not going to be explicitly supported by the spec (it isn't), the _only_ logical thing is to document that this is undefined behavior: while it works now, people should not depend on it. Under no circumstance should we document this as "well, it works right now" without guidance about the fact that it isn't part of the spec and is _not_ a candidate for future design decisions. If someone would like to propose amending the spec, that can happen separately, but as it stands right now this is nothing but convenient, undefined behavior.

--Noah
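The behaviour being debated rests on zipimport treating any zip archive on sys.path as importable, and a pure-Python wheel is just such a zip. A self-contained demo with a stand-in archive (not a real wheel, and — per the thread — not something the spec promises):

```python
import importlib
import os
import sys
import tempfile
import zipfile

# Build a tiny zip archive containing one pure-Python module. A real .whl is
# the same kind of archive, with extra *.dist-info metadata.
tmpdir = tempfile.mkdtemp()
archive = os.path.join(tmpdir, "demo-0.1-py3-none-any.whl")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("demo_mod.py", "ANSWER = 42\n")

# zipimport kicks in for any zip file on sys.path, regardless of extension.
sys.path.insert(0, archive)
demo_mod = importlib.import_module("demo_mod")

print(demo_mod.ANSWER)  # 42
```

This is exactly the "works today because of X, Y, and Z" convenience Noah describes: it falls over for wheels containing C extensions, which zipimport cannot load.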
Re: [Distutils] PEP 427
On Jan 29, 2014, at 9:50 AM, Evgeny Sazhin eug...@sazhin.us wrote:
> On Wed, Jan 29, 2014 at 9:11 AM, Vinay Sajip vinay_sa...@yahoo.co.uk wrote:
>>> Does it mean that it actually makes sense to look into that direction and make wheel usage closer to jar?
>> There is a parallel discussion going on, with the title "Using Wheel with zipimport", which is relevant to this question and other questions you raised (e.g. about supporting C extensions/pure-Python modules).
> I read all of it and got a bit lost in between the distil API and PEP process discussion ;)
>>> I have no knowledge about the C extensions scope, but I feel like it might be of less importance than pure-Python packaging issues? Am I wrong?
>> A lot of Python users depend on C extensions - and while it is a subset of all Python users, it is a large (and important) subset. Example: any usage of Python in numerical analysis or scientific applications involves use of C extensions. Regards, Vinay Sajip
> I can see that it might be quite beneficial to have virtualenv and pip installing wheels locally for development needs, so here is what I was able to come up with so far. I have one folder on NFS where all Python developed stuff should be *deployed* - pythonlib. It is impossible to use pip or virtualenv there, so I'm bound to artifacts. The only way something can appear there is by using the release program that knows how to put artifacts in specified locations. Currently most of the stuff there is .py modules and a few eggs (some are executable), but this setup allows neither sane dependency management nor code reuse. I actually don't like the idea of specifying dependencies in the code via sys.path; I think a sys.path resolved from requirements.txt is a much better solution. So I'm looking for a solution that would allow using the same artifact for everything (like a jar), so it can guarantee that the same subset of code that was tested goes to production and is used in dev.
>
> Currently I'm leaning towards using pip's capability to work with flat folders via --find-links, so I can deploy wheels to the pythonlib and then reuse them in the development environment. But in this setup, how do I make my program executable from the pythonlib location? I think I should create some smart runner script that would use pip's dependency resolution, create the necessary sys.path based on the wheel's requirements.txt, and then my program wheel should have an entry point like __main__.py. As Nick pointed out, the wheel is a superset of the egg - so I assume wheels can be executable, correct? How do I achieve that?

Wheel is a package format. Packages are for transmitting and installing bits. If you want to make some kind of self-unpacking executable, please do it with something built for it. makeself is an excellent choice for these.

--Noah
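Separately from the wheel-format question, the `__main__.py` entry-point idea in Evgeny's message does work for plain zip archives: the interpreter will directly execute a zip whose top level contains `__main__.py` (the stdlib `zipapp` module, added later in Python 3.5, automates building these). A sketch with hypothetical file names:

```python
import os
import subprocess
import sys
import tempfile
import zipfile

# Build an archive whose top level contains __main__.py; Python can execute
# such an archive directly, e.g. `python app.pyz`.
tmpdir = tempfile.mkdtemp()
app = os.path.join(tmpdir, "app.pyz")
with zipfile.ZipFile(app, "w") as zf:
    zf.writestr("__main__.py", "print('hello from the archive')")

result = subprocess.run([sys.executable, app], capture_output=True, text=True)
print(result.stdout.strip())  # hello from the archive
```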
Re: [Distutils] wheels on sys.path clarification (reboot)
On Jan 29, 2014, at 8:50 PM, Tres Seaver tsea...@palladion.com wrote:
> On 01/29/2014 06:55 PM, Noah Kantrowitz wrote:
>> If you are going to document this, and it is not going to be explicitly supported by the spec (it isn't), the _only_ logical thing is to document that this is undefined behavior and, while it works now, people should not depend on it. Under no circumstance should we document this as "well, it works right now" without guidance about the fact that it isn't part of the spec and is _not_ a candidate for future design decisions. If someone would like to propose amending the spec, that can happen separately, but as it stands right now this is nothing but convenient, undefined behavior.
> Nick's point in this thread is that zip-importability is a *necessary corollary* (not an implementation detail) of the "no special installers" design choice.

No, that's a side effect of various other technologies that are beyond the scope of that spec. The spec should either say this is undefined behavior or commit that it is part of the spec. We can't pick this middle ground of "well, right now we just get it for free because of X, Y, and Z", because that doesn't express to the reader the true semantics of whether this is something they can use. Don't assume the reader has as much information as we do about the pros and cons, and about how the implementations work.

--Noah
Re: [Distutils] pip on windows experience
On Jan 23, 2014, at 4:17 PM, Oscar Benjamin oscar.j.benja...@gmail.com wrote:
> On 23 January 2014 23:58, Nick Coghlan ncogh...@gmail.com wrote:
>> I really think that's our best near-term workaround - still room for improvement, but "pip install numpy assumes SSE2" is a much better situation than "pip install numpy doesn't work on Windows".
> Is it? Do you have any idea what proportion of (the relevant) people would be using Windows with hardware that doesn't support SSE2? I feel confident that it's less than 10%, but I don't know how to justify a tighter bound than that. You need to bear in mind that people currently have a variety of ways to install numpy on Windows that already work without limitations on CPU instruction set. Most numpy users will not get any immediate benefit from the fact that it works using pip rather than the .exe installer (or any of a number of other options). It's the unfortunate end users and the numpy folks who would have to pick up the pieces if/when the SSE2 assumption fails.

This all sounds very similar to the issues with Linux binary wheels and varying system ABIs. We should probably keep that in mind for any solution that might apply to both.

--Noah
Re: [Distutils] PyPI pull request #7
On Oct 31, 2013, at 4:32 AM, anatoly techtonik techto...@gmail.com wrote:
> On Wed, Oct 30, 2013 at 11:11 PM, Noah Kantrowitz n...@coderanger.net wrote:
>> Please stop submitting pull requests. Development on the existing codebase is halted except for critical fixes or security issues. You are making extra work for people on this list and it will not be tolerated. Please consider this your final warning.
> I can't live as long as you are to see the new incantation of the Python website (by PyCon 2013) or PyPI. I am willing to help, and this stuff you're saying is rather discouraging, like "no, go waste your time somewhere else, we are not giving any code reviews for free". I understand that my reputation precedes me, but can we keep this strictly technical? What I am trying to do is to send small, incremental fixes. They don't affect security. I can commit them directly to avoid distracting the overloaded PyPI (bus factor 2) team, and you can blame me for breaking things - ok, and ban me if I break something - that's also ok. If I learn the previous PyPI and the new PyPI, I can tell people more about it, and you can expect more pull requests - not from me, for new PyPI, once it is ready. And if I am going to submit any new features, like reST validation on edit and Markdown support, the code will be more decoupled than the existing one, to be almost directly reusable for the new site. Why am I skeptical that the new site will replace the old one soon? Just because I don't believe in rewrites by a one-man army. When you develop a public resource, you need to rely on external feedback. You also need a designer on the team. You also need a backlog for collaboration. My ETA for the new PyPI is no earlier than PyCon 2014, even if Donald and Richard are working on it full time. So, instead of an all-or-nothing scenario, I can try to find some help with an incremental approach.

Your opinion is noted; however, my statement stands, and as I said, your continued derailment and disruption will not be tolerated. Thank you for your input.

--Noah
Re: [Distutils] PyPI pull request #7
Please stop submitting pull requests. Development on the existing codebase is halted except for critical fixes or security issues. You are making extra work for people on this list and it will not be tolerated. Please consider this your final warning.

--Noah

On Oct 30, 2013, at 1:07 PM, anatoly techtonik techto...@gmail.com wrote:
> https://bitbucket.org/pypa/pypi/pull-request/7/fix-development-mode/diff
>
> This allows running PyPI on a local machine without configuring a web server, and fixes CSS warnings from Chrome.
>
> -- anatoly t.
Re: [Distutils] PyPI pull request
Warehouse is the internal project name, and will be just one software component of the service collectively known as PyPI. That said, Donald started it, so by law of the jungle he can call it whatever he wants as long as I don't get phone calls from the FBI.

--Noah

On Oct 27, 2013, at 10:02 PM, anatoly techtonik techto...@gmail.com wrote:
> I mean that the name CheeseShop has more human touch in it than Warehouse.
>
> On Mon, Oct 28, 2013 at 3:00 AM, Richard Jones r1chardj0...@gmail.com wrote:
>> I'm not sure what you mean by it sounding "enterprisey", except perhaps just the name?
>> On 28 October 2013 10:58, anatoly techtonik techto...@gmail.com wrote:
>>> Thanks. Warehouse sounds very enterprisey. Any roadmap for that, estimated time to become operational? I'd need some features right now and not next PyCon. Also, am I right that the bus factor for this stuff is one?
>>> On Mon, Oct 28, 2013 at 2:53 AM, Richard Jones r1chardj0...@gmail.com wrote:
>>>> I have merged that PR, but I really don't see any point in making any changes to the current codebase beyond fixing significant issues. Cleaning it up is not a priority. I've merged this PR to clean up the PyPI project page on bitbucket a little, but I would ask that no further cosmetic PRs be submitted, thanks. Warehouse is the name of the next version of PyPI being developed by Donald Stufft. Richard
>>>> On 27 October 2013 17:49, anatoly techtonik techto...@gmail.com wrote:
>>>>> I've heard that there is PyPI 2.0, but I still find the current PyPI code to be very suitable for educational purposes (unlike some complicated framework-based solutions, where much of the stuff is hidden in the internals of external lib abstractions), so I continue to send fixes to improve the code base. Please merge this one: https://bitbucket.org/pypa/pypi/pull-request/6/remove-unused-templatetoolspy-file/
Re: [Distutils] Deprecate and Block requires/provides
On Oct 17, 2013, at 9:26 AM, Michael Foord fuzzy...@gmail.com wrote:
> On 17 October 2013 16:53, Donald Stufft don...@stufft.io wrote:
>> On Oct 17, 2013, at 11:49 AM, Michael Foord fuzzy...@gmail.com wrote:
>>> Package upload certainly worked, and that is what is going to be broken.
>> So would you be ok with deprecating and removing to equal "this metadata silently gets sent to /dev/null", in order to not break uploads, for what would have affected roughly 4% of the total new releases on PyPI in 2013?

My vote on this whole thing, in the general context of how to handle deprecating metadata fields:

* Email anyone using deprecated metadata at the time of deprecation (or now, in the case of this stuff).
* Deprecation would follow a somewhat normal arc:
  * Initially it is just marked as deprecated in the docs (pending-deprecation phase).
  * One major release (which is fuzzy in this case, but 6-12 months) later, it goes to /dev/null on input and is removed from all output.
  * One major release later, it is a fatal error.

Having this whole schedule formalized will help everyone know how we evolve the metadata spec, and because it is key-value pairs we have some wiggle room to sometimes ignore certain keys or treat them as opaque blobs (a la HTTP/MIME headers). In the case of this instance, I would say we should do the email and dev-null-ing immediately and then just pick up as normal, and in 6-12 months (whatever we decide, not that it should actually be an ill-defined time period) it becomes a fatal error.

--Noah
Re: [Distutils] Deprecate and Block requires/provides
On Oct 17, 2013, at 3:50 PM, Nick Coghlan ncogh...@gmail.com wrote: On 18 Oct 2013 04:48, Donald Stufft don...@stufft.io wrote: On Oct 17, 2013, at 2:33 PM, Noah Kantrowitz n...@coderanger.net wrote: On Oct 17, 2013, at 9:26 AM, Michael Foord fuzzy...@gmail.com wrote: On 17 October 2013 16:53, Donald Stufft don...@stufft.io wrote: On Oct 17, 2013, at 11:49 AM, Michael Foord fuzzy...@gmail.com wrote: Package upload certainly worked, and that is what is going to be broken. So would you be ok with deprecating and removing to equal this metadata silently gets sent to /dev/null in order to not break uploads for what would have affected roughly 4% of the total new releases on PyPI in 2013. My vote on this whole thing in the general context of how to handle deprecating metadata fields * Email anyone using deprecated metadata at the time of deprecation (or now, in the case of this stuff) * Deprecation would follow a somewhat normal arc: * Initially it is just marked as deprecated in the docs (pending deprecation phase). * One major release (which is fuzzy in this case, but 6-12 months) later it goes to dev null on input and is removed from all output. * One major release later it is a fatal error. Having this whole schedule formalized will help everyone to know how we evolve the metadata spec, and because it is key-value pairs we have some wiggle room to sometimes ignore certain keys or treat them as opaque blobs (a la HTTP/MIME headers). In the case of this instance, I would say we should do the email and dev-null-ing immediately and then just pick up as normal and in 6-12 months (whatever we decide, not that it should actually be an ill-defined time period) it becomes a fatal error. --Noah ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig This sounds reasonable to me. And to me. 
A general Evolution of PyPI APIs process PEP could be a very helpful thing to avoid having to rehash this discussion for every change :) +1, especially because the process is asymmetric: pip needs to accept and silently ignore unknown metadata fields indefinitely. --Noah
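The "/dev/null on input" phase of the schedule above can be sketched as a simple filter over the uploaded key-value metadata. This is purely illustrative: `DEPRECATED_FIELDS` and `filter_metadata` are invented names, not PyPI internals, and the field set is an assumption based on the thread's subject.

```python
# Hypothetical sketch of the "dev-null on input" deprecation phase:
# deprecated keys are dropped silently, while unknown keys pass through
# untouched (treated as opaque blobs, as with HTTP/MIME headers).

DEPRECATED_FIELDS = {"requires", "provides", "obsoletes"}  # example set only

def filter_metadata(metadata):
    """Return a copy of the uploaded metadata with deprecated keys removed."""
    return {k: v for k, v in metadata.items()
            if k.lower() not in DEPRECATED_FIELDS}

upload = {"name": "example", "version": "1.0",
          "Requires": "foo", "X-Custom": "kept"}
clean = filter_metadata(upload)
```

Note that unknown keys like `X-Custom` survive, which is what makes the process asymmetric: the server may drop fields, but clients must keep tolerating ones they do not recognize.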
Re: [Distutils] Remove the Mirror Authenticity API
+1 --Noah On Sep 28, 2013, at 8:05 PM, Donald Stufft don...@stufft.io wrote: I believe we should remove the /serverkey and /serversig/* APIs from PyPI. * I am not aware of *any* implementation that actually verifies packages against this API * In the light of PEP449 users now make a very conscious choice of which mirror they are using, which means they are no longer downloading random things from indiscriminate mirrors. * It uses DSA, which is a cryptographic primitive where if you reuse the random nonce, or have *any* bias in your random numbers, you completely leak the private key. Given the nature of PyPI it's completely possible for a malicious user to essentially create an unbounded number of signatures, making it more likely that a random nonce will be reused. * Moving forward, something like TUF is a much better answer to the problems this attempts to solve, as well as other problems. So it's basically unused, with questionable primitives, and better solutions exist. Does anyone have any objections to this being removed? - Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
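The nonce-reuse weakness cited above is the textbook DSA key-recovery attack, worth stating concretely: two signatures made with the same nonce $k$ share the same $r$, and simple modular algebra then yields the private key $x$.

```latex
% DSA signing: s_i = k^{-1}\,(H(m_i) + x r) \bmod q, with r = (g^k \bmod p) \bmod q
% shared between both signatures when the nonce k is reused.
s_1 - s_2 \equiv k^{-1}\bigl(H(m_1) - H(m_2)\bigr) \pmod{q}
\;\Longrightarrow\;
k \equiv \frac{H(m_1) - H(m_2)}{s_1 - s_2} \pmod{q},
\qquad
x \equiv \frac{s_1 k - H(m_1)}{r} \pmod{q}.
```

With $k$ recovered from the first equation, the second gives the signing key outright, which is why an attacker who can solicit an unbounded number of signatures only needs one nonce collision.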
[Distutils] Decommissioning last.pypi.python.org
I am shortly going to delete last.pypi.python.org. We've checked with all versions of pip; this will not cause any user-facing disruptions, and --use-mirrors will be converted into a no-op. Individual \w.pypi.python.org domains will remain in accordance with the PEP until they request to be redirected. If you are doing something outside of pip using the autodiscovery protocol, now would be the time to fix it. --Noah
Re: [Distutils] pypissh
On Sep 4, 2013, at 6:47 AM, Donald Stufft wrote: On Sep 4, 2013, at 9:46 AM, Antoine Pitrou anto...@python.org wrote: Nick Coghlan ncoghlan at gmail.com writes: If the PyPI password restrictions ever feel too onerous, then OpenID is another alternative (albeit not one that works with the command line tools). However, you should be able to use pypissh for CLI access in that case. For the record, it seems pypissh doesn't work with Python 3: $ sudo pip3 install pypissh Downloading/unpacking pypissh Running setup.py egg_info for package pypissh Installing collected packages: pypissh Running setup.py install for pypissh File /usr/local/lib/python3.3/dist-packages/pypissh.py, line 186 except socket.error, e: ^ SyntaxError: invalid syntax Regards Antoine. I believe MvL owns PyPISSH and it has an issue tracker under his account on bitbucket.org. Obligatory reminder that we (I) have no intention of supporting pypissh as we move into the Era of Warehouse. --Noah
Re: [Distutils] pypissh
On Sep 4, 2013, at 11:33 AM, Antoine Pitrou wrote: Noah Kantrowitz noah at coderanger.net writes: Obligatory reminder that we (I) have no intention of supporting pypissh as we move into the Era of Warehouse. Really? So what will be the options to upload files easily without stuffing a password in .pypirc? I think Donald intends to support SSL Client Certs for those that want to use them, though OAuth-style access tokens are another possibility (no idea how Donald feels about those). --Noah
Re: [Distutils] pypissh
On Sep 4, 2013, at 12:14 PM, Donald Stufft wrote: On Sep 4, 2013, at 2:36 PM, Vinay Sajip vinay_sa...@yahoo.co.uk wrote: Obligatory reminder that we (I) have no intention of supporting pypissh as we move into the Era of Warehouse. What *is* the Era of Warehouse, exactly? Is there any documentation which defines standards, interfaces etc., or a rough time frame/road map for such documentation? What are the deliverables? Is it expected that there could be multiple implementations of a standard, or just a single blessed implementation that everyone has to use? Does all or most of the discussion about Warehouse happen on this list, or does substantive discussion take place on some other list somewhere? Regards, Vinay Sajip Rolling up answers to multiple questions in here. 1) Warehouse is the name of the software that will power PyPI 2.0. 2) Nothing about the future of Warehouse is set in stone, and API breakages and the like will be discussed beforehand. 3) The way the migration was going to work was posted to this list already (https://mail.python.org/pipermail/distutils-sig/2013-July/022096.html). 4) In regards to PyPISSH, I don't know exactly what tooling I want to replace it with; it might simply be a saner implementation of SSH authentication, it might be TLS Client Certs, or OAuth Tokens. Personally I'm leaning towards TLS Client Certs and possibly OAuth tokens, but that will be decided down the road. To refine my statement, the current server implementation of using opensshd with some authorized_keys trickery is what the infra team is declining to support long term. Something built around Twisted's SSH server impl (for example) could be a suitable replacement, since that would be secure by default as opposed to the current system, where any failure on our part gives you shell access to the PyPI server. I know of no current issues, but long-term it isn't a position we want to be in, in terms of support.
--Noah
Re: [Distutils] What to do about the PyPI mirrors
On Aug 5, 2013, at 11:11 PM, Christian Theune c...@gocept.com wrote: Two more things: why is the CDN not suffering from the security problems you describe for the mirrors? a) Fastly seems to be the one owning the certificate for pypi.python.org. What?!? They have a delegated SAN for it, which digicert (the CA) authorizes with the domain contact (the board in this case). b) What does stop Fastly from introducing incorrect/rogue code in package downloads? Basically this one boils down to personal trust from me to the Fastly team combined with the other companies using them being very reputable. At the end of the day, there is not currently any cryptographic mechanism preventing Fastly from doing bad things. --Noah
Re: [Distutils] What to do about the PyPI mirrors
On Aug 5, 2013, at 11:09 PM, Christian Theune c...@gocept.com wrote: Hi, looks like I'm late to the party to figure out that I'm going to be hurt again. I'd like to suggest explicitly considering what is going to break due to this and how much work you are forcefully inflicting on others. My whole experience around the packaging (distribute/setuptools) and mirroring/CDN changes this year puts the cost for my company somewhere between 10k-20k EUR, just for keeping up with the breakage those changes incur. It might be that we're wonderfully stupid (..enough to contribute) and all of this causes no headaches for anybody else …. Overall, guessing that the packaging infrastructure is used by probably multiple thousands of companies, I'd expect that at least 100 of them might be experiencing problems like us. Juggling arbitrary numbers, I can see that we're inflicting around a million EUR of cost that nobody asked for. More specific statements below. On 2013-08-04 22:25:01 +, Donald Stufft said: Here's my PEP for Deprecating and Removing the Official Public Mirrors Its source is at: https://github.com/dstufft/peps/blob/master/mirror-removal.rst Abstract === This PEP provides a path to deprecate and ultimately remove the official public mirroring infrastructure for `PyPI`_. It does not propose the removal of mirroring support in general. -1 - maybe I don't have the right to speak up on CDN usage, but personally I feel it's a bad idea to delegate overall PyPI availability exclusively to a commercial third party. It's OK for me that we're using them to improve PyPI availability, but completely putting our faith in their hands doesn't sound right to me. Rationale The PyPI mirroring infrastructure (defined in `PEP381`_) provides a means to mirror the content of PyPI used by the automatic installers. It also provides a method for autodiscovery of mirrors and a consistent naming scheme.
There are a number of problems with the official public mirrors: * They give control over a \*.python.org domain name to a third party, allowing that third party to set or read cookies on the pypi.python.org and python.org domain name. Agreed, that's a problem. * The use of a sub domain of pypi.python.org means that the mirror operators will never be able to get a certificate of their own, and giving them one for a python.org domain name is unlikely to happen. Agreed. * They are often out of date, most often by several hours to a few days, but regularly several days and even months. That's something that the mirroring infrastructure should have been constructed for. I completely agree that the way the mirroring was established was way sub-optimal. I think we can do better. * With the introduction of the CDN on PyPI the public mirroring infrastructure is not as important as it once was, as the CDN is also a globally distributed network of servers which will function even if PyPI is down. Well, now we have one more breakage point, which keeps annoying me. This argument is not completely true. They may be getting better over time, but we have invested heavily to accommodate the breakage - that needs to be balanced with some benefit in the near future. To be clear, the CDN and other server-side improvements are not a hard-HA replacement like a local company mirror. You are exactly the use case that can and should be using a mirror for your own use. We are doing _nothing_ that disrupts this use case and will support it exactly as before. * Although there are provisions in place for it, there is currently no known installer which uses the authenticity checks discussed in `PEP381`_, which means that any download from a mirror is subject to attack by a malicious mirror operator; furthermore, due to the lack of TLS it also means that any download from a mirror is subject to a MITM attack.
Again, I think that was a mistake during the introduction of the mirroring infrastructure: too few people, too confusing a PEP. * They have only ever been implemented by one installer (pip), and its implementation, besides being insecure, has serious issues with performance and is slated for removal with its next release (1.5). Only if you consider the mirror auto-discovery protocol. I'm not sure whether using DNS was such a smart move. A simple HTTP request to find mirrors would have been nice. I think we can still do that. Also, not everyone wants or needs auto-detection the way that the protocol describes it. I personally just hand-pick a mirror (my own, hah) and keep using that. We are also thinking about providing system-level default configuration to hint tools like pip and setuptools to a different default index that is closer from a network perspective. From a customer perspective this should be PyPI. I'd like to avoid breakage. Again, if you
Re: [Distutils] What to do about the PyPI mirrors
On Aug 5, 2013, at 11:56 PM, holger krekel hol...@merlinux.eu wrote: On Mon, Aug 05, 2013 at 23:31 -0700, Noah Kantrowitz wrote: On Aug 5, 2013, at 11:11 PM, Christian Theune c...@gocept.com wrote: (...) The problem is not so much trusting individuals but that the companies in question are based in the US. If its government wants to temporarily serve backdoored packages to select regions, they could silently force Fastly to do it. I guess the only way around this is to work with pypi- and eventually author/maintainer-signatures and verification. No, I have carefully selected whom I trust to work with on the PSF infrastructure. I can promise you there is a 100% chance that the head of Fastly would sooner shut down the company than allow a government interdiction of any kind. I extend this trust to Dyn and OSL as well, and I do not do so lightly. --Noah
Re: [Distutils] What to do about the PyPI mirrors
On Aug 6, 2013, at 12:01 AM, Nick Coghlan ncogh...@gmail.com wrote: On 6 August 2013 16:09, Christian Theune c...@gocept.com wrote: Hi, looks like I'm late to the party to figure out that I'm going to be hurt again. That's why I asked for this to be put through the PEP process: to give it more visibility, and provide more opportunity for people potentially affected to have a chance to comment and offer alternatives. Giving third parties the opportunity to read python.org cookies indefinitely isn't an option. Everything else is negotiable. (...) -1 - maybe I don't have the right to speak up on CDN usage, but personally I feel it's a bad idea to delegate overall PyPI availability exclusively to a commercial third party.
It's OK for me that we're using them to improve PyPI availability, but completely putting our faith in their hands doesn't sound right to me. Would you be happier if it said "the current incarnation of the public mirroring infrastructure"? I have no objections to somebody proposing a *new* less broken mirroring process. That's something that the mirroring infrastructure should have been constructed for. I completely agree that the way the mirroring was established was way sub-optimal. I think we can do better. As noted above, this PEP is about killing off the *current* public mirroring system as being irredeemably broken. If that inspires somebody to come up with a more sensible alternative, so much the better. * With the introduction of the CDN on PyPI the public mirroring infrastructure is not as important as it once was, as the CDN is also a globally distributed network of servers which will function even if PyPI is down. Well, now we have one more breakage point, which keeps annoying me. This argument is not completely true. They may be getting better over time, but we have invested heavily to accommodate the breakage - that needs to be balanced with some benefit in the near future. That's why explicit mirror usage is still supported and recommended. * Although there are provisions in place for it, there is currently no known installer which uses the authenticity checks discussed in `PEP381`_, which means that any download from a mirror is subject to attack by a malicious mirror operator; furthermore, due to the lack of TLS it also means that any download from a mirror is subject to a MITM attack. Again, I think that was a mistake during the introduction of the mirroring infrastructure: too few people, too confusing a PEP. Which is why *this* incarnation of it needs to go away.
* They have only ever been implemented by one installer (pip), and its implementation, besides being insecure, has serious issues with performance and is slated for removal with its next release (1.5). Only if you consider the mirror auto-discovery protocol. I'm not sure whether using DNS was such a smart move. A simple HTTP request to find mirrors would have been nice. I think we can still do that. And can be done regardless of what happens to the current system. Also, not everyone wants or needs auto-detection the way that the protocol describes it. I personally just hand-pick a mirror (my own, hah) and keep using that. Which will be unaffected for anyone not relying on a pypi.python.org subdomain. We are also thinking about providing system-level default configuration to hint tools like pip and setuptools to a different default index that is closer from a network perspective. From a customer perspective this should be PyPI. I'd like to avoid breakage. Again, if you don't let me choose where to spend my time, I'd rather invest the time I need
Re: [Distutils] What to do about the PyPI mirrors
On Aug 6, 2013, at 12:10 AM, holger krekel hol...@merlinux.eu wrote: On Mon, Aug 05, 2013 at 23:49 -0700, Noah Kantrowitz wrote: On Aug 5, 2013, at 11:09 PM, Christian Theune c...@gocept.com wrote: (...) Between now and the first DNS change, I would absolutely recommend any current public mirrors to redirect users to their new domain name if they intend to have one, and we'll do whatever we can to help make users aware of the switch. I would rather have a clear timeline with fewer steps than add another stage where we (PSF) are issuing redirects to non-PSF servers. Very very +1 on the easier bandersnatch-ing though, I really would love to see more mirrors out there, I just don't want them associated with PyPI or python.org, and I don't want pip to be trying to auto-discover them. PyPI mirrors _are_ associated with PyPI and pypi.python.org. (Why) Do you want to flatly rule out pip/pypi.python.org support for managing mirrors? The perl CPAN mirroring provides this nice little machine-readable file: http://www.cpan.org/indices/mirrors.json and a python-equivalent could be consumed by pip, I guess. Because at this time there is no Python package installer that can install from a public mirror in a way that makes me comfortable supporting it as an official resource. This could be addressed in pip by verifying the /simple signatures, but this mostly precludes improved mirroring mechanisms like that used by Crate. More to the point, I as the head of infrastructure am responsible for *.python.org, but if there is an issue with a mirror, be it downtime, server compromise, or anything else, me and my team can't do anything to fix that. This is, again, not a situation I am comfortable with. --Noah
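The CPAN-style mirror list floated above could be consumed along these lines. This is a sketch only: the JSON shape is an assumption modeled loosely on http://www.cpan.org/indices/mirrors.json, the mirror entries are invented, and no such file exists for PyPI.

```python
import json

# Assumed document shape, loosely modeled on CPAN's mirrors.json;
# PyPI publishes no such file, so the entries below are purely illustrative.
SAMPLE = """
{
  "mirrors": [
    {"name": "example-eu", "http": "http://pypi.example.eu/simple/", "region": "Europe"},
    {"name": "example-us", "http": "http://pypi.example.us/simple/", "region": "North America"}
  ]
}
"""

def mirrors_in_region(doc, region):
    """Return the index URLs of mirrors whose region matches."""
    return [m["http"] for m in json.loads(doc)["mirrors"]
            if m.get("region") == region]

urls = mirrors_in_region(SAMPLE, "Europe")
```

An installer would fetch the document over HTTPS from a trusted host and pick a nearby entry; the trust question Noah raises (who is accountable when a listed mirror misbehaves) is exactly what a flat file like this does not solve.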
Re: [Distutils] What to do about the PyPI mirrors
On Aug 6, 2013, at 5:22 AM, Nick Coghlan wrote: On 6 August 2013 16:59, Christian Theune c...@gocept.com wrote: Hi, Thanks for all the feedback, I'll calm down a bit and ponder some more structured reply. However, you're responding to the technicalities. I didn't see any consideration of the user pain. It seems irrelevant. Almost like arguing with the TSA about taking off your shoes. User pain is the only reason for not making the change tomorrow. People need time to adjust, or to propose alternative solutions. My reasoning for picking 4 months total for the migration is that an individual user switching their mirror hostnames is a relatively quick process (maybe a few days in a really big case), and anyone that doesn't hear about this change within a few months is highly unlikely to learn about it in a larger period of time. Humans are generally deadline-driven, so moving the deadline back doesn't get us much except moving the conversion work back with it. Basically I think past the 6-8 week mark we are just hitting the long tail in terms of actual benefit to users, and it is better to just break the system and force them to notice they need to fix things (since one reason for doing this is that the current system is unsafe, and allowing that to exist for another year is not really on my list). --Noah
Re: [Distutils] What to do about the PyPI mirrors
On Aug 3, 2013, at 5:17 PM, Donald Stufft wrote: On Jul 25, 2013, at 1:38 AM, Richard Jones r1chardj0...@gmail.com wrote: Hi all, I've just been contacted by someone who's set up a new public mirror of PyPI and would like it integrated into the mirror ecosystem. I think it's probably time we thought about how to demote the mirrors: - they cause problems with security (being under the python.org domain causes various issues including inability to use HTTPS and cookie issues) - they're no longer necessary thanks to the CDN work So, things to do: - links and information on PyPI itself can be removed - tools that use mirrors still need to be able to, but mention of using public mirrors is probably something to demote These are just rough thoughts that occurred to me just now. Richard Can we close the loop on this? Ideally I think any public mirrors should need to register their own domain name. We can either maintain a list of unofficial mirrors, or Ken Cochrane has been doing a good job I think of keeping a list (as well as tracking some basic stats) at http://pypi-mirrors.org/ so maybe we can just point people to that as the list of mirrors? Ideally we should get all of them off the *.python.org namespace. As the one with the finger on the not-the-metaphorical button, I think we should say that two (2) months from now, on October 1st 2013, the [a-g].pypi.python.org DNS names will all be redirected to front.python.org and another two months beyond that (2013-12-01) they will all be deleted (along with last.pypi.python.org). That seems like a very generous deprecation schedule, especially given that all that needs to change is some domain registrations. --Noah
Re: [Distutils] a plea for backward-compatibility / smooth transitions
On Jul 29, 2013, at 10:41 PM, Antoine Pitrou solip...@pitrou.net wrote: Paul Moore p.f.moore at gmail.com writes: Personally, none of the changes have detrimentally affected me, so my opinion is largely theoretical. But even I am getting a little frustrated by the constant claims that what we have now is insecure and broken, and must be fixed ASAP. FWIW, +1. You may be paranoid, but not everyone has to be (or suffer the consequences of it). Security issues should be fixed without breaking things in a hassle (which is the policy we followed e.g. for the ssl module, or hash randomization). You missed a key word … when possible. If there is a problem we will fix it; when we can do that in a way that minimizes breakage we will do that. It's all just about cost-benefit, and when you are talking about executing code downloaded from the internet it becomes quite easy to see benefits outweighing costs, even with pretty major UX changes. Not something we do lightly, but status quo does not win here, sorry. The whole python.org infrastructure is built on an OS kernel written by someone who thinks security issues are normal bugs. AFAIK there is no plan to switch to OpenBSD. This is news to me; we specifically run Ubuntu LTS because Canonical's security response team has a proven track record of handling issues. If you mean that Linus doesn't handle security issues well, then it is fortunate indeed that we don't actually use his software. --Noah
Re: [Distutils] a plea for backward-compatibility / smooth transitions
On Jul 29, 2013, at 11:19 PM, Antoine Pitrou solip...@pitrou.net wrote: Noah Kantrowitz noah at coderanger.net writes: The whole python.org infrastructure is built on an OS kernel written by someone who thinks security issues are normal bugs. AFAIK there is no plan to switch to OpenBSD. This is news to me, we specifically run Ubuntu LTS because Canonical's security response team has a proven track record of handling issues. If you mean that Linus doesn't handle security issues well, then it is fortunate indeed that we don't actually use his software. Did you already forget what the discussion is about? Security/bugfix Ubuntu LTS updates don't break compatibility for the sake of hardening things, which is the whole point. Again, speaking as the guy that has to clean up the mess when they do break compat, I promise you they do. Same deal, they only break compat when keeping compat would present a threat to users, which is quite often the case with security bugs. They are fortunately a bit further ahead of us on the long tail of finding problems, so this is far less frequent than it was in years past. We will get there too, but like I said, status quo is not a defense here, just strap in and hang on. --Noah
Re: [Distutils] a plea for backward-compatibility / smooth transitions
On Jul 30, 2013, at 12:01 AM, Antoine Pitrou solip...@pitrou.net wrote: Donald Stufft donald at stufft.io writes: I have zero qualms about releasing a full disclosure along with working exploits into the wild for a security vulnerability that people block me on. If I'm unable to rectify the problem I will make sure that everyone *knows* about the problem. I don't know what I'm supposed to infer from such a statement, except that I probably don't want to trust you. You might think that publish[ing] working exploits into the wild is some kind of heroic, altruistic act, but I think few people would agree. No, this is the standard for security researchers. If the vendor ignores the reported exploit for long enough, they go public and try to make sure users understand the risks and how to mitigate them in the time it takes the vendor to fix it. --Noah
Re: [Distutils] What to do about the PyPI mirrors
On Jul 24, 2013, at 10:38 PM, Richard Jones wrote: Hi all, I've just been contacted by someone who's set up a new public mirror of PyPI and would like it integrated into the mirror ecosystem. I think it's probably time we thought about how to demote the mirrors: - they cause problems with security (being under the python.org domain causes various issues including inability to use HTTPS and cookie issues) - they're no longer necessary thanks to the CDN work So, things to do: - links and information on PyPI itself can be removed - tools that use mirrors still need to be able to, but mention of using public mirrors is probably something to demote These are just rough thoughts that occurred to me just now. +1, as envoy of the infrastructure team we would like to formally retire the [a-z].pypi.python.org names. Anyone with an existing mirror should be encouraged to continue maintaining it, but it will be for their own use (or the use of their company/internal network). --Noah
Re: [Distutils] API for registering/managing URLs for a package
On Jul 18, 2013, at 7:10 AM, M.-A. Lemburg wrote: I would like to write a script to automatically register release URLs for PyPI packages. Is the REST API documented somewhere, or is the implementation the spec ? ;-) And related to this: Will there be an option to tell PyPI's CDN to cache the release URL's contents ? I think you are perhaps confused: the use of external URLs on PyPI is formally deprecated. The way you inform PyPI and the CDN network about your package is you upload it to PyPI. pip 1.4 effectively disables unsafe external URLs, and all external URLs will follow soon. --Noah
Re: [Distutils] API for registering/managing URLs for a package
On Jul 18, 2013, at 8:06 AM, Noah Kantrowitz wrote: (...) Someone reminded me that I'm only partially correct: the external URL stuff will continue to be supported, but only as a convenience during package registration/upload. From the PoV of clients (and the CDN) everything will be local. --Noah
[Distutils] Worry about lack of focus
So we've recently seen a big resurgence in activity on improving Python packaging. First off, that's good; hopefully that's why we are all here. That said, I'm becoming worried about a possible lack of focus, and I know I'm not the only one. There have been many ideas floated, and many PEPs either sketched out, reworked, or stated to be in planning. I think perhaps we should work out some kind of shortlist of what we think can and should be accomplished in the short term, and just keep a running list of topics that need energy but are lower priority. This would reduce the chances of hitting the "fix the whole world at once" situation that we have run into before, which often results in burnout and frustration all around. Just to kick things off, here are the rough topics I can think of that I've seen discussed recently (ignoring that many of these are dependent on each other):

* Including pip with Python 3.4
* Bundling setuptools with pip
* Splitting setuptools and pkg_resources
* Replacing the executable generation in pip with something new
* Working out how to let pip upgrade itself on Windows
* Entrypoints in distutils/the stdlib
* Executable generation in distlib
* Signing/vetting of releases
* General improvements to the wheel format
* General improvements to package metadata

Apologies for anything I have mis-paraphrased or missed, but that is definitely a lot of things to have up in the air. Just want to make sure we can get everything done without anyone going crazy(er) and that we keep sight of what's going on.

--Noah
Re: [Distutils] Expectations on how pip needs to change for Python 3.4
On Jul 13, 2013, at 10:58 PM, Nick Coghlan wrote: On 14 July 2013 12:46, Donald Stufft don...@stufft.io wrote: I'm sure I've seen people say other things that have made me think "are you expecting the pip maintainers to make that change?" in the various threads, so I doubt this list is definitive.

The other big one is the one you noted about pip *not* offering a stable API, *but* exposing an apparently stable API to introspection. Introspection currently tells me that pip exports *at least* 32 public names (and this is without checking for public submodules that aren't implicitly imported by pip/__init__.py):

>>> import pip
>>> public = set(k for k, v in pip.__dict__.items() if not k.startswith('_') and (not hasattr(v, '__name__') or hasattr(v, '__module__') or v.__name__.startswith('pip.')))
>>> print(len(public))
32

If pip really has no stable public API, then it should properly indicate this under introspection (if it already uses relative imports correctly, then the easiest ways to achieve that are to just shove everything under a pip._impl subpackage or shuffle it sideways into a _pip package).

Pip does not use relative imports. Is simply documenting the fact there is no public API enough? Pushing everything into a _impl or _pip directory makes me nervous because that's a lot of code churn (and I know there are people using those APIs), and while they aren't technically stable, it feels like moving things around just for the sake of an _ in the name is unfriendly to those people.

Either the existing APIs are moved to a different name, or they get declared stable and pip switches to internally forked APIs any time a backwards incompatible change is needed for refactoring purposes (see runpy._run_module_as_main for an example of needing to do this in the standard library).
I've had to directly deal with too many issues arising from getting this wrong in the past for me to endorse bundling of a module that doesn't follow this practice with CPython - if introspection indicates an API is public, then it's public and subject to all standard library backwards compatibility guarantees, or else we take the pain *once* and explicitly mark it private by adding a leading underscore rather than leaving it in limbo (contextlib._GeneratorContextManager is a standard library example of the latter approach - it used to lack the leading underscore, suggesting it was a public API when it's really just an implementation detail of contextlib.contextmanager).

Respectfully, I disagree. Pip is not going into the stdlib, and as such should not be subject to the same API stability policies as the stdlib. If the PyPA team wants to break the API every release, that is their call as the subject matter experts. Pip is not being included as a library at all. What should be subject to compat is the defined command line interface, because pip is a CLI tool. Independently of this discussion I've already been talking to the PyPA team about what they want to consider a stable API, but that is a discussion to be had over in pip-land, not here and not now. This new category of "bundled for your convenience but still external" applications will need new standards, and we should be clear about them for sure, but I think this is going too far and puts undue burden on the PyPA team. Remember, the end goal is simply to get an installer into the hands of users more easily.

--Noah
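For readers following along, the kind of introspection being debated can be sketched like this (`apparent_public_names` is a hypothetical helper written for this post, and a throwaway module stands in for pip):

```python
import types

def apparent_public_names(mod):
    """Return the names a user would see as 'public' when poking at a module:
    no leading underscore, and either no __module__ at all or a __module__
    inside the package itself."""
    public = set()
    pkg = mod.__name__
    for name, value in vars(mod).items():
        if name.startswith("_"):
            continue
        owner = getattr(value, "__module__", None)
        if owner is None or owner == pkg or owner.startswith(pkg + "."):
            public.add(name)
    return public

# Demo on a throwaway module rather than pip itself:
demo = types.ModuleType("demo")
demo.helper = lambda: None       # pretend this function was defined in 'demo'
demo.helper.__module__ = "demo"
demo.CONSTANT = 42               # plain data: no __module__ attribute at all
demo._private = "hidden"         # underscore convention marks this private
print(sorted(apparent_public_names(demo)))  # → ['CONSTANT', 'helper']
```

This is exactly why the underscore convention matters: it is the only signal this sort of casual reflection has to go on.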
Re: [Distutils] Expectations on how pip needs to change for Python 3.4
On Jul 14, 2013, at 12:35 AM, Nick Coghlan wrote: On 14 July 2013 17:13, Donald Stufft don...@stufft.io wrote: I think it would be reasonable for the pip maintainers to be asked to declare a public API (even if that's None) using the naming scheme or an import warning, and declare a backwards compatibility policy for pip itself so that people can know what to expect from pip. I do not, however, believe it is reasonable to bind pip to the same policy that CPython uses nor the same schedule. (If you weren't suggesting that, I apologize.)

The main elements of CPython's backwards compatibility policy that I consider relevant are:

* Use leading underscores to denote private APIs with no backwards compatibility guarantees
* Be conservative with deprecating public APIs that aren't fundamentally broken
* Use DeprecationWarning to give at least one (pip) release notice of an upcoming backwards incompatible change

We *are* sometimes quite aggressive with deprecation and removal even in the standard library - we removed contextlib.nested from Python 3.2 as a problematic bug magnet well before I came up with the contextlib.ExitStack API as a less error prone replacement in Python 3.3. It's only when it comes to core syntax and builtin behaviour that we're likely to hit issues that simply don't have a sensible deprecation strategy, so we decide we have to live with them indefinitely.

That said, I think the answer to this discussion also affects the answer to whether or not CPython maintenance releases should update to newer versions of pip: if pip chooses to adopt a faster deprecation cycle than CPython, then our maintenance releases shouldn't bundle updated versions.
Instead, they should follow the policy:

* if this is a new major release, or the first maintenance release to bundle pip, bundle the latest available version of pip
* otherwise, bundle the same version of pip as the previous release

This would mean we'd be asking the pip team to help out by providing security releases for the bundled version, so we can get that without breaking the public API that's available by default. On the other hand, if the pip team are willing to use long deprecation cycles then we can just bundle the updated versions and not worry about security releases (I'd prefer that, but it only works if the pip team are willing to put up with keeping old APIs around for a couple of years before killing them off once the affected CPython branches go into security fix only mode).

If I can surmise your worry here, it is that people will open an interactive terminal, import pip, reflect out the classes/methods/etc., see that despite being mentioned nowhere in the Python or pip documentation the methods and classes don't start with an underscore, and thus conclude that this is a stable API to build against? I agree that conventions are good, but I have to say this sounds like a bit of a stretch, and certainly anyone complaining that their undocumented API that they only found via reflection (or reading the pip source) was broken basically gets what they deserve. The point I was trying to make is that a major shift in thinking is needed here. pip is not part of CPython; regardless of this bundling, neither this mailing list nor the CPython team will have any control (aside from the nuclear option that the CPython team can elect to stop bundling pip). If you think it would be good for the code-health of pip to be clearer about what their public API is, I will support that all the way, and in fact have an open ticket against pip to that effect already, but that is something for the pip team to decide.
This does very much mean that the CPython team is not just backing the pip codebase, but the PyPA/pip team. I think the past few years have shown them deserving of this trust, and they should be allowed to run things as they see fit. These lines get blurry since several people move back and forth between CPython and PyPA (and distutils and PyPI, etc.) hats, so I think it must be stated clearly up front that what the CPython team thinks is reasonable for an API policy will be nothing more than a recommendation from very knowledgeable colleagues, and will be given the appropriate consideration and respect it deserves based on that. Hopefully that makes my point of view a little clearer.

--Noah
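For concreteness, the DeprecationWarning convention Nick describes is the standard warnings-module pattern; `old_api`/`new_api` here are placeholder names, not anything from pip:

```python
import warnings

def new_api():
    return "result"

def old_api():
    # Stdlib-style deprecation: warn for at least one release before
    # actually removing the old name.
    warnings.warn("old_api() is deprecated; use new_api() instead",
                  DeprecationWarning, stacklevel=2)
    return new_api()

# Callers still get the old behaviour, plus a DeprecationWarning they
# can surface in their test suites:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    value = old_api()

print(value, caught[0].category.__name__)  # → result DeprecationWarning
```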
Re: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping))
On Jul 14, 2013, at 9:45 AM, Steve Dower wrote: From: Paul Moore On 13 July 2013 10:05, Paul Moore p.f.mo...@gmail.com wrote: How robust is the process of upgrading pip using itself? Specifically on Windows, where these things typically seem less reliable.

OK, I just did some tests. On Windows, pip install -U pip FAILS. The reason for the failure is simple enough to explain - the pip.exe wrapper is held open by the OS while it's in use, so that the upgrade cannot replace it. The result is a failed upgrade and a partially installed new version of pip. In practice, the exe stubs are probably added fairly late in the install (at least when installing from sdist; with a wheel it depends on the order of the files in the wheel), so it's probably only a little bit broken, but a little bit broken is still broken :-( On the other hand, python -m pip install -U pip works fine because it avoids the exe wrappers. There's a lot of scope for user confusion and frustration in all this. For standalone pip I've tended to recommend "don't do that" - manually uninstall and reinstall pip, or recreate your virtualenv. It's not nice, but it's effective. That sort of advice isn't going to be realistic for a pip bundled with CPython. Does anyone have any suggestions?

Unless I misunderstand how the exe wrappers work (they're all the same code that looks for a .py file by the same name?) it may be easiest to somehow mark them as non-vital, such that failing to update them does not fail the installer. Maybe detect that it can't be overwritten, compare the contents/hash with the new one, and only fail if it's changed (with an instruction to use 'python -m...')? Spawning a separate process to do the install is probably no good, since you'd have to kill the original one, which is going to break command line output. MoveFileEx (with its copy-on-reboot flag) is off the table, since it requires elevation and a reboot. But I think that's the only supported API for doing a deferred copy.
If Windows were opening .exes with FILE_SHARE_DELETE then it would be possible to delete the exe and create a new one by the same name, but I doubt that will work, and in any case could not be assumed to never change. So unless the exe wrapper is changing with each version, I think the best way of handling this is to not force them to be replaced when they have not changed.

The usual way to do this is to just move the existing executable to pip.exe.deleteme or something, and then write out the new one. Then on every startup (or maybe some level of special case for just pip upgrades?) try to unlink *.deleteme. Not the simplest system ever, but it gets the job done.

--Noah
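The rename-then-replace dance described above can be sketched in a few lines (the helper names are made up for illustration, and real wrapper management on Windows would need more care around error handling):

```python
import os
import tempfile

def replace_possibly_running_exe(path, new_contents):
    """Sketch of the rename-then-replace trick: Windows refuses to
    overwrite or delete a running .exe, but generally allows renaming it,
    so the old wrapper is shuffled aside to <name>.deleteme first."""
    if os.path.exists(path):
        os.replace(path, path + ".deleteme")
    with open(path, "wb") as f:
        f.write(new_contents)

def sweep_stale_wrappers(directory):
    """On a later startup, try to unlink any *.deleteme leftovers;
    failures (file still locked) are ignored and retried next time."""
    for name in os.listdir(directory):
        if name.endswith(".deleteme"):
            try:
                os.unlink(os.path.join(directory, name))
            except OSError:
                pass

# Demo in a temp dir:
d = tempfile.mkdtemp()
exe = os.path.join(d, "pip.exe")
replace_possibly_running_exe(exe, b"old wrapper")
replace_possibly_running_exe(exe, b"new wrapper")
print(sorted(os.listdir(d)))  # → ['pip.exe', 'pip.exe.deleteme']
sweep_stale_wrappers(d)
print(sorted(os.listdir(d)))  # → ['pip.exe']
```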
Re: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping))
On Jul 14, 2013, at 10:31 AM, Ian Cordasco wrote: On Sun, Jul 14, 2013 at 1:12 PM, Noah Kantrowitz n...@coderanger.net wrote: [...earlier quoted text snipped...]

I accidentally only emailed Paul earlier, but why can't we upgrade the pip module with the exe and then replace the process (using something in the os.exec* family) with `python -m pip update-exe`, which could then succeed since the OS isn't holding onto the exe file? I could be missing something entirely obvious since I haven't developed (directly) on or for Windows in at least 5 years.

Unfortunately Windows doesn't actually offer the equivalent of a POSIX exec(). The various functions in os don't actually replace the current process; they just create a new one and terminate the old one. This means the controlling terminal would see the pip process as ended, so it makes showing output difficult at best.

--Noah
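The difference is easy to observe on a POSIX system, where os.execv genuinely replaces the process in place and the pid is preserved across the call. This little experiment is illustrative only; on Windows the two pids would differ, which is exactly the problem being discussed:

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# A child script that prints its pid, then execs itself once; on POSIX
# the second print runs in the *same* process, so the pids match.
script = textwrap.dedent("""\
    import os, sys
    print(os.getpid(), flush=True)
    if len(sys.argv) == 1:
        os.execv(sys.executable, [sys.executable, sys.argv[0], "again"])
""")

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(script)
    path = f.name

result = subprocess.run([sys.executable, path], capture_output=True, text=True)
pid_before, pid_after = result.stdout.split()
print(pid_before == pid_after)  # True on Linux/macOS
os.unlink(path)
```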
Re: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping))
On Jul 14, 2013, at 10:39 AM, Noah Kantrowitz wrote: [...earlier quoted text snipped...]

Check that, maybe I'm wrong - does anyone know if the P_OVERLAY flag unlocks the original binary? /me drags out a windows VM ...

--Noah
Re: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping))
On Jul 14, 2013, at 10:43 AM, Noah Kantrowitz wrote: [...earlier quoted text snipped...] Check that, maybe I'm wrong - does anyone know if the P_OVERLAY flag unlocks the original binary? /me drags out a windows VM ...

Ignore my ignoring: with os.execl, command flow does return back to the controlling terminal process (the new process continues in the background), and with os.spawnl(os.P_OVERLAY, 'python-2') I just get a segfault on 3.3. Yay for not completely misremembering, boo for this being so complicated.

--Noah
Re: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping))
On Jul 14, 2013, at 3:06 PM, Nick Coghlan wrote: On 15 Jul 2013 05:44, Paul Moore p.f.mo...@gmail.com wrote: On 14 July 2013 18:06, Donald Stufft don...@stufft.io wrote: Wouldn't a .py file make the command ``pip.py`` and not ``pip``?

Not if .py is a registered extension. What I can't remember is whether it needs to be in PATHEXT (which it isn't by default). The big problem here is that the behaviour isn't very well documented (if at all), so the various command shells act subtly differently. That's why I want to test, and why it won't be a 5-minute job to do so... But the various "replace the exe afterwards" hacks sound awfully complicated to me - particularly as pip doesn't control the exes in the first place; they are part of the setuptools console script entry point infrastructure. My strong preference here is to remove the current use of setuptools entry points, simply because I don't think the problem is solvable while pip doesn't control the exe management at all. That's a non-trivial change, but longer term maybe the best. Question for Nick, Brett and any other core devs around: Would python-dev be willing to include in the stdlib some sort of package for managing exe-wrappers? I don't really want pip to manage exe wrappers any more than I like setuptools doing so. Maybe the existing launcher can somehow double up in that role?

Not sure it fits the launcher, but having something along those lines in the stdlib makes sense (especially in the context of a pip bundling PEP). Another option we may want to consider is an actual MSI installer for pip (I'm not sure that would actually help, but it's worth looking into), as well as investigating what other self-updating Windows apps (like Firefox) do to handle this problem.
They do the "exec a helper executable that replaces the original" approach, which works fine for non-console apps since there isn't the problem of the shell getting confused :-/

--Noah
Re: [Distutils] Expectations on how pip needs to change for Python 3.4
On Jul 13, 2013, at 9:59 AM, Brett Cannon wrote: On Sat, Jul 13, 2013 at 11:15 AM, Paul Moore p.f.mo...@gmail.com wrote: On 13 July 2013 16:03, Donald Stufft don...@stufft.io wrote:

1. Install to user-packages by default. Do people really want this? I hadn't seen it (other than if pip was installed to user by default). I think it's a bad idea to switch this on people. I doubt user-packages is going to be in people's default PATH, so they'll easily get into cases where things are installed but they don't know where they were installed to. I believe Nick wants to make user-packages the default. I know at least some of the pip maintainers (yourself included) have reservations. Personally, I've never used user-packages, so I don't know what issues might arise. But I hope to try it out sometime when I get the chance, just to get some specific information. I would assume the executable script was installed next to the python binary but the library parts went into user-packages. That way -m would work for all binaries of the same version.

2. Not depend on setuptools (??? - Nick's inversion idea). I wanted to do this anyways. It will still depend on it, but it will just bundle setuptools itself like its other dependencies. For pip, dependencies are an implementation detail, not an actual thing it can/should have. Bundling is not the same as Nick's suggestion. I personally have no problem with bundling, but pip install with a bundled setuptools might not work because the setup subprocess won't see the bundled setuptools when it imports it in setup.py. But either way, it's doable, I just want to know if it's on the critical path...

3. Possibly change the wrapper command name from pip to pip3 on Unix. Not sure on this. Ideally I'd want the commands to be pipX.Y, pipX, and pip all available, and not install the less specific ones if they already exist, but that might be too hard?
Could we just start to move away from an executable script and start promoting rather aggressively -m instead? It truly solves this problem, and since the results are tied to the Python executable used (i.e. where something gets installed) it disambiguates what Python binary pip is going to work with (something I have trouble with thanks to Python 2 and 3 both being installed, each with their own pip installation). I realize older Python versions can't do this (I believe 2.6 and older can't for packages) but at least in the situation we are discussing here of bundling pip it's not an issue.

No, this is not how any user will ever expect Unix programs to work. I know that python -m is very cute, and I use it myself for some debug and helper functionality at times, but it can never replace normal scripts. This is a user experience expectation, and we will have to meet it.

--Noah
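For reference, the console scripts being argued over are conceptually just thin launchers like the following (a schematic stand-in, not the actual setuptools-generated code; `main` here is a placeholder for the real entry point). Because both the `pip` script and `python -m pip` funnel into the same function, they behave identically apart from how the shell finds them:

```python
import sys

def main(argv=None):
    # Placeholder for the real entry point (e.g. pip's main function).
    argv = sys.argv[1:] if argv is None else argv
    print("would run with arguments:", argv)
    return 0

# The generated wrapper's body is essentially just: sys.exit(main())
rc = main(["install", "requests"])
print("exit code:", rc)  # → exit code: 0
```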
Re: [Distutils] Current status of PEP 439 (pip boostrapping)
On Jul 13, 2013, at 6:46 AM, Brett Cannon wrote: On Sat, Jul 13, 2013 at 1:31 AM, Nick Coghlan ncogh...@gmail.com wrote: In addition to the long thread based on Richard's latest set of updates, I've also received a few off-list comments on the current state of the proposal. So, I figured I'd start a new thread summarising my current point of view and see where we want to go from there.

1. However we end up solving the bootstrapping problem, I'm *definitely* a fan of us updating pyvenv in 3.4 to ensure that pip is available by default in new virtual environments created with that tool. I also have an idea for a related import system feature that I'll be sending to import-sig this afternoon (it's a variant on *.pth and *.egg-link files that should be able to address a variety of existing problems, including the one of *selectively* making system and user packages available in a virtual environment in a cross-platform way without needing to copy them)

2. While I was originally a fan of the implicit bootstrapping on demand design, I no longer like that notion. While Richard's bootstrap script is a very nice piece of work, the edge cases and neat tricks have built up to the point where they trip my "if the implementation is hard to explain, it's a bad idea" filter. Accordingly, I no longer think the implicit bootstrapping is a viable option.

3. That means there are two main options available to us that I still consider viable alternatives (the installer bundling idea was suggested in one of the off-list comments I mentioned):

* an explicit bootstrapping script
* bundling a *full* copy of pip with the Python installers for Windows and Mac OS X, but installing it to site-packages rather than to the standard library directory. That way pip can be used to upgrade itself as normal, rather than making it part of the standard library per se.
This is then closer to the bundled application model adopted for IDLE in PEP 434 (we could, in fact, move to distributing idle the same way). I'm currently leaning towards offering both, as we're going to need a tool for bootstrapping source builds, but the simplest way to bootstrap pip for Windows and Mac OS X users is to just *bundle a copy with the binary installers*. So long as the bundled copy looks *exactly* the way it would if installed later (so it can update itself), then we avoid the problem of coupling the pip update cycles to the standard library feature release cycle. The bundled version can be updated to the latest available versions when we do a Python maintenance release. For Linux, if you're using the system Python on a Debian or Fedora derivative, then sudo apt-get install python-pip and sudo yum install python-pip are both straightforward, and if you're using something else, then it's unlikely getting pip bootstrapped using the bootstrap script is a task that will bother you :) The python -m getpip command is still something we will want to provide, as it is useful to people that build their own copy of Python from source.

But is it going to make a difference? If we shift to using included copies of pip in binary installers over a bootstrap, I say leave out the bootstrap, as anyone building from source should know how to get pip installed on their machine or venv. The only reason I see it worth considering is if pyvenv starts bootstrapping pip and we want to support the case of pip not being installed. But if we are including it in the binary installer and are going to assume it's available through OS distros, then there isn't a need to, as pip can then install pip for us into the venv and skip any initial pip bootstrap.
If pip isn't found we can simply either point to the docs in the failure message or print out the one-liner it takes to install pip (and obviously there can be a --no-pip flag to skip this for people who want to install it manually like me who build from source). IOW I think taking the worldview in Python 3.4 that pip will come installed with Python unless you build from source negates the need for the bootstrap script beyond just saying ``curl https://pypi.python.org/get-pip.py | python`` if pip isn't found. This is highly unhelpful for dealing with systems automation. For the foreseeable future, the bulk of Python 3.4 installations will either be source installs, or homegrown packages based on source installs. The bundled pip doesn't need to be included with, say, an hg clone that you then build and install, but it does have to come with an install from an official release source tarball. --Noah signature.asc Description: Message signed with OpenPGP using GPGMail ___ Distutils-SIG maillist - Distutils-SIG@python.org http://mail.python.org/mailman/listinfo/distutils-sig
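The "point to the docs or print out the one-liner" behaviour discussed above is simple to sketch. This is purely illustrative (the function names and message text are mine, not from any actual patch); the only real artifact it uses is the get-pip.py one-liner quoted in the thread:

```python
import importlib.util
import sys

# The bootstrap one-liner quoted in the thread.
GET_PIP_ONELINER = "curl https://pypi.python.org/get-pip.py | python"

def module_available(name):
    """Return True if a top-level module can be imported (without importing it)."""
    return importlib.util.find_spec(name) is not None

def ensure_pip_hint():
    """If pip is missing, print the bootstrap one-liner and return False."""
    if module_available("pip"):
        return True
    sys.stderr.write("pip is not installed; to bootstrap it, run:\n"
                     "    %s\n" % GET_PIP_ONELINER)
    return False
```

A tool like pyvenv could call something shaped like `ensure_pip_hint()` (with a hypothetical `--no-pip` flag skipping it) rather than silently failing.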
Re: [Distutils] Current status of PEP 439 (pip bootstrapping)
On Jul 12, 2013, at 10:31 PM, Nick Coghlan wrote: In addition to the long thread based on Richard's latest set of updates, I've also received a few off-list comments on the current state of the proposal. So, I figured I'd start a new thread summarising my current point of view and see where we want to go from there. 1. However we end up solving the bootstrapping problem, I'm *definitely* a fan of us updating pyvenv in 3.4 to ensure that pip is available by default in new virtual environments created with that tool. I also have an idea for a related import system feature that I'll be sending to import-sig this afternoon (it's a variant on *.pth and *.egg-link files that should be able to address a variety of existing problems, including the one of *selectively* making system and user packages available in a virtual environment in a cross-platform way without needing to copy them) 2. While I was originally a fan of the implicit bootstrapping on demand design, I no longer like that notion. While Richard's bootstrap script is a very nice piece of work, the edge cases and neat tricks have built up to the point where they trip my "if the implementation is hard to explain, it's a bad idea" filter. Accordingly, I no longer think the implicit bootstrapping is a viable option. 3. That means there are two main options available to us that I still consider viable alternatives (the installer bundling idea was suggested in one of the off list comments I mentioned): * an explicit bootstrapping script * bundling a *full* copy of pip with the Python installers for Windows and Mac OS X, but installing it to site-packages rather than to the standard library directory. That way pip can be used to upgrade itself as normal, rather than making it part of the standard library per se. This is then closer to the bundled application model adopted for IDLE in PEP 434 (we could, in fact, move to distributing idle the same way).
I'm currently leaning towards offering both, as we're going to need a tool for bootstrapping source builds, but the simplest way to bootstrap pip for Windows and Mac OS X users is to just *bundle a copy with the binary installers*. So long as the bundled copy looks *exactly* the way it would if installed later (so it can update itself), then we avoid the problem of coupling the pip update cycles to the standard library feature release cycle. The bundled version can be updated to the latest available versions when we do a Python maintenance release. For Linux, if you're using the system Python on a Debian or Fedora derivative, then sudo apt-get install python-pip and sudo yum install python-pip are both straightforward, and if you're using something else, then it's unlikely getting pip bootstrapped using the bootstrap script is a task that will bother you :) The python -m getpip command is still something we will want to provide, as it is useful to people that build their own copy of Python from source. The bundling idea will obviously need to be discussed with the installer builders, and on python-dev in general, but that was always going to be the case for this PEP anyway (since it *does* touch CPython directly, rather than just being related to the packaging ecosystem). It achieves the aim of allowing people to assume some version of pip will be present on Python 3.4+ installations (or readily available in the case of Linux), while avoiding the problem of coupling pip updates to major Python version updates. As someone that has otherwise remained silent on this thread but was talking with people off-list, I probably owe them a public +1 for bundling pip as a semi-new category of non-stdlib-but-included project. This would bring us in line with other tools like gem and npm which work out of the box, and gives the user experience people want.
Care would have to be paid to make sure the final pip binary ends up in the right filename, much in the same way as we do python → python2 → python2.7 and such, but this is a solvable problem. How Linux distros adapt to this is certainly another question, but I would absolutely advocate to packagers that installing the main python package results in a working pip install, regardless of how that is accomplished. As someone that has to write system management scripts to install and configure Python, being able to count on both pip and pyvenv as standard tools in standard places is near-mind-blowingly awesome (give or take that it would be many years until I could reasonably assume 3.4 as the default python, but a man can dream). While the getpip module is interesting in a few use cases, it is vastly more valuable to me that we focus on the user experience of the majority of Python developers and deployments, and this is somewhere that Ruby and Node are getting it right in having the package tool simply be there by default. Bundling also addresses the myriad
Re: [Distutils] PyPI mirrors
On Jul 2, 2013, at 2:33 AM, David King wrote: Hi all, Has the relationship between PyPI mirrors changed since PyPI has started being served behind a CDN? I know people have been recommending against using --use-mirrors with pip since it doesn't take advantage of the CDN. I've been considering trying to get a public PyPI mirror set up and wanted to know how/if they're still being used. Yes, the use of public mirrors is no longer recommended as a best practice. The idea is that mirrors will continue to be an important part of the ecosystem for things like deploy caching, internal company mirrors, etc, but the federated, public mirror network concept is being retired. Several of the public mirrors have already shut down and just point back at PyPI, but others are still available if you want to use them. --Noah
Re: [Distutils] Fixing PyPI download stats with real-time log analysis (Was: PyPI Download Counts)
On Jun 22, 2013, at 10:33 PM, anatoly techtonik wrote: On Fri, Jun 14, 2013 at 5:05 PM, anatoly techtonik techto...@gmail.com wrote: Could you, please, share the log format+example, so that we can experiment with it? ping Additional assistance is not required for this project. Thank you for your interest, and stay tuned for future updates. --Noah
Re: [Distutils] b.pypi.python.org
On Jun 7, 2013, at 2:34 PM, Noah Kantrowitz n...@coderanger.net wrote: On Jun 7, 2013, at 2:27 PM, Donald Stufft don...@stufft.io wrote: On Jun 7, 2013, at 5:26 PM, ken cochrane kencochr...@gmail.com wrote: b.pypi.python.org is an official mirror that runs on Google App engine, and it uses a special mirror package built just for GAE. Code for it is found here. https://bitbucket.org/loewis/pypi-appengine b.pypi.python.org has been broken for over 104 days according to http://www.pypi-mirrors.org, and this is because of an issue when we switched pypi over to serving over SSL. I have submitted a pull request to fix this. https://bitbucket.org/loewis/pypi-appengine/pull-request/2/change-pypi-mirror-connection-to-https/diff#comment-262919 but it hasn't been accepted. I am one of the maintainers of b.pypi.python.org, so I can see the logs and push out a new version. I haven't needed to push a version out before, and I'm a little hesitant in case I do it wrong and break something. I also don't want to push code to GAE from my fork, until my PR gets accepted, or else someone else in the future might deploy the original one again and remove my fix. Two things: 1. Now that we have the pypi CDN up, do we still need this mirror? Honestly probably not. Mirrors are less important from an availability/speed side of things now and will likely move to being more useful for companies and such to use. OK, what would be the procedure for removing a mirror? Anyone know who is in charge of this mirror? I think Guido had it set up when he worked at Google, and Google is paying for the costs of the mirror, but now that he doesn't work for Google, not sure who might be the contact person on that side. I've asked Guido who the admins on the account are, to get it turned off. If he doesn't know I can try to find out internally. Brett, Here is the owner list that I can see on GAE - guido - kencochrane (me) - kumar.mcmillan - martin.v.loewis - r1chardj0n3s So Guido didn't even know he was an owner.
=) In terms of shutting down the app, you will want to do two things. First is empty out the cron.yaml file; it should have nothing more than cron:. After that you probably want to return 404 for everything; see http://stackoverflow.com/a/189935/236574 on how to do that for all URLs. If you want, Ken, I can clone https://bitbucket.org/kencochrane/pypi-appengine and send you a pull request to do all of this since I recently did something similar for py3ksupport.appspot.com when I shut it down. Brett, If our goal is to shut it down, then yes please, if you can send a pull request that would be great. We should also probably remove it from the pypi mirror pool before we do this so it no longer gets traffic sent its way. If we can get it up to date again, I think it is fine, but an out of date mirror is not useful to anyone, and it could cause problems in the long run. 2. If yes to 1. if someone can take a minute to review my PR, and leave comments, or if you have the power, accept my pull request and push out a new version so we can get the mirror up to date. I don't have such permission sadly. Thank you anyway. Paging Noah to kill the DNS Unless there's any objections to removing it? If no one complains in the next 24-ish hours I'll just point it back at pypi.python.org like we did with d.pypi. This is now complete. In another 24 hours there should be no traffic to the GAE app and it can just be archived or deleted. --Noah
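The "return 404 for everything" retirement step described above is GAE-specific in the thread, but the core idea is just a catch-all handler. As a framework-neutral sketch (plain WSGI, not the actual pypi-appengine code; the message body is illustrative):

```python
def retired_mirror_app(environ, start_response):
    """Catch-all WSGI app: answer every request with 404 and a pointer home."""
    body = b"This mirror has been retired; use https://pypi.python.org/ instead.\n"
    start_response("404 Not Found", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Combined with an emptied cron.yaml (so the sync job stops running), this leaves the app deployed but inert until the account owners can archive or delete it.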
Re: [Distutils] PyPI Download Counts
On Jun 9, 2013, at 1:04 PM, Alex Clark acl...@aclark.net wrote: Donald Stufft donald at stufft.io writes: So yes. I broke Download counts because they were not more important than people being able to actually use PyPI to install from. FWIW: You missed the moral of the story: when you make a decision like this, someone will *always* disagree with you (even over the most trivial things). And even if they don't, they may disagree with your approach (e.g. why not sort problems with download counts before enabling the CDN) So the only way to make everyone happy is to consider everyone who will be affected by your actions, before you take action. There is another way, make awesome and wait for history to determine who was happy :) --Noah
Re: [Distutils] b.pypi.python.org
On Jun 7, 2013, at 2:27 PM, Donald Stufft don...@stufft.io wrote: On Jun 7, 2013, at 5:26 PM, ken cochrane kencochr...@gmail.com wrote: b.pypi.python.org is an official mirror that runs on Google App engine, and it uses a special mirror package built just for GAE. Code for it is found here. https://bitbucket.org/loewis/pypi-appengine b.pypi.python.org has been broken for over 104 days according to http://www.pypi-mirrors.org, and this is because of an issue when we switched pypi over to serving over SSL. I have submitted a pull request to fix this. https://bitbucket.org/loewis/pypi-appengine/pull-request/2/change-pypi-mirror-connection-to-https/diff#comment-262919 but it hasn't been accepted. I am one of the maintainers of b.pypi.python.org, so I can see the logs and push out a new version. I haven't needed to push a version out before, and I'm a little hesitant in case I do it wrong and break something. I also don't want to push code to GAE from my fork, until my PR gets accepted, or else someone else in the future might deploy the original one again and remove my fix. Two things: 1. Now that we have the pypi CDN up, do we still need this mirror? Honestly probably not. Mirrors are less important from an availability/speed side of things now and will likely move to being more useful for companies and such to use. OK, what would be the procedure for removing a mirror? Anyone know who is in charge of this mirror? I think Guido had it set up when he worked at Google, and Google is paying for the costs of the mirror, but now that he doesn't work for Google, not sure who might be the contact person on that side. I've asked Guido who the admins on the account are, to get it turned off. If he doesn't know I can try to find out internally. Brett, Here is the owner list that I can see on GAE - guido - kencochrane (me) - kumar.mcmillan - martin.v.loewis - r1chardj0n3s So Guido didn't even know he was an owner. =) In terms of shutting down the app, you will want to do two things.
First is empty out the cron.yaml file; it should have nothing more than cron:. After that you probably want to return 404 for everything; see http://stackoverflow.com/a/189935/236574 on how to do that for all URLs. If you want, Ken, I can clone https://bitbucket.org/kencochrane/pypi-appengine and send you a pull request to do all of this since I recently did something similar for py3ksupport.appspot.com when I shut it down. Brett, If our goal is to shut it down, then yes please, if you can send a pull request that would be great. We should also probably remove it from the pypi mirror pool before we do this so it no longer gets traffic sent its way. If we can get it up to date again, I think it is fine, but an out of date mirror is not useful to anyone, and it could cause problems in the long run. 2. If yes to 1. if someone can take a minute to review my PR, and leave comments, or if you have the power, accept my pull request and push out a new version so we can get the mirror up to date. I don't have such permission sadly. Thank you anyway. Paging Noah to kill the DNS Unless there's any objections to removing it? If no one complains in the next 24-ish hours I'll just point it back at pypi.python.org like we did with d.pypi. --Noah
Re: [Distutils] Preemptive Apology for Volume of Mail
On Jun 3, 2013, at 11:29 PM, Chris Withers wrote: Please can you do something to stop it? Kill the MTA or something? This is ridiculous… As someone also in the top percentile of package maintainers I understand your annoyance, but just make a filter for don...@python.org for the day or something. The vast majority of PyPI users have only one package so asking us to derail the sending (probably resulting in having to begin again) is unhelpful. If the only cost to us all is hitting Ctrl-A Delete, I welcome progress with open arms. --Noah
Re: [Distutils] Preemptive Apology for Volume of Mail
On Jun 3, 2013, at 11:37 PM, Chris Withers wrote: On 04/06/2013 07:33, Noah Kantrowitz wrote: On Jun 3, 2013, at 11:29 PM, Chris Withers wrote: Please can you do something to stop it? Kill the MTA or something? This is ridiculous… As someone also in the top percentile of package maintainers I understand your annoyance, but just make a filter for don...@python.org for the day or something. The vast majority of PyPI users have only one package so asking us to derail the sending (probably resulting in having to begin again) is unhelpful. If the only cost to us all is hitting Ctrl-A Delete, I welcome progress with open arms. Are you not concerned that various bits of python.org involved in this process are going to start getting hit by RBLs and other spam filtering that will cause problems down the line as a result of all this noise? No, because only a very small number of people are going to be getting more than a handful of these, and any user with 50+ packages is hopefully enough of a power-user to not fly off the handle. --Noah
Re: [Distutils] Preemptive Apology for Volume of Mail
On Jun 4, 2013, at 12:14 AM, Chris Withers wrote: On 04/06/2013 07:45, Noah Kantrowitz wrote: As someone also in the top percentile of package maintainers I understand your annoyance, but just make a filter for don...@python.org for the day or something. The vast majority of PyPI users have only one package so asking us to derail the sending (probably resulting in having to begin again) is unhelpful. If the only cost to us all is hitting Ctrl-A Delete, I welcome progress with open arms. Are you not concerned that various bits of python.org involved in this process are going to start getting hit by RBLs and other spam filtering that will cause problems down the line as a result of all this noise? No, because only a very small number of people are going to be getting more than a handful of these, and any user with 50+ packages is hopefully enough of a power-user to not fly off the handle. That's not what I'm referring to; how much mail has actually been sent? MTAs end up being blacklisted automatically by ISPs and RBLs if they heuristically look like they're spewing spam. It's what companies like MailChimp and co spend their lives working around. Thanks to running many very very large mailing lists, I can promise you another few thousand messages exiting our servers is a non-issue. --Noah
Re: [Distutils] Preemptive Apology for Volume of Mail
On Jun 4, 2013, at 12:21 AM, Chris Withers wrote: On 04/06/2013 08:16, Noah Kantrowitz wrote: MTAs end up being blacklisted automatically by ISPs and RBLs if they heuristically look like they're spewing spam. It's what companies like MailChimp and co spend their lives working around. Thanks to running many very very large mailing lists, I can promise you another few thousand messages exiting our servers is a non-issue. This isn't a mailing list, this is a process sending out mails. I've recently seen something very similar start getting RBL errors from msn after only a hundred or so emails sent. Anyway, let's just hope… It's all the same, these emails are relaying via mail.python.org. --Noah
Re: [Distutils] option #1 plus download_url scraping
On Jun 4, 2013, at 3:16 PM, Barry Warsaw wrote: Like many of you, I got Donald's message about the changes to URLs for Cheeseshop packages. My question is about the three options; I think I want a middle ground, but I'm interested to see why you will discourage me from that wink. IIUC, option #1 is fine for packages hosted on PyPI. But what if our packages are *also* hosted elsewhere, say for redundancy purposes, and that external location needs to be scraped? Specifically, say I have a download_url in my setup.py. I *want* that url to be essentially a wildcard or index page because I don't want to have to change setup.py every time I make a release (unless of course `setup.py sdist` did it for me). I also can't add this url to the Additional File URLs page for my package because again I'd have to change it every time I do a release. So the middle ground I think I want is: option #1 plus scraping from download_url, but only download_url. Am I a horrible person for wanting this? Is there a better way? Do you mean you just don't want to update the version number in setup.py before you release? I'm a bit unsure of the reason for this. The goal is very specifically that hosting outside of PyPI is no longer encouraged. The reliability and performance of PyPI have enough of a track record now that "I want it on my own site just in case" no longer holds enough water to be worth the substantial downsides. --Noah
Re: [Distutils] A process for removal of PyPi entries
On Jun 2, 2013, at 7:21 PM, PJ Eby wrote: On Sat, Jun 1, 2013 at 4:29 PM, Lennart Regebro rege...@gmail.com wrote: On Sat, Jun 1, 2013 at 9:20 PM, Paul Moore p.f.mo...@gmail.com wrote: I'm -1 on anything that doesn't involve at least a minimal level of human involvement (possibly excepting an initial clean up exercise for projects with no author email) This is why I basically said I'm OK with automatic deletion after a time if there are no downloadable packages and no contact information. Otherwise the owner should be contacted. Some people are saying "files uploaded" vs. "downloadable packages". I don't like the "files uploaded" criterion because IMO it's a perfectly valid use case to list a package on PyPI which is only available via external revision control. Sorry, if you haven't had time to follow lately: we have already begun deprecating this system. It is entirely reasonable to start making plans for the case when this will no longer be an option. Heck, a project that only has planning documents and a reasonably active mailing list should still qualify for PyPI listing, else the original distutils-sig would not have qualified for reserving the name distutils on PyPI, before its first release. ;-) If a reasonably active project doesn't have anything to show after six months, I think we have different definitions of 'reasonably active'. --Noah
Re: [Distutils] Sooner or later, we're going to have to be more formal about how we name packages.
On Jun 1, 2013, at 11:09 AM, Jim Fulton wrote: On Sat, Jun 1, 2013 at 2:02 PM, Donald Stufft don...@stufft.io wrote: On Jun 1, 2013, at 2:01 PM, Donald Stufft don...@stufft.io wrote: I am opposed to this. Requiring someone to have purchased a domain adds a significant barrier to publishing a project. If there are no requirements that they have purchased the domain then it's nothing more than a convention and something that anyone who wants to do this can do. Fair enough. A common variation on this scenario, which avoids purchasing a domain, is to use a code hosting domain and project name, so, for example: org.bitbucket.j1m.foo. Of course, using a domain name without owning it is a form of squatting. All that means is either we move the problem (instead of one shared namespace we have two or three common ones) or we do it github-style and just prepend usernames, at which point you can skip the whole URI thing because usernames must be unique for reasons of general sanity, and I don't think it is a huge deal that a single person can't have two packages of the same name. Github-style namespacing just means that either names all suck (django/django, kennethreitz/requests) or you need to come up with some way to map un-namespaced names to their canonical form and we are more or less back at square one. If people don't mind the sucky names, they can already put that in their package name if the bare version is taken, so QED this is already doable in the current system; it just looks so ugly that no one wants to do it, and enforcing the ugly seems like a poor option. --Noah
Re: [Distutils] A process for removal of PyPi entries
On May 31, 2013, at 1:34 PM, Tres Seaver wrote: On 05/31/2013 09:18 AM, Lennart Regebro wrote: I'd be OK with after six months automatically removing packages that have only one owner/maintainer, and that owner/maintainer has no other packages, and the package has no available downloads, and no contact information on either package nor registered user. Why all the extras: if somebody wants to claim a project name, but can't upload a release for six months, they should just lose. I would actually be willing to have that cut down to a day: trying to grab the name before registering / uploading a release should result in loss of the claim. +1, I think this should just be treated as a form validation thing. It is a detail of the protocol that you upload a dist definition before the files, but I don't think we should consider it a valid PyPI entry until a file is uploaded (especially now that the default mode is to not scrape external sites). As we switch to not scraping, anything with no files should just vanish IMO, at which point it is available for registration again. If someone happens to ninja-upload between the setup.py register and setup.py upload, I think we can just throw an error message since chances of that happening are so amazingly low. --Noah
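The "vanish if no files after six months" rule being debated above amounts to a simple filter over registrations. The tuple shape and the exact cutoff below are illustrative assumptions for the sake of the sketch, not actual PyPI code:

```python
from datetime import datetime, timedelta

# "After six months", per the thread; the exact value is an assumption.
CUTOFF = timedelta(days=180)

def stale_registrations(projects, now):
    """Return names eligible for automatic removal.

    projects: iterable of (name, registered_at, file_count) tuples
    (a hypothetical record shape). A registration is stale if it is at
    least CUTOFF old and has never had a file uploaded.
    """
    return [name for name, registered_at, file_count in projects
            if file_count == 0 and now - registered_at >= CUTOFF]
```

The stricter "treat it as form validation" position in the thread would shrink `CUTOFF` toward zero, rejecting a bare `register` with no subsequent `upload` almost immediately.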
Re: [Distutils] PyPI Download Counts
On May 27, 2013, at 12:27 AM, holger krekel wrote: Hi Donald, On Sun, May 26, 2013 at 20:08 -0400, Donald Stufft wrote: Hello! As you have noticed, the download counts on PyPI are no longer updating. Originally this was due to an issue with the script that processes these download counts. However I have now removed the download counts from the PyPI webui and their use via the API is considered deprecated. There are numerous reasons for their removal/deprecation, some of which are: - Technically hard to make work with the new CDN - The CDN is being donated to the PSF, and the donated tier does not offer any form of log access What would be involved money/effort wise to get such access? - The work around for not having log access would greatly reduce the utility of the CDN - Highly inaccurate - A number of things prevent the download counts from being accurate, some of which include: - pip download cache - Internal or unofficial mirrors - Packages not hosted on PyPI (for comparisons sake) - Mirrors or unofficial grab scripts causing inflated counts (Last I looked 25% of the downloads were from a known mirroring script). Given the CDN, usage of mirrors may drop soon. - Not particularly useful - Just because a project has been downloaded a lot doesn't mean it's good - Similarly just because a project hasn't been downloaded a lot doesn't mean it's bad In short, because its value is low for various reasons, and the tradeoffs required to make it work are high, it has not been an effective use of resources. The API will continue to return values for it in order to not break scripts, however in the future all these values will be set to 0. The Web UI has been modified to no longer display it. While download counts do have the weaknesses you describe, they also provide a rough indication of usage which many of us referred to. I used it to determine interest and it partly drove my development efforts.
From that angle i am not happy about the change but of course i see the benefits. Not having download counts maybe lets us think harder about better metrics. The number of projects using a package as a dep might be one. We do still get some indication of package activity from looking through the logs, it just no longer has a direct correlation. We will see one request hit the backend servers from each shield node per hour when that package is being requested. At some point we could recycle this into some kind of abstract popularity count, but I don't think that's a development priority for anyone right now. --Noah
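The "one request per shield node per hour" pattern described above means origin logs bound what you can measure: at most packages x hours of activity, not downloads. A rough "abstract popularity" proxy along the lines Noah mentions might count distinct active hours per package. This is purely a sketch of the idea (the log tuple shape is a made-up assumption), not anything PyPI actually ran:

```python
from collections import Counter

def active_hours(log_entries):
    """Count, per package, the distinct hours in which at least one CDN
    shield node re-fetched it from the origin.

    log_entries: iterable of (package, node_id, unix_time) tuples — a
    hypothetical origin-log shape, not a real PyPI log format.
    """
    # Collapse all nodes and all requests within an hour into one bucket.
    buckets = {(pkg, ts // 3600) for pkg, _node, ts in log_entries}
    return Counter(pkg for pkg, _hour in buckets)
```

The resulting counts only say "this package was asked for during N distinct hours", which is exactly the weaker signal the message describes.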
Re: [Distutils] changelog / CDN inconsistency (was: Re: Good news everyone, PyPI is behind a CDN)
On May 27, 2013, at 12:18 PM, holger krekel wrote: On Mon, May 27, 2013 at 14:59 -0400, Donald Stufft wrote: On May 27, 2013, at 2:54 PM, holger krekel hol...@merlinux.eu wrote: On Mon, May 27, 2013 at 13:50 -0400, Donald Stufft wrote: On May 27, 2013, at 12:39 PM, Donald Stufft don...@stufft.io wrote: On May 27, 2013, at 8:08 AM, holger krekel hol...@merlinux.eu wrote: Hi Noah, Donald, (CC also Richard, Christian), i just checked with a test package and think we might have a cache consistency / changelog API problem. It took me a while but here is the basic thing: I uploaded a test package, changelog API reports it has changed, then i go to its simple page, and some of the time the new release file shows up, sometimes not. Tools like bandersnatch, pep381 and devpi-server (and probably others) use PyPI's changelog API to determine if there are changes. It seems those changes are signalled faster than they become consistently accessible through the CDN. This can lead to inconsistent mirrors because when the CDN has the files there is no change event anymore. Such mirrors are run by companies in-house so i think it's a real problem. Even without mirroring there can be problems because installs are not directly repeatable: pip install XYZ>=2.0 can give you first 2.0.1, then 2.0.0 a minute later. I had hoped that a particular ip address sees things consistently. I am not familiar with Fastly's caching properties -- can they notify about the fact that a page/file is consistently up-to-date everywhere? Or can the cache be globally invalidated for a particular page/file? Any other ideas? Failing customizing Fastly usage and also maybe for the short term, is/could there be a special location provided by pypi.python.org which the above tools could use to get at the actual non-cached data? We could then maybe mitigate the problem through updates of the respective tools. That would at least solve the problem for one of my customers i think.
best, holger On Sun, May 26, 2013 at 10:34 -0700, Noah Kantrowitz wrote: /farnsworth but seriously, at long last today it was my honor to throw the DNS switch to move PyPI to the Fastly caching CDN. I would like to thank Donald Stufft for doing much of the heavy lifting on the PyPI side, and to Fastly for graciously offering to host us. What does this mean for everyone? Well the biggest change is PyPI should get a whole lot faster. There are two major downsides however. There will now be a delay of several minutes in some cases between updating a package and having it be installable, and download counts will now be even more incorrect than they were before. The PyPI admins are discussing what to do about download counts long-term, but for now we all feel that the performance and availability benefits outweigh the loss. If anyone has any questions, or hears anything about issues with PyPI please don't hesitate to contact me. --Noah I mentioned it on twitter but might as well mention it here as well. Currently there is no invalidation going on. The effect on the mirroring was unanticipated and I'm currently getting the invalidation API setup within PyPI. - Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA /simple/ Pages should now be immediately invalidated when a new package is released. thanks Donald. Looking at the implementation, i wonder what happens if after ``self._conn.commit()`` a changelog API call arrives, returns changes and a client uses it to retrieve changes before the fastly-purging takes place. It's still a potential race-condition or am i missing something?
>>> best,
>>> holger
>>
>> There's no way around a race condition. ``self._conn.commit()`` is what makes the changes available. If we purge prior to committing, then if someone hits the page between the purge and the ``self._conn.commit()``, the client will see a page cached prior to the update (while the changelog will appear to be updated). Essentially the same problem we have now. The current implementation does mean that if a client happens to hit between the commit and the purge they'll see old data, however that's pretty unlikely. Purging can take a second and also depends on the network connectivity
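The commit-then-purge ordering argument above can be illustrated with a toy model (the dict-based "db" and "cache" and the function names here are made up for illustration; this is not PyPI's actual code):

```python
# Toy model of the commit/purge ordering discussed above. The "db" is
# the source of truth that commit() updates; the "cache" stands in for
# the CDN edge, which serves its copy until it is purged.

db = {"page": "v1"}
cache = {"page": "v1"}

def commit(value):
    db["page"] = value          # like self._conn.commit(): data now live

def purge():
    cache.pop("page", None)     # CDN invalidation: next read refills from db

def read():
    if "page" not in cache:
        cache["page"] = db["page"]
    return cache["page"]

# Ordering used by PyPI: commit first, then purge.
commit("v2")
stale = read()   # a client hitting in the commit->purge window sees "v1"
purge()
fresh = read()   # after the purge, every client sees "v2"
print(stale, fresh)
```

Reversing the order (purge before commit) has the window the reply describes: a request between the purge and the commit refills the cache with the old page, which then keeps being served even though the changelog reports a change.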
Re: [Distutils] changelog / CDN inconsistency (was: Re: Good news everyone, PyPI is behind a CDN)
On May 27, 2013, at 1:20 PM, holger krekel wrote:
> On Mon, May 27, 2013 at 12:58 -0700, Noah Kantrowitz wrote:
>> [...]
Re: [Distutils] Good news everyone, PyPI is behind a CDN
On May 27, 2013, at 2:21 PM, Ralf Schmitt <r...@systemexit.de> wrote:
> Noah Kantrowitz <n...@coderanger.net> writes:
>> /farnsworth
>>
>> but seriously, at long last today it was my honor to throw the DNS switch to move PyPI to the Fastly caching CDN. [...]
>
> the xmlrpc api is broken when using http 1.0. the second call to curl uses http/1.0 and returns an empty response:
>
>     $ cat > body.txt <<EOF
>     <?xml version='1.0'?>
>     <methodCall>
>     <methodName>package_releases</methodName>
>     <params>
>     <param>
>     <value><string>e</string></value>
>     </param>
>     </params>
>     </methodCall>
>     EOF
>     $ curl -X POST -d @body.txt http://pypi.python.org/pypi --header Content-Type:text/xml
>     <?xml version='1.0'?>
>     <methodResponse>
>     <params>
>     <param>
>     <value><array><data>
>     <value><string>1.4.5</string></value>
>     </data></array></value>
>     </param>
>     </params>
>     </methodResponse>
>     $ curl -0 -X POST -d @body.txt http://pypi.python.org/pypi --header Content-Type:text/xml
>     $

We have not supported HTTP 1.0 for quite some time. Even before the CDN move, we used the Host header to route between different HAProxy server blocks on the load balancers. I'm unaware of any reason people would be using HTTP 1.0 clients at this point; HTTP 1.1 has been a standard for 14 years now.
--Noah
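As an aside, the ``<methodCall>`` body in Ralf's transcript doesn't have to be written by hand; Python's standard-library XML-RPC support can serialize the same call. A minimal sketch, with no network access involved:

```python
import xmlrpc.client

# Serialize a package_releases("e") call into the same <methodCall>
# XML document that the curl transcript above posts to /pypi.
body = xmlrpc.client.dumps(("e",), methodname="package_releases")
print(body)
```

``xmlrpc.client.loads`` can parse the ``<methodResponse>`` document the same way, and ``xmlrpc.client.ServerProxy`` wraps the whole request/response cycle over HTTP/1.1.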
[Distutils] Good news everyone, PyPI is behind a CDN
/farnsworth

but seriously, at long last today it was my honor to throw the DNS switch to move PyPI to the Fastly caching CDN. I would like to thank Donald Stufft for doing much of the heavy lifting on the PyPI side, and Fastly for graciously offering to host us.

What does this mean for everyone? Well, the biggest change is that PyPI should get a whole lot faster. There are two major downsides, however: there will now be a delay of several minutes in some cases between updating a package and having it be installable, and download counts will now be even more incorrect than they were before. The PyPI admins are discussing what to do about download counts long-term, but for now we all feel that the performance and availability benefits outweigh the loss.

If anyone has any questions, or hears anything about issues with PyPI, please don't hesitate to contact me.

--Noah
Re: [Distutils] Proposal: Restrict the characters in a project name
On May 14, 2013, at 10:03 PM, Donald Stufft wrote:
> On May 15, 2013, at 12:54 AM, Donald Stufft <don...@stufft.io> wrote:
>> On May 15, 2013, at 12:45 AM, Donald Stufft <don...@stufft.io> wrote:
>>> On May 15, 2013, at 12:36 AM, Daniel Holth <dho...@gmail.com> wrote:
>>>> = would certainly not be a valid name. So I agree with you about restrictions, except possibly on the set of allowed characters. Of course the weird names aren't on PyPI yet; the current tooling has bad Unicode support. PEP 3131 pretty much sums up this issue and the objections exactly, if you search/replace. It begins:
>>>>
>>>> Python code is written by many people in the world who are not familiar with the English language, or even well-acquainted with the Latin writing system. Such developers often desire to define classes and functions with names in their native languages, rather than having to come up with an (often incorrect) English translation of the concept they want to name. By using identifiers in their native language, code clarity and maintainability of the code among speakers of that language improves.
>>>
>>> The contexts are different. It's unlikely that someone in the same codebase is going to attempt to trick you into running a function named fοο instead of foo (those are different, by the way). However, it is a very simple attack to tell newcomers to ``pip install Djangο`` instead of ``pip install Django`` (again, different).
>>
>> Perhaps this better explains my point: http://d.stufft.io/image/2t021y342a1d
>
> And an install log, just to prove it's possible: https://gist.github.com/dstufft/5581735

File me as a +1 for this change.
If we absolutely must support unicode package names, we should do the URLs in PyPI in punycode and have pip show a puny-mangled name in a confirmation prompt for anything with non-ascii characters in it. Yes, that does basically remove all reason to use unicode in package names, which is why I think blocking it is a much better idea. [a-zA-Z0-9_.-] is probably the right way to go.

--Noah
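Donald's Djangο/Django example and the proposed character whitelist are easy to demonstrate. A minimal sketch (the regex is the character set proposed above; the helper name is made up for illustration):

```python
import re
import unicodedata

# The character set proposed in the thread for project names.
VALID_NAME = re.compile(r"^[a-zA-Z0-9_.-]+$")

def is_valid_name(name):
    return VALID_NAME.match(name) is not None

real = "Django"
fake = "Djang\u03bf"  # ends in a Greek omicron, not a Latin "o"

print(is_valid_name(real))         # True
print(is_valid_name(fake))         # False
print(unicodedata.name(fake[-1]))  # GREEK SMALL LETTER OMICRON
```

The whitelist rejects the homoglyph name outright, which is the point of the proposal: the two strings render identically in most fonts, so no human-facing confirmation prompt can be relied on to catch the substitution.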
Re: [Distutils] distil 0.1.1 released
On May 3, 2013, at 11:14 AM, Nick Coghlan <ncogh...@gmail.com> wrote:
> I would also be relatively happy for pip to refuse the temptation to guess if run globally, and require an explicit --user or --system whenever it is run outside a virtual environment. However, I think it's better to make the typical "pip install whatever" work for most unprivileged users without requiring elevated privileges. I agree the proposed exception for root doesn't make sense, so I withdraw that idea, even though installing things into root's home directory is a little strange. As far as Debian's dist-packages setup goes, that's their workaround for this misfeature of the current Python packaging ecosystem.

As someone responsible for working with Python app deployment tools: this _will_ break the universe. Yes, people should be using virtualenv; however, many don't. Deal with it. People expect package install as root to work like every other package system (yum, apt, take your pick), where they run sudo pip install a b c d and then run their app as a service-specific user. Magically writing to ~root will clearly not work in this case unless you also run your app as root (I know some people do that too, but it's not behavior that should be encouraged). This proposal is entirely non-viable for anything but 100% best-practices users, full stop.

--Noah
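Nick's "outside a virtual environment" condition is something a tool can detect from within Python itself. A minimal sketch (the function name is made up; it covers both PEP 405 venvs and the older virtualenv package):

```python
import sys

def in_virtual_env():
    # PEP 405 venvs: sys.base_prefix differs from sys.prefix.
    # Pre-PEP-405 virtualenv: a sys.real_prefix attribute is set.
    return (getattr(sys, "base_prefix", sys.prefix) != sys.prefix
            or hasattr(sys, "real_prefix"))

# A tool following Nick's suggestion could refuse to guess here and
# demand an explicit --user or --system when this returns False.
print(in_virtual_env())
```

This is the same kind of check pip itself would need in order to change its default behavior only for global (non-virtualenv) invocations.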