Re: [Distutils] draft PEP: manylinux1
On 30 January 2016 at 05:35, Nate Coraor wrote:
> On Fri, Jan 22, 2016 at 6:29 AM, Nick Coghlan wrote:
>> For the time being, these users should either pass the "--no-binary"
>> option to pip, ask their distro to provide an index of pre-built wheel
>> files for that distro (once we have the distro-specific wheel tagging
>> PEP sorted out), or else ask their distro to update system Python
>> packages in a more timely fashion (or all of the above).
>
> Is there a distro-specific wheel tagging PEP in development somewhere
> that I missed? If not, I will get the ball rolling on it.

Yeah, the "we" there was actually "Nate", since you recently mentioned
having made progress on this for Galaxy :)

Cheers,
Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia

___
Distutils-SIG maillist - Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] draft PEP: manylinux1
> On Jan 29, 2016, at 2:35 PM, Nate Coraor wrote:
>
> Is there a distro-specific wheel tagging PEP in development somewhere
> that I missed? If not, I will get the ball rolling on it.

I think this is a great idea, and I think it actually pairs nicely with
the manylinux proposal. It should be pretty easy to cover the vast bulk
of users with a handful of platform specific wheels (1-3ish) and then a
manylinux wheel to cover the rest. It would let a project use newer
toolchains/libraries in the common case, but still fall back to the
older ones on more unusual platforms.

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
Re: [Distutils] draft PEP: manylinux1
On 30 January 2016 at 05:30, Nate Coraor wrote:
> I wonder if, in relation to this, it may be best to have two separate
> tags: one to indicate that the wheel includes external libraries rolled
> in to it, and one to indicate that it doesn't. That way, a user can
> make a conscious decision as to whether they want to install any wheels
> that could include libraries that won't be maintained by the
> distribution package manager. That way if we end up in a future world
> where manylinux wheels and distro-specific wheels (that may depend on
> non-default distro packages) live in PyPI together, there'd be a way to
> indicate a preference.

I don't think we want to go into that level of detail in the platform
tag, but metadata for bundled pre-built binaries in wheels and vendored
dependencies in sdists is worth considering as an enhancement in its own
right.

Cheers,
Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
[Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions
Hi all,

I think this is ready for pronouncement now -- thanks to everyone for
all their feedback over the last few weeks!

The only change relative to the last posting is that we rewrote the
section on "Platform detection for installers", to switch to letting
distributors explicitly control manylinux1 compatibility by means of a
_manylinux module.

-n

---

PEP: 513
Title: A Platform Tag for Portable Linux Built Distributions
Version: $Revision$
Last-Modified: $Date$
Author: Robert T. McGibbon , Nathaniel J. Smith
BDFL-Delegate: Nick Coghlan
Discussions-To: Distutils SIG
Status: Draft
Type: Informational
Content-Type: text/x-rst
Created: 19-Jan-2016
Post-History: 19-Jan-2016, 25-Jan-2016, 29-Jan-2016

Abstract
========

This PEP proposes the creation of a new platform tag for Python package
built distributions, such as wheels, called ``manylinux1_{x86_64,i386}``
with external dependencies limited to a standardized, restricted subset
of the Linux kernel and core userspace ABI. It proposes that PyPI
support uploading and distributing wheels with this platform tag, and
that ``pip`` support downloading and installing these packages on
compatible platforms.

Rationale
=========

Currently, distribution of binary Python extensions for Windows and
OS X is straightforward. Developers and packagers build wheels [1]_
[2]_, which are assigned platform tags such as ``win32`` or
``macosx_10_6_intel``, and upload these wheels to PyPI. Users can
download and install these wheels using tools such as ``pip``.

For Linux, the situation is much more delicate. In general, compiled
Python extension modules built on one Linux distribution will not work
on other Linux distributions, or even on different machines running the
same Linux distribution with different system libraries installed.

Build tools using PEP 425 platform tags [3]_ do not track information
about the particular Linux distribution or installed system libraries,
and instead assign all wheels the too-vague ``linux_i386`` or
``linux_x86_64`` tags.
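As an aside (not part of the PEP text): the "too-vague" PEP 425 platform tag mentioned above is derived by normalizing the platform string from the standard library, so on any Linux machine it collapses to the same coarse value regardless of distribution. A minimal sketch, using ``sysconfig.get_platform()``:

```python
import sysconfig

def pep425_platform_tag():
    # PEP 425 derives the platform tag by replacing '-' and '.' in the
    # platform string with '_' (e.g. 'linux-x86_64' -> 'linux_x86_64').
    # On Linux this carries no distro or library-version information.
    return sysconfig.get_platform().replace("-", "_").replace(".", "_")

print(pep425_platform_tag())
```

Running this on any x86-64 Linux box prints ``linux_x86_64``, whether the box is Debian, Fedora, or a minimal musl-based container -- which is exactly the ambiguity the PEP addresses.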
Because of this ambiguity, there is no expectation that
``linux``-tagged built distributions compiled on one machine will work
properly on another, and for this reason, PyPI has not permitted the
uploading of wheels for Linux.

It would be ideal if wheel packages could be compiled that would work
on *any* Linux system. But, because of the incredible diversity of
Linux systems -- from PCs to Android to embedded systems with custom
libcs -- this cannot be guaranteed in general. Instead, we define a
standard subset of the kernel+core userspace ABI that, in practice, is
compatible enough that packages conforming to this standard will work
on *many* Linux systems, including essentially all of the desktop and
server distributions in common use. We know this because there are
companies who have been distributing such widely-portable pre-compiled
Python extension modules for Linux -- e.g. Enthought with Canopy [4]_
and Continuum Analytics with Anaconda [5]_.

Building on the compatibility lessons learned from these companies, we
thus define a baseline ``manylinux1`` platform tag for use by binary
Python wheels, and introduce the implementation of preliminary tools to
aid in the construction of these ``manylinux1`` wheels.

Key Causes of Inter-Linux Binary Incompatibility
================================================

To properly define a standard that will guarantee that wheel packages
meeting this specification will operate on *many* Linux platforms, it
is necessary to understand the root causes which often prevent
portability of pre-compiled binaries on Linux. The two key causes are
dependencies on shared libraries which are not present on users'
systems, and dependencies on particular versions of certain core
libraries like ``glibc``.
External Shared Libraries
-------------------------

Most desktop and server Linux distributions come with a system package
manager (examples include ``APT`` on Debian-based systems, ``yum`` on
``RPM``-based systems, and ``pacman`` on Arch Linux) that manages,
among other responsibilities, the installation of shared libraries
installed to system directories such as ``/usr/lib``. Most non-trivial
Python extensions will depend on one or more of these shared libraries,
and thus function properly only on systems where the user has the
proper libraries (and the proper versions thereof), either installed
using their package manager, or installed manually by setting certain
environment variables such as ``LD_LIBRARY_PATH`` to notify the runtime
linker of the location of the depended-upon shared libraries.

Versioning of Core Shared Libraries
-----------------------------------

Even if the developers of a Python extension module wish to use no
external shared libraries, the modules will generally have a dynamic
runtime dependency on the GNU C library, ``glibc``. While it is
possible, statically linking ``glibc`` is usually a bad idea because
certain important C functions like ``dlopen()`` cannot be called from
code that statically links ``glibc``.
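To make the ``glibc`` runtime dependency above concrete, here is a small sketch (not part of the PEP text) that asks the already-loaded C library for its version via ``gnu_get_libc_version()``; on non-glibc systems (musl, macOS, etc.) the symbol is simply absent:

```python
import ctypes

def glibc_version():
    """Return the glibc version string, or None if not running on glibc."""
    try:
        # CDLL(None) returns a handle to the main program's namespace,
        # which includes the C library that is already loaded.
        libc = ctypes.CDLL(None)
        gnu_get_libc_version = libc.gnu_get_libc_version
    except (OSError, AttributeError):
        return None  # not a glibc-based system
    gnu_get_libc_version.restype = ctypes.c_char_p
    return gnu_get_libc_version().decode("ascii")

print(glibc_version())
```

On a typical distribution this prints something like ``2.17``; a binary built against a newer glibc than the one found this way will generally fail to load with a ``GLIBC_x.y not found`` error.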
Re: [Distutils] draft PEP: manylinux1
On Fri, Jan 29, 2016 at 11:35 AM, Nate Coraor wrote:
> On Fri, Jan 22, 2016 at 6:29 AM, Nick Coghlan wrote:
>> On 22 January 2016 at 20:48, M.-A. Lemburg wrote:
>>> People who rely on Linux distributions want to continue
>>> to do so and get regular updates for system packages from
>>> their system vendor. Having wheel files override these
>>> system packages by including libs directly in the wheel
>>> silently breaks this expectation, potentially opening
>>> up the installations for security holes, difficult to
>>> track bugs and possible version conflicts with already
>>> loaded versions of the shared libs.
>>
>> For the time being, these users should either pass the "--no-binary"
>> option to pip, ask their distro to provide an index of pre-built wheel
>> files for that distro (once we have the distro-specific wheel tagging
>> PEP sorted out), or else ask their distro to update system Python
>> packages in a more timely fashion (or all of the above).
>
> Is there a distro-specific wheel tagging PEP in development somewhere
> that I missed? If not, I will get the ball rolling on it.

It's all yours, I think :-). Some unsolicited advice that you can take
or leave...:

I think there are two separable questions that are easy to conflate
here: (1) the use of a distro tag to specify an ABI ("when I say
libssl.so.1, I mean one that exports the same symbols and semantics as
the one that Fedora shipped"), and (2) the use of a distro tag for
packages that want to depend on more system-supplied libraries and let
the distro worry about updating them.

I also think there are two compelling use cases for these: (a) folks
who would be happy with manylinux, except for whatever reason they
can't use it, e.g. because they're on ARM. I bet a platform tag for
Raspbian wheels would be quite popular, even if it still required
people to vendor dependencies. (b) folks who really want integration
between pip and the distro package manager.
So my suggestion would be to start with one PEP that just tries to
define distro-specific platform tags to answer question (1) and target
use case (a), and then a second PEP that adds new metadata for
specifying external system requirements to answer question (2) and
target use case (b).

The advantage of doing things in this order is that once you have a
platform tag saying "this is a Debian wheel" or "this is a Fedora
wheel" scoping you to a particular distribution, then your external
package metadata can say "I need the package 'libssl1.0.2' version
1.0.2e-1 or better" or "I need the package 'openssl' version
1.0.2e-5.fc24 or better", and avoid the tarpit of trying to define some
cross-distro standard for package naming.

-n

--
Nathaniel J. Smith -- https://vorpus.org
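One way a distro-specific platform tag like the ones suggested above could be derived (purely a sketch of the idea -- nothing here has been specified, and the tag format is hypothetical) is from the ``ID`` and ``VERSION_ID`` fields of an ``os-release`` file:

```python
def distro_platform_tag(os_release_text, arch="x86_64"):
    """Build a hypothetical distro tag like 'fedora_23_x86_64' from
    the KEY=value lines of an /etc/os-release file."""
    fields = {}
    for line in os_release_text.splitlines():
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            fields[key] = value.strip().strip('"')
    tag = "{}_{}_{}".format(fields["ID"], fields["VERSION_ID"], arch)
    # Normalize like PEP 425 tags: no '.' or '-' allowed.
    return tag.replace(".", "_").replace("-", "_")

sample = 'NAME="Fedora"\nID=fedora\nVERSION_ID=23\n'
print(distro_platform_tag(sample))  # fedora_23_x86_64
```

Scoping the tag to a distro release this way is what would let the later external-requirements metadata use native package names without a cross-distro naming standard.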
Re: [Distutils] How to get pip to really, really, I mean it -- rebuild this damn package!
On Fri, Jan 29, 2016 at 2:43 PM, Nathaniel Smith wrote:
> This is the point where I'd probably start adding print statements to
> my local copy of pip. E.g. grep for that "requirement already
> satisfied" message and then have it print out sys.path as well. To me
> this still smells like some simple bug in one of conda, your
> environment, or pip, so it's worth figuring out...

Yeah, I was hoping not to have to debug pip :-) But Robert indicated
that this may well be a pip bug -- I'll try an older version first,
then maybe dig into the pip code...

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

chris.bar...@noaa.gov
Re: [Distutils] How to get pip to really, really, I mean it -- rebuild this damn package!
On Jan 29, 2016 7:48 AM, "Chris Barker - NOAA Federal"
< chris.bar...@noaa.gov> wrote:
>
> >> Requirement already satisfied (use --upgrade to upgrade): gsw==3.0.3 from
> >> file:///Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3 in
> >> /Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3
> >
> > I think this is saying that pip thinks it has found an
> > already-installed version of gsw 3.0.3 in sys.path, and that the
> > directory in your sys.path where it's already installed is
> >
> > /Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3
>
> That is the temp dir conda sets up to unpack downloaded files, and do
> its work in -- hence the name. I'll look and see what's there. I'm
> pretty sure conda build starts out with an empty dir, however. And
> that dir should not be on sys.path.

This is the point where I'd probably start adding print statements to
my local copy of pip. E.g. grep for that "requirement already
satisfied" message and then have it print out sys.path as well. To me
this still smells like some simple bug in one of conda, your
environment, or pip, so it's worth figuring out...

> > I think this means that that directory is (a) in sys.path, and (b)
> > contains a .egg-info/.dist-info directory for gsw 3.0.3. Part (a)
> > seems weird and broken.
>
> Indeed. And I get the same symptoms with a clean environment that I've
> set up outside conda build. Though with the same source dir. But with
> conda build, it's a fresh unpack of the tarball.
>
> > Do you have "." in your PYTHONPATH or anything like that?
>
> God no!

Too bad, it would have been an easy explanation ;-)

> > Don't know why it seems to be building a wheel for it, if it already
> > thinks that it's installed... this is also odd.
>
> Yes it is. But it doesn't install it :-(
>
> >
> > $PYTHON -m pip install --no-cache-dir --upgrade --force-reinstall ./
> >
> > ? Though I'd think that -I would have the same effect as --force-reinstall...
>
> So did I, and I think I tried --force-reinstall already, but I will again.
>
> > (It doesn't look like the cache dir is your problem here, but you do
> > probably want to use --no-cache-dir anyway just as good practice, just
> > because you don't want to accidentally package up a stale version of
> > the software that got pulled out of your cache instead of the version
> > you thought you were packaging in the tree in front of you.
>
> Exactly. Doesn't seem to make a difference, though.
>
> > Also, I think it's a bug in pip that it caches builds of source trees
> > -- PyPI can enforce the rule that each (package name, version number)
> > sdist is unique, but for a work-in-progress VCS checkout it's just not
> > true that (package name, version number) uniquely identifies a
> > snapshot of the whole tree. So in something like 'pip install .', then
> > requirement resolution code should treat this as a special requirement
> > that it wants *this exact tree*, not just any package that has the
> > same (package name, version number) as this tree; and the resulting
> > wheel should not be cached.
>
> Absolutely! In fact, I'll bet that approach is the source of the
> problem here. If not automagically, there should be a flag, at least.

Maybe... it looks like at least @dstufft agrees that the current
behavior is a bug: https://github.com/pypa/pip/issues/536 and if this
were fixed then it would at least hide your problem. But the underlying
bug here is that pip seems to think that a package is installed when it
wasn't. Hacking it to ignore whether a package is installed would just
be papering over this.

-n
Re: [Distutils] Fwd: How to get pip to really, really, I mean it -- rebuild this damn package!
On Fri, Jan 29, 2016 at 12:25 PM, Robert Collins wrote:
> Please try pip 7.1? Latest before 8; we're not meant to be caching
> wheels of by-location things from my memory, but it may have
> regressed/changed with the cache changes made during the 8 development
> cycle.

Indeed, this is looking like a regression. I have done:

* create 2 fresh conda environments

* downgrade pip to 7.1.2 in both (conda seems to put the latest pip in
  a new environment by default)

$ pip --version
pip 7.1.2 from /Users/chris.barker/miniconda2/envs/test2/lib/python2.7/site-packages (python 2.7)

* build and install the package in one of them:

$ pip install ./
Processing /Users/chris.barker/Temp/geojson-1.3.2
Requirement already satisfied (use --upgrade to upgrade): setuptools in /Users/chris.barker/miniconda2/envs/test1/lib/python2.7/site-packages/setuptools-19.4-py2.7.egg (from geojson==1.3.2)
Building wheels for collected packages: geojson
  Running setup.py bdist_wheel for geojson
  Stored in directory: /Users/chris.barker/Library/Caches/pip/wheels/67/47/d8/01cf2332293b60900ec87ff03bbe9fff92bc85f194e0eb0e74
Successfully built geojson
Installing collected packages: geojson
Successfully installed geojson-1.3.2
You are using pip version 7.1.2, however version 8.0.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

It built and installed fine:

(test1)orrw-m-4179497:geojson-1.3.2 chris.barker$ python -c "import geojson"
(test1)orrw-m-4179497:geojson-1.3.2 chris.barker$

Now I go to install in the other environment: first, check if geojson
is there:

(test2)orrw-m-4179497:Temp chris.barker$ python -c "import geojson"
Traceback (most recent call last):
  File "", line 1, in
ImportError: No module named geojson

nope. So use pip to install from source:

(test2)orrw-m-4179497:geojson-1.3.2 chris.barker$ pip install ./
Processing /Users/chris.barker/Temp/geojson-1.3.2
Requirement already satisfied (use --upgrade to upgrade): setuptools in /Users/chris.barker/miniconda2/envs/test2/lib/python2.7/site-packages/setuptools-19.4-py2.7.egg (from geojson==1.3.2)
Building wheels for collected packages: geojson
  Running setup.py bdist_wheel for geojson
  Stored in directory: /Users/chris.barker/Library/Caches/pip/wheels/67/47/d8/01cf2332293b60900ec87ff03bbe9fff92bc85f194e0eb0e74
Successfully built geojson
Installing collected packages: geojson
Successfully installed geojson-1.3.2
You are using pip version 7.1.2, however version 8.0.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

All good. See if it works:

(test2)orrw-m-4179497:Temp chris.barker$ python -c "import geojson; print geojson.__version__"
1.3.2
(test2)orrw-m-4179497:Temp chris.barker$

yup -- all good.

However -- AARRGG! I then repeated that same exercise with pip 8.0.1,
which was failing for me before (multiple times) but it just worked!
This is really, really frustrating... did the downgrade, then
re-upgrade of pip somehow fix this? Really, really weird.

OK -- well, I'm done for now with pip for building conda packages.

-CHB
Re: [Distutils] PEP 513: A Platform Tag for Portable Linux Built Distributions
On Fri, Jan 29, 2016 at 6:57 AM, Nick Coghlan wrote:
> I like the simple approach Nathaniel now has in the PEP, as that seems
> like it should be relatively easy to handle as either modifications
> directly to a python package, or as an add-on module (ala installing
> LSB support, only much lighter weight). It doesn't scale to lots of
> platform tags, but we don't really intend it to - we'd like to avoid
> platform tag proliferation if we can, and if we're successful in that,
> then we won't need to solve the problem with a lack of scalability.
>
> Cheers,
> Nick.
>
> P.S. To provide a bit more context on "Why is a separate file
> easier?", the main benefit is that adding a new file simply *cannot*
> break the sys or platform modules, while patching them can. That's
> still not a guarantee that any given distro will actually provide an
> importable "_manylinux" module to indicate compatibility, but given
> the fallback glibc check, _manylinux is more a hedge against future
> *in*compatibility than it is anything else.

Great! I think we're ready for pronouncement then... I'll resend the
(hopefully final) version in a moment.

-n

--
Nathaniel J. Smith -- https://vorpus.org
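For context, the detection logic being discussed -- a distributor-provided ``_manylinux`` override consulted first, with a glibc version heuristic as fallback -- looks roughly like this. This is a paraphrase of the approach in the draft PEP, not the PEP's exact code (in particular it uses ``sysconfig.get_platform()`` where the PEP text uses the ``distutils`` equivalent):

```python
import ctypes
import sysconfig

def have_compatible_glibc(major, minimum_minor):
    """True if the running C library is glibc >= major.minimum_minor."""
    try:
        process_namespace = ctypes.CDLL(None)
        gnu_get_libc_version = process_namespace.gnu_get_libc_version
    except (OSError, AttributeError):
        return False  # not a glibc-based system
    gnu_get_libc_version.restype = ctypes.c_char_p
    version = gnu_get_libc_version().decode("ascii").split(".")
    return int(version[0]) == major and int(version[1]) >= minimum_minor

def is_manylinux1_compatible():
    # Only Linux, and only x86-64 / i686.
    if sysconfig.get_platform() not in ("linux-x86_64", "linux-i686"):
        return False
    try:
        # Distributors can ship a _manylinux module to override detection.
        import _manylinux
        return bool(_manylinux.manylinux1_compatible)
    except (ImportError, AttributeError):
        pass
    # Fallback heuristic: the manylinux1 baseline (CentOS 5) has glibc 2.5.
    return have_compatible_glibc(2, 5)
```

The point of Nick's P.S. shows up clearly here: the ``_manylinux`` import can only ever raise ``ImportError``/``AttributeError`` and fall through, so shipping (or not shipping) the file cannot break anything else.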
[Distutils] Fwd: How to get pip to really, really, I mean it -- rebuild this damn package!
Bah, offlist by mistake.

-- Forwarded message --
From: Robert Collins
Date: 30 January 2016 at 09:25
Subject: Re: [Distutils] How to get pip to really, really, I mean it -- rebuild this damn package!
To: Chris Barker - NOAA Federal

Please try pip 7.1? Latest before 8; we're not meant to be caching
wheels of by-location things from my memory, but it may have
regressed/changed with the cache changes made during the 8 development
cycle.

-Rob

On 30 January 2016 at 04:48, Chris Barker - NOAA Federal wrote:
>>> Requirement already satisfied (use --upgrade to upgrade): gsw==3.0.3 from
>>> file:///Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3 in
>>> /Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3
>>
>> I think this is saying that pip thinks it has found an
>> already-installed version of gsw 3.0.3 in sys.path, and that the
>> directory in your sys.path where it's already installed is
>>
>> /Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3
>
> That is the temp dir conda sets up to unpack downloaded files, and do
> its work in -- hence the name. I'll look and see what's there. I'm
> pretty sure conda build starts out with an empty dir, however. And
> that dir should not be on sys.path.
>
>> I think this means that that directory is (a) in sys.path, and (b)
>> contains a .egg-info/.dist-info directory for gsw 3.0.3. Part (a)
>> seems weird and broken.
>
> Indeed. And I get the same symptoms with a clean environment that I've
> set up outside conda build. Though with the same source dir. But with
> conda build, it's a fresh unpack of the tarball.
>
>> Do you have "." in your PYTHONPATH or anything like that?
>
> God no!
>
>> Don't know why it seems to be building a wheel for it, if it already
>> thinks that it's installed... this is also odd.
>
> Yes it is. But it doesn't install it :-(
>
>>
>> $PYTHON -m pip install --no-cache-dir --upgrade --force-reinstall ./
>>
>> ? Though I'd think that -I would have the same effect as --force-reinstall...
>
> So did I, and I think I tried --force-reinstall already, but I will again.
>
>> (It doesn't look like the cache dir is your problem here, but you do
>> probably want to use --no-cache-dir anyway just as good practice, just
>> because you don't want to accidentally package up a stale version of
>> the software that got pulled out of your cache instead of the version
>> you thought you were packaging in the tree in front of you.
>
> Exactly. Doesn't seem to make a difference, though.
>
>> Also, I think it's a bug in pip that it caches builds of source trees
>> -- PyPI can enforce the rule that each (package name, version number)
>> sdist is unique, but for a work-in-progress VCS checkout it's just not
>> true that (package name, version number) uniquely identifies a
>> snapshot of the whole tree. So in something like 'pip install .', then
>> requirement resolution code should treat this as a special requirement
>> that it wants *this exact tree*, not just any package that has the
>> same (package name, version number) as this tree; and the resulting
>> wheel should not be cached.
>
> Absolutely! In fact, I'll bet that approach is the source of the
> problem here. If not automagically, there should be a flag, at least.
>
> However, what seems to be happening is that pip is looking outside the
> current Python environment somewhere to see if this package needs to
> be installed. It may be something that works with virtualenv, but
> doesn't with conda environments for some reason.
>
> I guess on some level pip simply isn't designed to build and install
> from local source :-(
>
> In the end, I'm still confused: does pip install give me anything that:
>
> setup.py install single-version-externally-managed
>
> Doesn't? Other than support for non-setuptools installs, anyway.
>
> CHB
>
>> I don't know if there are any bugs filed
>> in pip on this...)
>>
>> -n
>>
>> --
>> Nathaniel J. Smith -- https://vorpus.org

--
Robert Collins
Distinguished Technologist
HP Converged Cloud
Re: [Distutils] draft PEP: manylinux1
On Fri, Jan 22, 2016 at 10:26 PM, Nick Coghlan wrote:
> On 22 January 2016 at 22:07, M.-A. Lemburg wrote:
>> However, system vendors will often be a lot faster with updates
>> than package authors, simply because it's their business model,
>> so as user you will want to benefit from those updates and
>> not have to rely on the package author to ship new wheel files.
>
> This is true for the subset of packages monitored by distro security
> response teams, but there's a *lot* of software not currently packaged
> for Linux distros that never will be as more attention is given to the
> "rebuild the world on demand" model that elastic cloud computing and
> fast internet connections enable.
>
> My fundamental concern is that if a package author publishes a distro
> dependent wheel file, pip attempts to install it, and it doesn't work,
> the reaction for many users is going to be "Python packaging is
> broken", not "the packaging of this particular package is broken".

This is already broken with source dists if you don't have the
appropriate -dev packages (or a compiler) installed. Some package
authors provide more useful feedback explaining what the problem is and
how one might resolve it, rather than dying on a compiler error due to
a missing header, but many do not.

One solution to this for both source and binary distributions is
package manager awareness in the build/install tools, and to have
packages declare their dependencies in structured metadata. A
translational layer would make this easier on package authors: if they
only had to say they depend on "OpenSSL headers" and that was
translated to the correct package for the OS, this could be relayed to
the user at build time ("install these packages using this command") or
the package manager could be directly invoked, if the user has chosen
to allow the build/install tool to do that.

--nate

> However, moving the "generic linux wheels are ignored by default"
> behaviour to pip-the-client, rather than enforcing it as a restriction
> on PyPI uploads could definitely be a reasonable alternative way of
> addressing that concern.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
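The translational layer suggested in the message above could be as simple as a table keyed by an abstract dependency name. Everything in this sketch is hypothetical -- the abstract names, the mapping, and the helper are illustrations of the idea, not any proposed metadata format:

```python
# Hypothetical abstract-name -> distro-package translation table.
EXTERNAL_DEP_MAP = {
    "openssl-headers": {
        "debian": "libssl-dev",
        "fedora": "openssl-devel",
        "arch": "openssl",
    },
}

# Hypothetical per-distro install command templates.
INSTALL_COMMANDS = {
    "debian": "apt-get install {}",
    "fedora": "dnf install {}",
    "arch": "pacman -S {}",
}

def install_hint(abstract_name, distro_id):
    """Suggest a package-manager command for a declared build dependency,
    or None if no translation is known for this distro."""
    pkg = EXTERNAL_DEP_MAP.get(abstract_name, {}).get(distro_id)
    if pkg is None:
        return None
    return INSTALL_COMMANDS[distro_id].format(pkg)

print(install_hint("openssl-headers", "debian"))  # apt-get install libssl-dev
```

A build tool could print this hint when a header is missing, or (with the user's permission) run the command directly, which is exactly the "relayed at build time" vs. "directly invoked" split described above.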
Re: [Distutils] draft PEP: manylinux1
On Fri, Jan 22, 2016 at 5:42 AM, Nick Coghlan wrote: > On 22 January 2016 at 19:33, M.-A. Lemburg wrote: > > For example, if a package needs a specific version of libpng, > > the package author can document this and the user can then make > > sure to install that particular version. > > The assumption that any given Python user will know how to do this is > not a reasonable assumption in 2016. > > If a publisher wants to bundle a particular version of libpng, they > can. If (as is also entirely reasonable) they don't want to assume the > associated responsibilities for responding to CVEs, then they can > stick with source distributions, or target more specific Linux > versions (as previously discussed in the context of Nate Coraor's > Starforge work) > I wonder if, in relation to this, it may be best to have two separate tags: one to indicate that the wheel includes external libraries rolled in to it, and one to indicate that it doesn't. That way, a user can make a conscious decision as to whether they want to install any wheels that could include libraries that won't be maintained by the distribution package manager. That way if we end up in a future world where manylinux wheels and distro-specific wheels (that may depend on non-default distro packages) live in PyPI together, there'd be a way to indicate a preference. --nate > > Regards, > Nick. > > -- > Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia > ___ > Distutils-SIG maillist - Distutils-SIG@python.org > https://mail.python.org/mailman/listinfo/distutils-sig > ___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] Fwd: How to get pip to really, really, I mean it -- rebuild this damn package!
On Fri, Jan 29, 2016 at 12:25 PM, Robert Collins wrote:

> Please try pip 7.1? Latest before 8; we're not meant to be caching
> wheels of by-location things from my memory, but it may have
> regressed/changed with the cache changes made during the 8 development
> cycle.
>
> -Rob

will do -- in fact, I was pretty sure this was working earlier. stay tuned.

-CHB

> --
> Robert Collins
> Distinguished Technologist
> HP Converged Cloud

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959  voice
7600 Sand Point Way NE   (206) 526-6329  fax
Seattle, WA 98115        (206) 526-6317  main reception

chris.bar...@noaa.gov

___ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
Re: [Distutils] draft PEP: manylinux1
On Thu, Jan 21, 2016 at 6:31 PM, Nathaniel Smith wrote:

> On Thu, Jan 21, 2016 at 2:22 PM, Nate Coraor wrote:
>> Could this instead use the more powerful json-based syntax proposed
>> by Nick here:
>>
>> https://mail.python.org/pipermail/distutils-sig/2015-July/026617.html
>>
>> I have already implemented support for this in pip and wheel.
>
> Totally happy to change the compatibility.cfg stuff -- the version in
> the PEP was written in about 5 minutes in hopes of sparking discussion
> :-).
>
> Some other questions:
>
> 1) is this going to work for multi-arch (binaries for multiple cpu
> architectures sharing a single /etc)? Multiple interpreters? I guess
> the advantage of Nick's design is that it's scoped by the value of
> distutils.util.get_platform(), so multi-arch installs could have
> different values -- a distro could declare that their x86-64 python
> builds are manylinux1 compatible but their i386 python builds aren't.
> Maybe it would be even better if the files were
> /etc/python/binary-compatibility/linux_x86_64.cfg etc., so that the
> different .cfg files could be shipped in each architecture's package
> without colliding. OTOH I don't know if any of this is very useful in
> practice.

I don't think the proposed syntax would have any trouble with multiarch, other than that it's contained in one file and so would need to live in a single package, or be dynamically generated based on which arches you installed. If that was a problem we could support a /etc/python/binary-compatibility.d type of thing.

> 2) in theory one could imagine overriding this on a per-venv,
> per-user, or per-system level; which of these are useful to support?
> (Per-system seems like the most obvious, since an important use case
> will be distros setting this flag on behalf of their users.)

Per-venv overriding was part of the original proposal and my implementation; per-user could be useful too.

> There is one feature that I do think is important in the PEP 513
> draft, and that Nick's suggestion from July doesn't have: in the PEP
> 513 design, the manylinux1 flag can be true, false, or unspecified,
> and this is independent of other compatibility settings; in Nick's
> proposal there's an exhaustive list of all compatible tags, and
> everything not on that list is assumed to be incompatible. Where this
> matters is when we want to release manylinux2. At this point we'll
> want pip to use some autodetection logic on old distros that were
> released before manylinux2, while respecting the compatibility flag on
> newer distros that do know about manylinux2. This requires a tri-state
> setting with "not specified" as a valid value.

One solution would be `compatible` and `incompatible` keys rather than `install`?

--nate
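To make the shape of this discussion concrete, a per-platform tri-state file along these lines might look as follows. This is purely illustrative -- neither the path (/etc/python/binary-compatibility.cfg was one candidate) nor the key names were settled at this point; the `compatible`/`incompatible` keys follow the suggestion in the message above. A tag absent from both lists would count as "unspecified", preserving the tri-state semantics:

```json
{
    "linux_x86_64": {
        "compatible": ["manylinux1_x86_64"],
        "incompatible": ["manylinux2_x86_64"]
    }
}
```

Scoping the top-level keys by distutils.util.get_platform() is what would let a distro declare its x86-64 builds compatible while leaving its i386 builds unspecified.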
Re: [Distutils] draft PEP: manylinux1
On Fri, Jan 22, 2016 at 6:29 AM, Nick Coghlan wrote:

> On 22 January 2016 at 20:48, M.-A. Lemburg wrote:
>> People who rely on Linux distributions want to continue
>> to do so and get regular updates for system packages from
>> their system vendor. Having wheel files override these
>> system packages by including libs directly in the wheel
>> silently breaks this expectation, potentially opening
>> up the installations for security holes, difficult to
>> track bugs and possible version conflicts with already
>> loaded versions of the shared libs.
>
> For the time being, these users should either pass the "--no-binary"
> option to pip, ask their distro to provide an index of pre-built wheel
> files for that distro (once we have the distro-specific wheel tagging
> PEP sorted out), or else ask their distro to update system Python
> packages in a more timely fashion (or all of the above).

Is there a distro-specific wheel tagging PEP in development somewhere that I missed? If not, I will get the ball rolling on it.

--nate
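For reference, the "--no-binary" escape hatch Nick mentions can also be set persistently in pip's configuration file, so that every install builds from sdists and never uses a wheel. The section name mirrors the pip subcommand, which is standard pip config behavior; the path shown is the per-user location on Linux:

```ini
; ~/.config/pip/pip.conf -- equivalent of always passing "--no-binary :all:"
[install]
no-binary = :all:
```

Users who only distrust specific projects can list package names instead of ":all:".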
Re: [Distutils] How to get pip to really, really, I mean it -- rebuild this damn package!
>> Requirement already satisfied (use --upgrade to upgrade): gsw==3.0.3 from
>> file:///Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3 in
>> /Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3
>
> I think this is saying that pip thinks it has found an
> already-installed version of gsw 3.0.3 in sys.path, and that the
> directory in your sys.path where it's already installed is
>
> /Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3

That is the temp dir conda sets up to unpack downloaded files, and do its work in -- hence the name. I'll look and see what's there. I'm pretty sure conda build starts out with an empty dir, however. And that dir should not be on sys.path.

> I think this means that that directory is (a) in sys.path, and (b)
> contains a .egg-info/.dist-info directory for gsw 3.0.3. Part (a)
> seems weird and broken.

Indeed. And I get the same symptoms with a clean environment that I've set up outside conda build. Though with the same source dir. But with conda build, it's a fresh unpack of the tarball.

> Do you have "." in your PYTHONPATH or anything like that?

God no!

> Don't know why it seems to be building a wheel for it, if it already
> thinks that it's installed... this is also odd.

Yes it is. But it doesn't install it :-(

>     $PYTHON -m pip install --no-cache-dir --upgrade --force-reinstall ./
>
> ? Though I'd think that -I would have the same effect as --force-reinstall...

So did I, and I think I tried --force-reinstall already, but I will again.

> (It doesn't look like the cache dir is your problem here, but you do
> probably want to use --no-cache-dir anyway just as good practice, just
> because you don't want to accidentally package up a stale version of
> the software that got pulled out of your cache instead of the version
> you thought you were packaging in the tree in front of you.)

Exactly. Doesn't seem to make a difference, though.

> Also, I think it's a bug in pip that it caches builds of source trees
> -- PyPI can enforce the rule that each (package name, version number)
> sdist is unique, but for a work-in-progress VCS checkout it's just not
> true that (package name, version number) uniquely identifies a
> snapshot of the whole tree. So in something like 'pip install .', the
> requirement resolution code should treat this as a special requirement
> that it wants *this exact tree*, not just any package that has the
> same (package name, version number) as this tree; and the resulting
> wheel should not be cached.

Absolutely! In fact, I'll bet that approach is the source of the problem here. If not automagically, there should be a flag, at least.

However, what seems to be happening is that pip is looking outside the current Python environment somewhere to see if this package needs to be installed. It may be something that works with virtualenv, but doesn't with conda environments for some reason.

I guess on some level pip simply isn't designed to build and install from local source :-(

In the end, I'm still confused: does pip install give me anything that:

    setup.py install --single-version-externally-managed

doesn't? Other than support for non-setuptools installs, anyway.

CHB

> I don't know if there are any bugs filed in pip on this...
>
> -n
>
> --
> Nathaniel J. Smith -- https://vorpus.org
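Combining the flags discussed above, a belt-and-braces invocation for the "rebuild this exact tree" case might look like this. This is a sketch, not a guaranteed fix -- whether pip 8 properly bypasses its wheel cache for local directories was exactly the open question in this thread, and the "./" path is illustrative:

```shell
# Rebuild and reinstall from the local working tree "./",
# bypassing both the wheel cache and any already-installed
# copy with the same name and version:
python -m pip install --no-cache-dir --force-reinstall --upgrade ./
```

--no-cache-dir guards against a stale cached wheel; --force-reinstall guards against pip deciding the requirement is "already satisfied".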
Re: [Distutils] PEP 513: A Platform Tag for Portable Linux Built Distributions Version
On 29 January 2016 at 13:28, Nathaniel Smith wrote:

> On Thu, Jan 28, 2016 at 2:35 AM, Oscar Benjamin wrote:
>> On 28 January 2016 at 07:46, Nathaniel Smith wrote:
>>> On further thought, I realized that it actually has to be in the
>>> standard library directory / namespace, and can't live in
>>> site-packages: for the correct semantics it needs to be inherited by
>>> virtualenvs; if it isn't then we'll see confusing rare problems. And
>>> virtualenvs inherit the stdlib, but not the site-packages (in
>>> general).
>>
>> Surely virtualenv can be made to import this information from the
>> parent environment. It's already virtualenv's job to set up pip etc.
>> so that you're ready to install things from PyPI.
>
> This doesn't seem like it would be attractive to distributors to me.
> For them, it's just as easy either way to drop a file into
> /usr/lib/python3.5/ versus /usr/lib/python3.5/site-packages. But if
> you use site-packages/, you have to make sure you also have a patched
> virtualenv and pyvenv, and worry about whether your distribution will
> ever get used with old versions of virtualenv, etc., whereas if you
> drop it into the stdlib directory then it just works robustly
> everywhere. I guess we could put some finger-wagging in a PEP telling
> them to use site-packages/, patch virtualenv, and eat their
> vegetables, but I don't know why they'd listen to us :-).

We'd also be back in "patching the standard library" territory again, since we'd also have to modify venv.

I like the simple approach Nathaniel now has in the PEP, as that seems like it should be relatively easy to handle either as modifications directly to a python package, or as an add-on module (a la installing LSB support, only much lighter weight). It doesn't scale to lots of platform tags, but we don't really intend it to -- we'd like to avoid platform tag proliferation if we can, and if we're successful in that, the lack of scalability won't be a problem we need to solve.

Cheers,
Nick.

P.S. To provide a bit more context on "Why is a separate file easier?", the main benefit is that adding a new file simply *cannot* break the sys or platform modules, while patching them can. That's still not a guarantee that any given distro will actually provide an importable "_manylinux" module to indicate compatibility, but given the fallback glibc check, _manylinux is more a hedge against future *in*compatibility than it is anything else.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
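The "_manylinux module plus fallback glibc check" Nick describes ends up looking roughly like the detection logic in the PEP 513 draft. The sketch below is simplified -- the real installer logic also restricts the check to Linux on x86-64/i686 and parses the glibc version string more carefully:

```python
import ctypes


def have_compatible_glibc(major, minimum_minor):
    # Ask the C library for its version via gnu_get_libc_version();
    # non-glibc platforms won't expose this symbol.
    try:
        process_namespace = ctypes.CDLL(None)
        gnu_get_libc_version = process_namespace.gnu_get_libc_version
    except (OSError, AttributeError, TypeError):
        return False
    gnu_get_libc_version.restype = ctypes.c_char_p
    version_str = gnu_get_libc_version().decode("ascii")
    try:
        version = [int(piece) for piece in version_str.split(".")[:2]]
    except ValueError:
        return False  # e.g. a vendor-suffixed version string
    if len(version) < 2:
        return False
    return version[0] == major and version[1] >= minimum_minor


def is_manylinux1_compatible():
    # A distro can drop a "_manylinux" module into the stdlib directory
    # to opt in (True) or out (False) explicitly.
    try:
        import _manylinux
        return bool(_manylinux.manylinux1_compatible)
    except (ImportError, AttributeError):
        # Tri-state "unspecified": fall back to the glibc heuristic.
        pass
    # manylinux1 is keyed to CentOS 5's glibc 2.5 baseline.
    return have_compatible_glibc(2, 5)
```

Because the flag lives in a plain module of its own, adding it cannot break sys or platform, which is the robustness argument made above.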
[Distutils] new devpi releases (2.6.0) with pip-search/offline mode support
This trinity release of devpi, the private packaging and workflow system, is drop-in compatible with earlier releases and comes with these improvements:

- support for "pip search" on the server side, which is also configured when "devpi use" writes to pip configuration files.

- explicit --offline-mode for devpi-server to avoid trying unnecessary and potentially laggy network requests, and to streamline simple pages to only contain releases that are locally cached. Thanks Daniel Panteleit for the PR.

- push from root/pypi to other indexes works now.

Docs are to be found as usual at: http://doc.devpi.net

This release was brought to you mainly by Florian Schulze and me, and a few still unnamed sponsoring companies. Speaking of which, if you need support, training, or adjustments wrt packaging and professional testing, you may contact us through http://merlinux.eu.

You can also expect devpi-server-3.0 soon, a major new release which is to bring improvements like generalized mirroring, storage backends, speed and internal code cleanups.

best,
holger

devpi-server-2.6.0 (2016-1-29)
------------------------------

- fix issue262: the new experimental option --offline-mode will prevent devpi-server from even trying to perform network requests, and it also strips all non-local release files from the simple index. Thanks Daniel Panteleit for the PR.

- fix issue304: mark devpi-server versions older than 2.2.x as incompatible and requiring an import/export cycle.

- fix issue296: try to fetch files from master again when requested, if there were checksum errors during replication.

- if a user can't be found during authentication (with ``setup.py upload`` for example), then the http return code is now 401 instead of 404.

- fix issue293: push from root/pypi to another index is now supported.

- fix issue265: ignore HTTP(S) proxies when checking if the server is already running.

- add ``content_type`` route predicate for use by plugins.

devpi-web-2.6.0 (2016-1-29)
---------------------------

- fix issue305: read documentation html files in binary mode and let BeautifulSoup detect the encoding.

- require devpi-server >= 2.6.0

- support for the ``pip search`` command on indexes

devpi-client-2.4.0 (2016-1-29)
------------------------------

- fix issue291: transfer file modes with vcs exports. Thanks Sergey Vasilyev for the report.

- new option "--index" for "install", "list", "push", "remove", "upload" and "test", which allows using an index other than the current one without running "devpi use" first.

- set ``index`` in the ``[search]`` section of ``pip.cfg`` when writing cfgs, to support ``pip search``.
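The last devpi-client item refers to pip's per-command configuration sections; the entry "devpi use" writes would look something like the fragment below. The exact file path and index URL are illustrative (devpi-server URLs are deployment-specific), and the key name follows pip's convention of mirroring the subcommand's option:

```ini
; pip configuration fragment (e.g. ~/.config/pip/pip.conf),
; as written by "devpi use" -- URL illustrative
[search]
index = https://devpi.example.com/user/dev/
```

With this in place, plain ``pip search`` queries the private devpi index instead of PyPI.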