Well, creating on Windows and deploying on Linux will only be possible if the entire set of dependencies either has no C extensions or is available as manylinux1 wheels... but yeah, that's pretty much what we're doing right now with our reference implementation.
Regarding zipimporter, as far as I understand (correct me if I'm wrong) there's no such solution for wheels (i.e. you can't use zipimporter on a zip of wheels), so does that mean we'll have to package Python files for all dependencies directly in the archive?

Our current implementation simply runs `pip wheel --wheel-dir /my/wheelhouse/path --find-links /my/wheelhouse/path`, packages the wheelhouse, adds metadata, and applies a name to the file. On the destination machine, wagon simply extracts the wheels and runs `pip install --no-index --find-links /extracted/wheelhouse/path`.

On Wed, Nov 23, 2016 at 9:30 PM Brett Cannon <[email protected]> wrote:

> This then ties into Kenneth's pipfile idea he's working on, as it then
> makes sense to make a wagon/wheelhouse for a lock file. To also tie into
> the container aspect, if you dev on Windows but deploy to Linux, this can
> allow for gathering your dependencies locally for Linux on your Windows box
> and then deploying the set as a unit to your server (something Steve Dower
> and I have thought about and why we support a lock file concept).
>
> And if we use zip files with no nesting then, as long as it's only Python
> code, you could use zipimporter on the bundle directly.
>
> On Tue, Nov 22, 2016, 22:07 Nick Coghlan <[email protected]> wrote:
>
> [Some folks are going to get this twice - unfortunately, Google's
> mailing list mirrors are fundamentally broken, so replies to them
> don't actually go to the original mailing list properly]
>
> (Note for context: I stumbled across Wagon recently, and commented
> that we don't currently have a good target-environment-independent way
> of bundling up a set of wheels as a single transferable unit)
>
> On 23 November 2016 at 03:44, Nir Cohen <[email protected]> wrote:
> > We came up with a tool (http://github.com/cloudify-cosmo/wagon) to do
> > just that and that's what we currently use to create and install our
> > plugins.
> > While wheel solves the problem of generating wheels, there is no single,
> > standard method for taking an entire set of dependencies packaged in a
> > single location and installing them in a different location.
>
> Where I see this being potentially valuable is in terms of having a
> common "multiwheel" transfer format that can be used for cases where
> the goal is essentially wheelhouse caching and transfer. The two main
> cases I'm aware of where this comes up:
>
> - offline installation support (i.e. the Cloudify plugins use case,
> where the installation environment doesn't have networked access to an
> index server)
> - saving and restoring the wheelhouse cache (e.g. this comes up in
> container build pipelines)
>
> The latter problem arises from an issue with the way some container
> build environments (most notably Docker's) currently work: they always
> run in a clean environment, which means they can't see the host's
> wheel cache. One of the solutions to this is to let container builds
> specify a "cache state" which is archived by the build management
> service at the end of the build process, and then restored when
> starting the next incremental image build.
>
> This kind of cache transfer is already *possible* today, but having a
> standardised way of doing it makes it easier for people to write
> general purpose tooling around the concept, without requiring that the
> tool used to create the archive be the same tool used to unpack it at
> install time.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | [email protected] | Brisbane, Australia
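For reference, the wagon-style round trip described above can be sketched roughly like this (the package name `my-plugin` and the archive name are illustrative placeholders, not wagon's actual defaults):

```shell
#!/bin/sh
# Build side: collect a package and all of its dependencies as wheels
# into a single directory (the "wheelhouse"), then archive it.
# NOTE: package name, archive name, and paths are illustrative.
WHEELHOUSE=/my/wheelhouse/path
mkdir -p "$WHEELHOUSE"
pip wheel --wheel-dir "$WHEELHOUSE" --find-links "$WHEELHOUSE" my-plugin
tar -czf my-plugin.wgn.tar.gz -C "$WHEELHOUSE" .

# Install side: extract the archive and install fully offline, resolving
# every dependency from the extracted wheelhouse instead of an index.
mkdir -p /extracted/wheelhouse/path
tar -xzf my-plugin.wgn.tar.gz -C /extracted/wheelhouse/path
pip install --no-index --find-links /extracted/wheelhouse/path my-plugin
```

The key property is that step two produces a single transferable file, and the install side never contacts an index server.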
_______________________________________________
Distutils-SIG maillist - [email protected]
https://mail.python.org/mailman/listinfo/distutils-sig
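As a rough illustration of the cache save/restore idea Nick describes, the archive-and-restore step could look like the sketch below (`~/.cache/pip` is pip's default cache location on Linux; the archive name is an illustrative assumption):

```shell
#!/bin/sh
# End of a container build: archive pip's cache directory so the build
# service can persist it between otherwise-clean incremental builds.
# NOTE: the archive name "pip-cache.tar.gz" is illustrative.
tar -czf pip-cache.tar.gz -C "$HOME/.cache" pip

# Start of the next incremental build: restore the cache before running
# pip, so previously downloaded and built wheels are reused.
mkdir -p "$HOME/.cache"
tar -xzf pip-cache.tar.gz -C "$HOME/.cache"
```

A standardised multiwheel format would let the build service do this without caring which tool created the archive.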
