On Thu, Nov 26, 2015 at 3:05 AM Simon Davy <bloodearn...@gmail.com> wrote:

> On Thursday, 26 November 2015, Marco Ceppi <marco.ce...@canonical.com>
> wrote:
> > On Wed, Nov 25, 2015 at 4:08 PM Simon Davy <bloodearn...@gmail.com>
> wrote:
> >>
> >> On 25 November 2015 at 16:02, Marco Ceppi <marco.ce...@canonical.com>
> wrote:
> >> > ## Wheel House for layer dependencies
> >> >
> >> > Going forward we recommend all dependencies for layers and charms be
> >> > packaged in a wheelhouse.txt file. This performs the installation of
> >> > pypi packages on the unit instead of on the local machine first,
> >> > meaning Python libraries that require architecture-specific builds
> >> > will build on the unit's architecture.
> >>
> >> If I'm understanding the above correctly, this approach is a blocker
> >> for us.
> >>
> >> We would not want to install direct from pypi on a production service
> >>
> >>  1) pypi packages are not signed (or when they are, pip doesn't verify
> >> the signature)
> >>  2) pypi is an external dependency and thus unreliable (although not
> >> as bad these days)
> >>  3) old versions can disappear from pypi at an author's whim.
> >>  4) installing C packages involves installing a C toolchain on your
> >> prod machine
> >>
> >> Additionally, our policy (Canonical's, that is), does not allow access
> >> to the internet on production machines, for very good reasons. This is
> >> the default policy in many (probably most) production environments.
> >>
> >> Any layer or charm that consumes a layer that uses this new approach
> >> for dependencies would thus be unusable to us :(
> >>
> >> It also harms repeatability, and I would not want to use it even if
> >> our access policy allowed access to pypi.
> >>
> >> For python charm dependencies, we use system python packages as much
> >> as possible, or if we need any wheels, we ship them in the charm and
> >> pip install them directly from there. No external network, completely
> >> repeatable.
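Concretely, that no-network install pattern looks roughly like the following sketch (the paths and file names here are assumptions for illustration, not taken from any particular charm):

```shell
# Sketch of installing from wheels shipped inside the charm (paths assumed).
# --no-index forbids any contact with PyPI; --find-links points pip at the
# wheel directory bundled in the charm, so the install is fully repeatable.
pip install --no-index --find-links "$CHARM_DIR/wheels" \
    -r "$CHARM_DIR/requirements.txt"
```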
> >
> > So, allow me to clarify. If you review the pastebin outputs from the
> > original announcement email, what this shift changes is that previously
> > `charm build` would create and embed installed dependencies into the
> > charm under lib/, much like charm-helper-sync did, for any arbitrary
> > PyPI dependency. The issue there is that for PyYAML it will build a
> > yaml.so file based on the architecture of your machine and not the
> > cloud's.
>
> Right. This was the bit which confused me, I think.
>
> Can we not just use python-yaml, as it's installed by default on cloud
> images anyway?
>
> We use virtualenv with --system-site-packages, and use system packages
> for python libs with C extensions where possible, leaving wheels for
> things which aren't packaged or which we want newer versions of.
>
>
Again, this is for hook dependencies, not dependencies of the workload. The
charm could apt install python-yaml, but using --system-site-packages when
building is something I'd discourage, since not everyone has the same apt
packages installed. Unless a user is building on a fresh cloud image,
there's a chance they won't catch some packages that don't get declared.
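For reference, the pattern under discussion is roughly the following (paths are assumptions; the stdlib venv module is shown here, and virtualenv accepts the same flag):

```shell
# Sketch of the --system-site-packages approach (paths assumed).
# The venv can see apt-installed libs such as python-yaml, and locally
# shipped wheels then fill in anything unpackaged or newer.
python3 -m venv --system-site-packages "$CHARM_DIR/venv"
"$CHARM_DIR/venv/bin/pip" install --no-index \
    --find-links "$CHARM_DIR/wheels" -r "$CHARM_DIR/requirements.txt"
```

The trade-off noted above is that what "system packages" means depends on what happens to be apt-installed on the build machine, which is why building anywhere other than a fresh cloud image can hide undeclared dependencies.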

We'd be interested in making this a better story. Wheelhousing dependencies
not yet available in the archive, instead of embedding them in the charm,
was a first step but certainly not the last. I'm not sure how this would
work when we generate a wheelhouse, since wheelhouse generation grabs the
dependencies of the install; that's why PyYAML shows up in the generated
charm artifact. We're not explicitly saying "include PyYAML", we're simply
saying we need charmhelpers and charms.reactive from PyPI as a minimum
dependency for all charm hooks built with `charm build` to work.
Suggestions around this are welcome.
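For illustration, a minimal wheelhouse.txt expressing just that baseline might look like this (the version pins are hypothetical, not recommendations):

```
# Minimum hook dependencies for a reactive charm (pins illustrative only)
charmhelpers>=0.6.0
charms.reactive>=0.4.0
```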

Thanks,
Marco Ceppi
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju
