Re: [ANN] charm-tools 1.9.3

2015-12-03 Thread Cory Johns
Stuart,

Yes, the `--no-binary` option is supposed to prevent that, but it doesn't
seem to be honored.  https://github.com/juju/charm-tools/issues/58 has
been opened to track it, and I'm looking into creating a workaround using
`--no-use-wheel` until I can get clarification or a fix from upstream.
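For context, the two flags behave roughly like this; the invocations are
illustrative of the workaround being discussed, not charm-tools' actual
command line:

```shell
# The workaround under discussion: force pip to build PyYAML from source
# rather than pick up a binary wheel. Modern pip (>= 7.0) spells this
# --no-binary; --no-use-wheel was the pre-7.0 flag it replaced.
# These exact invocations are illustrative, not charm-tools' own:
#
#   pip install --no-binary :all: PyYAML     # pip >= 7.0
#   pip install --no-use-wheel PyYAML        # older pip
#
# Check which spelling the local pip understands:
python3 -m pip install --help | grep -c -- '--no-binary'
```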

On Thu, Dec 3, 2015 at 5:16 AM, Stuart Bishop 
wrote:

> On 1 December 2015 at 05:55, Cory Johns  wrote:
> > PyYAML includes a pure-Python version which would be used if building
> fails.
>
> I'm getting confused now.
>
> When I do 'charm build' with 1.9.3, my generated charm ends up with an
> embedded amd64 binary wheel for PyYAML which includes a .so file. What
> happens when I deploy this charm on power8 or arm? How does the pure
> python version of PyYAML appear when the only thing available is a
> amd64 binary wheel?
>
> --
> Stuart Bishop 
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: [ANN] charm-tools 1.9.3

2015-11-27 Thread Marco Ceppi
On Thu, Nov 26, 2015 at 3:05 AM Simon Davy  wrote:

> On Thursday, 26 November 2015, Marco Ceppi 
> wrote:
> > On Wed, Nov 25, 2015 at 4:08 PM Simon Davy 
> wrote:
> >>
> >> On 25 November 2015 at 16:02, Marco Ceppi 
> wrote:
> >> > ## Wheel House for layer dependencies
> >> >
> >> > Going forward we recommend all dependencies for layers and charms be
> >> > packaged in a wheelhouse.txt file. This performs the installation of
> >> > pypi packages on the unit instead of first on the local machine,
> >> > meaning Python libraries that require architecture-specific builds
> >> > will do so on the unit's architecture.
> >>
> >> If I'm understanding the above correctly, this approach is a blocker
> for us.
> >>
> >> We would not want to install direct from pypi on a production service
> >>
> >>  1) pypi packages are not signed (or when they are, pip doesn't verify
> >> the signature)
> >>  2) pypi is an external dependency and thus unreliable (although not
> >> as bad these days)
> >>  3) old versions can disappear from pypi at an author's whim.
> >>  4) installing c packages involves installing a c toolchain on your
> >> prod machine
> >>
> >> Additionally, our policy (Canonical's, that is), does not allow access
> >> to the internet on production machines, for very good reasons. This is
> >> the default policy in many (probably most) production environments.
> >>
> >> Any layer or charm that consumes a layer that uses this new approach
> >> for dependencies would thus be unusable to us :(
> >>
> >> It also harms repeatability, and I would not want to use it even if
> >> our access policy allowed access to pypi.
> >>
> >> For python charm dependencies, we use system python packages as much
> >> as possible, or if we need any wheels, we ship that wheel in the
> >> charm, and pip install it directly from there. No external
> >> network, completely repeatable.
> >
> > So, allow me to clarify. If you review the pastebin outputs from the
> > original announcement email: previously `charm build` would install
> > dependencies and embed them into the charm under lib/, much like
> > charm-helper-sync did, but for any arbitrary PyPI dependency. The issue
> > there is that for PyYAML it builds a yaml.so file based on the
> > architecture of your machine, not the cloud's.
>
> Right. This was the bit which confused me, I think.
>
> Can we not just use python-yaml, as it's installed by default on cloud
> images anyway?
>
> We use virtualenv with --system-site-packages, and use system packages for
> python libs with c packages where possible, leaving wheels for things which
> aren't packaged or we want newer versions of.
>
>
Again, this is for hook dependencies, not exactly for dependencies of the
workload. The charm could apt install python-yaml, but using
--system-site-packages when building is something I'd discourage, since
not everyone has the same apt packages installed. Unless the user is
building on a fresh cloud image, there's a chance they won't catch some
packages that don't get declared.

We'd be interested in making this a better story. Wheelhousing the
dependencies not yet available in the archive, instead of embedding them
in the charm, was a first step but certainly not the last. I'm not sure
how this would work when we generate a wheelhouse, since the wheelhouse
generation grabs the dependencies of the install. That's why PyYAML is
showing up in the generated charm artifact: we're not explicitly saying
"include PyYAML", we're simply saying we need charmhelpers and
charms.reactive from PyPI as a minimum dependency for all charm hooks
built with charm build to work. Suggestions around this are welcome.

Thanks,
Marco Ceppi


Re: [ANN] charm-tools 1.9.3

2015-11-27 Thread Simon Davy
On Friday, 27 November 2015, Marco Ceppi  wrote:
> On Thu, Nov 26, 2015 at 3:05 AM Simon Davy  wrote:
>>
>> On Thursday, 26 November 2015, Marco Ceppi 
wrote:
>> > On Wed, Nov 25, 2015 at 4:08 PM Simon Davy 
wrote:
>> >>
>> >> On 25 November 2015 at 16:02, Marco Ceppi 
wrote:
>> >> > ## Wheel House for layer dependencies
>> >> >
>> >> > Going forward we recommend all dependencies for layers and charms be
>> >> > packaged in a wheelhouse.txt file. This performs the installation of
>> >> > pypi packages on the unit instead of first on the local machine,
>> >> > meaning Python libraries that require architecture-specific builds
>> >> > will do so on the unit's architecture.
>> >>
>> >> If I'm understanding the above correctly, this approach is a blocker
for us.
>> >>
>> >> We would not want to install direct from pypi on a production service
>> >>
>> >>  1) pypi packages are not signed (or when they are, pip doesn't verify
>> >> the signature)
>> >>  2) pypi is an external dependency and thus unreliable (although not
>> >> as bad these days)
>> >>  3) old versions can disappear from pypi at an author's whim.
>> >>  4) installing c packages involves installing a c toolchain on your
>> >> prod machine
>> >>
>> >> Additionally, our policy (Canonical's, that is), does not allow access
>> >> to the internet on production machines, for very good reasons. This is
>> >> the default policy in many (probably most) production environments.
>> >>
>> >> Any layer or charm that consumes a layer that uses this new approach
>> >> for dependencies would thus be unusable to us :(
>> >>
>> >> It also harms repeatability, and I would not want to use it even if
>> >> our access policy allowed access to pypi.
>> >>
>> >> For python charm dependencies, we use system python packages as much
>> >> as possible, or if we need any wheels, we ship that wheel in the
>> >> charm, and pip install it directly from there. No external
>> >> network, completely repeatable.
>> >
>> > So, allow me to clarify. If you review the pastebin outputs from the
>> > original announcement email: previously `charm build` would install
>> > dependencies and embed them into the charm under lib/, much like
>> > charm-helper-sync did, but for any arbitrary PyPI dependency. The issue
>> > there is that for PyYAML it builds a yaml.so file based on the
>> > architecture of your machine, not the cloud's.
>>
>> Right. This was the bit which confused me, I think.
>>
>> Can we not just use python-yaml, as it's installed by default on cloud
>> images anyway?
>>
>> We use virtualenv with --system-site-packages, and use system packages
for python libs with c packages where possible, leaving wheels for things
which aren't packaged or we want newer versions of.
>>
>
> Again, this is for hook dependencies, not exactly for dependencies of
> the workload.

Right. I understand that :)

I detailed how we solve this for our python app payloads as a possible
solution for how to solve it for python charm deps also, but of course
those deps would be completely separate things, not even installed in the
same virtualenv.


> The charm could apt install python-yaml, but using --system-site-packages
> when building is something I'd discourage since not everyone has the same
> apt packages installed.

Except that they do specifically have python-yaml installed, I believe.
It's installed by default in Ubuntu cloud images, due to cloud-init I
think.

But yes, other system python packages could be exposed. I wish once again
there was a way to include a specific list of system packages in a
virtualenv rather than all of them.

And it should be easy enough to add a way to declare which system packages
are required by a layer?
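Purely as a hypothetical sketch (the `system-packages` key is invented
here, not an existing charm-tools schema), such a declaration could look
like:

```
# layer.yaml -- hypothetical sketch; 'system-packages' is an invented key,
# not current charm-tools syntax
includes: ['layer:basic']
options:
  basic:
    system-packages: [python-yaml]
```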

> Unless that user is building on a fresh cloud-image there's a chance
> they won't catch some packages that don't get declared.
>
> We'd be interested in making this a better story. The wheelhousing for
> dependencies not yet available in the archive instead of embedding them
> in the charm was a first step but certainly not the last. I'm not sure
> how this would work when we generate a wheelhouse since the wheelhouse
> generation grabs dependencies of the install. That's why PyYAML is
> showing up in the generated charm artifact. We're not explicitly saying
> "include PyYAML", we're simply saying we need charmhelpers and
> charms.reactive from PyPI as a minimum dependency for all charm hooks
> built with charm build to work. Suggestions around this are welcome.

Right, the wheelhouse seems a good approach for that. I'm just wondering
if we can do a specific solution for python-yaml that avoids the binary
wheel (which AIUI is brittle even on the same arch, due to glibc/cc
versions).

I would be surprised if there were many charm python deps that had C
extensions?

Also, iirc there is a pure python yaml lib? Given speed is not an issue
here (as the yaml is 

[ANN] charm-tools 1.9.3

2015-11-26 Thread Simon Davy
On Thursday, 26 November 2015, Marco Ceppi 
wrote:
> On Wed, Nov 25, 2015 at 4:08 PM Simon Davy  wrote:
>>
>> On 25 November 2015 at 16:02, Marco Ceppi 
wrote:
>> > ## Wheel House for layer dependencies
>> >
>> > Going forward we recommend all dependencies for layers and charms be
>> > packaged in a wheelhouse.txt file. This performs the installation of
>> > pypi packages on the unit instead of first on the local machine,
>> > meaning Python libraries that require architecture-specific builds
>> > will do so on the unit's architecture.
>>
>> If I'm understanding the above correctly, this approach is a blocker for
us.
>>
>> We would not want to install direct from pypi on a production service
>>
>>  1) pypi packages are not signed (or when they are, pip doesn't verify
>> the signature)
>>  2) pypi is an external dependency and thus unreliable (although not
>> as bad these days)
>>  3) old versions can disappear from pypi at an author's whim.
>>  4) installing c packages involves installing a c toolchain on your
>> prod machine
>>
>> Additionally, our policy (Canonical's, that is), does not allow access
>> to the internet on production machines, for very good reasons. This is
>> the default policy in many (probably most) production environments.
>>
>> Any layer or charm that consumes a layer that uses this new approach
>> for dependencies would thus be unusable to us :(
>>
>> It also harms repeatability, and I would not want to use it even if
>> our access policy allowed access to pypi.
>>
>> For python charm dependencies, we use system python packages as much
>> as possible, or if we need any wheels, we ship that wheel in the
>> charm, and pip install it directly from there. No external
>> network, completely repeatable.
>
> So, allow me to clarify. If you review the pastebin outputs from the
> original announcement email: previously `charm build` would install
> dependencies and embed them into the charm under lib/, much like
> charm-helper-sync did, but for any arbitrary PyPI dependency. The issue
> there is that for PyYAML it builds a yaml.so file based on the
> architecture of your machine, not the cloud's.

Right. This was the bit which confused me, I think.

Can we not just use python-yaml, as it's installed by default on cloud
images anyway?

We use virtualenv with --system-site-packages, and use system packages for
python libs with c packages where possible, leaving wheels for things which
aren't packaged or we want newer versions of.
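The approach above can be sketched like this (paths are illustrative, and
the stdlib `python3 -m venv` stands in for the `virtualenv` tool
mentioned):

```shell
# A venv that can also see system site-packages (e.g. the archive's
# python-yaml), with any extra wheels installed offline on top.
python3 -m venv --system-site-packages /tmp/demo-venv

# Offline install from wheels shipped alongside the app. The requirements
# file is empty in this sketch; a real deployment would pin its deps.
mkdir -p /tmp/demo-wheels
touch /tmp/demo-reqs.txt
/tmp/demo-venv/bin/pip install --no-index \
    --find-links /tmp/demo-wheels -r /tmp/demo-reqs.txt
```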

> This new method builds source wheels and embeds the wheels in the charm.
> There's a bootstrap process on deploy that will unpack and install the
> dependencies on the system when deployed. The deps are still bundled in
> the charm; the output of the charm is just much more sane and easier to
> read.
>
>>
> >> Another option is to require/configure a local pypi to pull the
> >> packages from, but again, an external dependency and SPOF.
>>
>> I much prefer what the current tool seems to do, bundle deps as wheels
>> into a wheels/ dir as part of the charm build process.  Then that
>> charm is self-contained, and requires no external access, and is more
>> reliable/repeatable.
>>
>> > This also provides the added bonus of making `charm layers` a
>> > much cleaner experience.
>> >
>> > Here's an example of side-by-side output of a charm build of the basic
layer
>> > before and after converting to Wheelhouse.
>> >
>> > Previous: http://paste.ubuntu.com/13502779/ (53 directories, 402 files)
>> > Wheelhouse:  http://paste.ubuntu.com/13502779// (3 directories, 21
files)
>>
>> These are the same link?
>>
>> But looking at the link, I much prefer that version - everything is
>> bundled with the charm, as I suggest above.
>
> Sorry, meant to send two links.
> The first: http://paste.ubuntu.com/13502779/
> The Second: http://paste.ubuntu.com/13511384/
> Now which one would you prefer :)

Great :) Problem solved, sorry for the noise.



-- 
Simon


Fwd: [ANN] charm-tools 1.9.3

2015-11-25 Thread Merlijn Sebrechts
Awesome!


Now I have an explanation for the weird behaviour I got today. `charm
compose` was working fine until suddenly I started getting errors because
wheel was not installed in the venv. Running `sudo apt-get update; sudo
apt-get upgrade` fixed my problems, I guess this was right after fix
8c09431... :)

2015-11-25 17:02 GMT+01:00 Marco Ceppi :

> Hello everyone!
>
> I'm happy to announce another charm-tools release today, this is the 1.9.3
> which succeeds 1.8.0 as the latest release of charm-tools. If you've
> managed to install 1.9.0, 1.9.1, or 1.9.2 in the past several days please
> be sure to upgrade. As always, you can verify the version you are running
> by executing: `charm version`
>
> # Changes
>
> 5cadfda [Marco Ceppi] version bump
> 8c09431 [Marco Ceppi] Make sure wheel is available in build venv fixes #51
> 11557b3 [Marco Ceppi] version bump
> 2a70207 [Marco Ceppi] install_requires needed because Homebrew can't
> handle a pip install
> f820bfd [Marco Ceppi] version bump
> 3ae864a [Marco Ceppi] virtualenv is a dependency
> 975702a [Marco Ceppi] version bump
> 55193cd [Cory Johns] Switch WheelhouseTactic to use a venv and include
> (newer) pip in wheelhouse
> 67639fa [Cory Johns] Added support for building a wheelhouse
> da058a3 [Tim Van Steenburgh] Implement charm-proof for storage
> a098a99 [Benjamin Saller] special case 'help' for issue #35
> b2ec3b1 [Benjamin Saller] metrics no longer defaults to off
>
>
> ## Proof now supports storage
>
> This was a nice update with the new storage feature in 1.25 - we're
> keeping a close eye on 1.26 and will make sure metadata changes there are
> supported.
>
> ## Wheel House for layer dependencies
>
> Going forward we recommend all dependencies for layers and charms be
> packaged in a wheelhouse.txt file. This performs the installation of pypi
> packages on the unit instead of first on the local machine, meaning
> Python libraries that require architecture-specific builds will do so on
> the unit's architecture. This also provides the added bonus of making
> `charm layers` a much cleaner experience.
>
> Here's an example of side-by-side output of a charm build of the basic
> layer before and after converting to Wheelhouse.
>
> Previous: http://paste.ubuntu.com/13502779/ (53 directories, 402 files)
> Wheelhouse: http://paste.ubuntu.com/13502787/ (3 directories, 21 files)
>
> This is the superior way to package dependencies in charms, and we look
> forward to current layers migrating to a wheelhouse tactic. That said,
> charms which currently use a .pypi file in the lib directory will
> continue to work as expected; that remains a supported method of
> including dependencies.
>
> # Install
> Charm Tools is available to users either via the juju/stable PPA,
> Homebrew, or pip
>
> ## PPA
>
> sudo add-apt-repository ppa:juju/stable
> sudo apt-get update
> sudo apt-get install charm-tools
>
> ## Homebrew
>
> brew install charm-tools
>
> * This will be available once
> https://github.com/Homebrew/homebrew/pull/46352 has been merged
>
> ## PIP
>
> pip install -U charm-tools
>
> Thanks,
> Marco Ceppi
>


[ANN] charm-tools 1.9.3

2015-11-25 Thread Marco Ceppi
Hello everyone!

I'm happy to announce another charm-tools release today: 1.9.3, which
succeeds 1.8.0 as the latest release of charm-tools. If you've managed to
install 1.9.0, 1.9.1, or 1.9.2 in the past several days, please be sure
to upgrade. As always, you can verify the version you are running by
executing: `charm version`

# Changes

5cadfda [Marco Ceppi] version bump
8c09431 [Marco Ceppi] Make sure wheel is available in build venv fixes #51
11557b3 [Marco Ceppi] version bump
2a70207 [Marco Ceppi] install_requires needed because Homebrew can't handle
a pip install
f820bfd [Marco Ceppi] version bump
3ae864a [Marco Ceppi] virtualenv is a dependency
975702a [Marco Ceppi] version bump
55193cd [Cory Johns] Switch WheelhouseTactic to use a venv and include
(newer) pip in wheelhouse
67639fa [Cory Johns] Added support for building a wheelhouse
da058a3 [Tim Van Steenburgh] Implement charm-proof for storage
a098a99 [Benjamin Saller] special case 'help' for issue #35
b2ec3b1 [Benjamin Saller] metrics no longer defaults to off


## Proof now supports storage

This was a nice update with the new storage feature in 1.25 - we're keeping
a close eye on 1.26 and will make sure metadata changes there are supported.

## Wheel House for layer dependencies

Going forward we recommend all dependencies for layers and charms be
packaged in a wheelhouse.txt file. This performs the installation of pypi
packages on the unit instead of first on the local machine, meaning Python
libraries that require architecture-specific builds will do so on the
unit's architecture. This also provides the added bonus of making `charm
layers` a much cleaner experience.
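For reference, a wheelhouse.txt uses plain pip requirements syntax; a
minimal sketch for a layer might look like this (the package pins are
illustrative):

```
# wheelhouse.txt -- plain pip requirements syntax; pins are illustrative
charmhelpers
charms.reactive
PyYAML>=3.11
```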

Here's an example of side-by-side output of a charm build of the basic
layer before and after converting to Wheelhouse.

Previous: http://paste.ubuntu.com/13502779/ (53 directories, 402 files)
Wheelhouse: http://paste.ubuntu.com/13502787/ (3 directories, 21 files)

This is the superior way to package dependencies in charms, and we look
forward to current layers migrating to a wheelhouse tactic. That said,
charms which currently use a .pypi file in the lib directory will continue
to work as expected; that remains a supported method of including
dependencies.

# Install
Charm Tools is available either via the juju/stable PPA, Homebrew, or
pip.

## PPA

sudo add-apt-repository ppa:juju/stable
sudo apt-get update
sudo apt-get install charm-tools

## Homebrew

brew install charm-tools

* This will be available once
https://github.com/Homebrew/homebrew/pull/46352 has been merged

## PIP

pip install -U charm-tools

Thanks,
Marco Ceppi


Re: [ANN] charm-tools 1.9.3

2015-11-25 Thread Marco Ceppi
On Wed, Nov 25, 2015 at 4:08 PM Simon Davy  wrote:

> On 25 November 2015 at 16:02, Marco Ceppi 
> wrote:
> > ## Wheel House for layer dependencies
> >
> > Going forward we recommend all dependencies for layers and charms be
> > packaged in a wheelhouse.txt file. This performs the installation of
> > pypi packages on the unit instead of first on the local machine,
> > meaning Python libraries that require architecture-specific builds
> > will do so on the unit's architecture.
>
> If I'm understanding the above correctly, this approach is a blocker for
> us.
>
> We would not want to install direct from pypi on a production service
>
>  1) pypi packages are not signed (or when they are, pip doesn't verify
> the signature)
>  2) pypi is an external dependency and thus unreliable (although not
> as bad these days)
>  3) old versions can disappear from pypi at an author's whim.
>  4) installing c packages involves installing a c toolchain on your prod
> machine
>
> Additionally, our policy (Canonical's, that is), does not allow access
> to the internet on production machines, for very good reasons. This is
> the default policy in many (probably most) production environments.
>
> Any layer or charm that consumes a layer that uses this new approach
> for dependencies would thus be unusable to us :(
>
> It also harms repeatability, and I would not want to use it even if
> our access policy allowed access to pypi.
>
> For python charm dependencies, we use system python packages as much
> as possible, or if we need any wheels, we ship that wheel in the
> charm, and pip install it directly from there. No external
> network, completely repeatable.
>

So, allow me to clarify. If you review the pastebin outputs from the
original announcement email: previously `charm build` would install
dependencies and embed them into the charm under lib/, much like
charm-helper-sync did, but for any arbitrary PyPI dependency. The issue
there is that for PyYAML it builds a yaml.so file based on the
architecture of your machine and not the cloud's.

This new method builds source wheels and embeds the wheels in the charm.
There's a bootstrap process on deploy that will unpack and install the
dependencies on the system when deployed. The deps are still bundled in
the charm; the output of the charm is just much more sane and easier to
read.
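In other words, the deploy-time bootstrap amounts to an offline pip
install from the wheels bundled in the charm. A sketch with hypothetical
paths (the empty requirements file just makes the sketch runnable):

```shell
# At build time, source wheels are collected with something like:
#   pip wheel --no-binary :all: -r wheelhouse.txt -w wheelhouse/
# On the unit, the bundled wheels are installed with no network access:
# --no-index forbids contacting pypi; --find-links points at the bundle.
mkdir -p /tmp/demo-wheelhouse
touch /tmp/demo-requirements.txt   # empty here; real charms list their deps
python3 -m pip install --no-index --find-links /tmp/demo-wheelhouse \
    -r /tmp/demo-requirements.txt
```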


>
> Another option is to require/configure a local pypi to pull the
> packages from, but  again, an external dependency and spof.
>
> I much prefer what the current tool seems to do, bundle deps as wheels
> into a wheels/ dir as part of the charm build process.  Then that
> charm is self-contained, and requires no external access, and is more
> reliable/repeatable.
>
> > This also provides the added bonus of making `charm layers` a
> > much cleaner experience.
> >
> > Here's an example of side-by-side output of a charm build of the basic
> layer
> > before and after converting to Wheelhouse.
> >
> > Previous: http://paste.ubuntu.com/13502779/ (53 directories, 402 files)
> > Wheelhouse:  http://paste.ubuntu.com/13502779// (3 directories, 21
> files)
>
> These are the same link?
>
> But looking at the link, I much prefer that version - everything is
> bundled with the charm, as I suggest above.
>

Sorry, meant to send two links.

The first: http://paste.ubuntu.com/13502779/
The Second: http://paste.ubuntu.com/13511384/

Now which one would you prefer :)

Marco


Re: [ANN] charm-tools 1.9.3

2015-11-25 Thread Marco Ceppi
On Wed, Nov 25, 2015 at 7:44 PM Rick Harding 
wrote:

> On Wed, Nov 25, 2015 at 4:08 PM Simon Davy
>
>> I don't know where we are at with the resources work, but maybe that
>> could have a part to play here?
>>
>
> This is exactly what I wanted to bring up for discussion. This work is
> about to start at the end of this month/start of next. It sounds like
> there's something here about building a wheel, using it as a resource,
> and letting the charm pull the correct wheel based on the unit
> architecture.
>
> The harder part is how to get those built and uploaded to the store as
> resources. Marco, can you work with Simon on the start of a spec, and
> let's start to look through some ideas on how to handle this.
>
> I do want to say the wheel based work is great. The UI Engineering team
> has done it and it's sped things up immensely. However, in their work
> there's only one architecture targeted so the wheels are all pre-built and
> only for one platform. To have truly reusable charms (big data on power and
> amd64 and ...) we need to think through this much closer to a PPA-like
> approach than building on the units, especially as we're working so hard to
> get deployment times down when charms get placed on units.
>

I'd be happy to help work on a spec and follow up with the team working
on resources. I do want to mention, though, that the wheels/deps embedded
in the charm are resources needed for hook execution, not workload
dependencies/resources. I'm not sure if that makes a difference or if the
distinction is worth discussing further.

Marco


Re: [ANN] charm-tools 1.9.3

2015-11-25 Thread Adam Stokes
What if we only needed pure python modules? It seems like the toolchain
will always be installed because of some of the dependencies of
charmhelpers? Will these additional deps become optional once charmhelpers
is refactored?




On Wed, Nov 25, 2015, 11:18 PM Marco Ceppi 
wrote:

On Wed, Nov 25, 2015 at 4:08 PM Simon Davy  wrote:

On 25 November 2015 at 16:02, Marco Ceppi  wrote:
> ## Wheel House for layer dependencies
>
> Going forward we recommend all dependencies for layers and charms be
> packaged in a wheelhouse.txt file. This performs the installation of
> pypi packages on the unit instead of first on the local machine, meaning
> Python libraries that require architecture-specific builds will do so on
> the unit's architecture.

If I'm understanding the above correctly, this approach is a blocker for us.

We would not want to install direct from pypi on a production service

 1) pypi packages are not signed (or when they are, pip doesn't verify
the signature)
 2) pypi is an external dependency and thus unreliable (although not
as bad these days)
 3) old versions can disappear from pypi at an author's whim.
 4) installing c packages involves installing a c toolchain on your prod
machine

Additionally, our policy (Canonical's, that is), does not allow access
to the internet on production machines, for very good reasons. This is
the default policy in many (probably most) production environments.

Any layer or charm that consumes a layer that uses this new approach
for dependencies would thus be unusable to us :(

It also harms repeatability, and I would not want to use it even if
our access policy allowed access to pypi.

For python charm dependencies, we use system python packages as much
as possible, or if we need any wheels, we ship that wheel in the
charm, and pip install it directly from there. No external
network, completely repeatable.

So, allow me to clarify. If you review the pastebin outputs from the
original announcement email: previously `charm build` would install
dependencies and embed them into the charm under lib/, much like
charm-helper-sync did, but for any arbitrary PyPI dependency. The issue
there is that for PyYAML it builds a yaml.so file based on the
architecture of your machine and not the cloud's.