charm-helpers 0.9.1

2016-09-08 Thread Marco Ceppi
Hello everyone!

Ahead of the Charmer Summit we've released 0.9.1 (and 0.9.0) of
charm-helpers. We typically don't announce these releases, but I wanted to
highlight a few of the new features as well as flag a caveat for those who
may have built a charm during a 40-minute window.

# Highlights

- charmhelpers.core.hookenv adds support for application_version_set
  This is a new hook tool, added in an earlier Juju 2.0 beta, that lets
you set the version of the running application. It takes a single string
parameter representing the app version.
- charmhelpers.core.host and charmhelpers.core.kernel are now supported on
Ubuntu and CentOS charms.
  Big shout out to Denis Buliga from Cloudbase for landing this support.
- Lots of smaller fixes to OpenStack and Storage plugins
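For the curious, the first highlight can be sketched roughly like this. The
helper shells out to the new application-version-set hook tool; the
injectable _run parameter is not part of charm-helpers and is added here
purely so the sketch can be exercised outside a real hook context:

```python
import subprocess


def application_version_set(version, _run=subprocess.check_call):
    """Sketch of the new helper: it invokes the 'application-version-set'
    hook tool with a single string argument, the app version.
    The _run seam is illustrative, not part of the real API."""
    _run(['application-version-set', str(version)])


# Inside a real charm hook you would simply call:
#   application_version_set('1.10.2')
# Outside a hook we can inject a recorder instead of really shelling out:
calls = []
application_version_set('1.10.2', _run=calls.append)
```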

# Downside

If you've managed to get `charmhelpers-0.9.0.tar.gz` in the wheelhouse
directory of your built charm, you will need to run charm build again.
There was a problem in the project's setup.py which excluded a few
packages. The project has been updated from distutils to setuptools and
future-proofed against this problem.
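To illustrate why the move to setuptools guards against this (package names
below are illustrative, not the actual fix): distutils' setup() only ships
the packages you list by hand, so a forgotten subpackage is silently
dropped from the tarball, while setuptools' find_packages() discovers every
directory carrying an __init__.py:

```python
import os
import tempfile

from setuptools import find_packages

# Build a throwaway package tree and let find_packages() walk it;
# with plain distutils each of these would have to be listed by hand,
# and a missing entry would silently vanish from the release tarball.
with tempfile.TemporaryDirectory() as root:
    for pkg in ('charmhelpers', 'charmhelpers/core', 'charmhelpers/fetch'):
        path = os.path.join(root, pkg)
        os.makedirs(path)
        open(os.path.join(path, '__init__.py'), 'w').close()
    found = sorted(find_packages(root))

print(found)  # every subpackage is picked up automatically
```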

Thanks,
Marco Ceppi
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Follow-up on unit testing layered charms

2016-09-08 Thread Pete Vander Giessen
Hi All,

> Stuart Bishop wrote:
> The tearDown method could reset the mock easily enough.

If only it were that simple :-)

To patch imports, the harness was actually providing a context that you
could use to wrap the imports at the top of your test module. That solved
the immediate issue of executing imports without errors, but it created a
very complex situation when you went to figure out which references to
clean up or update when you wanted to reset mocks. You also weren't able to
clean them up in tearDown, or even tearDownClass, because you had to handle
the situation where you had multiple test classes in a module.

One workaround is to do your imports inside the setUp for a test. That
doesn't feel like the correct way to do things in a library meant for
general use, where I'd prefer to stick to things that don't make Guido sad.
I wouldn't necessarily object to the technique if it came up in a code
review for a specific charm, though :-)
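A minimal sketch of that setUp-time workaround (module and attribute names
are illustrative; mock.patch.dict on sys.modules is one way to get the
cleanup for free, so stubs never leak between test classes):

```python
import sys
import unittest
from unittest import mock


class TestCharmLogic(unittest.TestCase):
    def setUp(self):
        # Stub every level of the package path so the import machinery
        # finds the fakes; patch.dict undoes the change on cleanup,
        # which is what keeps mocks from bleeding across test classes.
        self.hookenv = mock.MagicMock()
        fake_core = mock.MagicMock(hookenv=self.hookenv)
        fake_pkg = mock.MagicMock(core=fake_core)
        patcher = mock.patch.dict(sys.modules, {
            'charmhelpers': fake_pkg,
            'charmhelpers.core': fake_core,
            'charmhelpers.core.hookenv': self.hookenv,
        })
        patcher.start()
        self.addCleanup(patcher.stop)

    def test_status_set_called(self):
        # The import happens here, after the stubs are in place --
        # the "imports inside setUp/tests" workaround discussed above.
        from charmhelpers.core.hookenv import status_set
        status_set('active', 'ready')
        self.hookenv.status_set.assert_called_once_with('active', 'ready')
```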

~ PeteVG

On Thu, Sep 1, 2016 at 4:48 AM Stuart Bishop 
wrote:

> On 30 August 2016 at 23:02, Pete Vander Giessen <
> pete.vandergies...@canonical.com> wrote:
>
> The problems with the harness: patching sys.modules leads to a catch-22:
>> if we don't leave the mocks in place, we still get import errors when using
>> the mock library's mock.patch method, but if we do leave them in place,
>> tests that set properties on them can end up interfering with each other.
>> There are workarounds, but they're not intuitive, and they don't generate
>> friendly error messages when they fail. We felt it best to leave the
>> harness behind, and provide some advice on more straightforward things that
>> you can do to work around the import errors. Thus the PR referenced above.
>>
>
> The tearDown method could reset the mock easily enough. I didn't need to
> do that in the simple case I had (a single layer), but it should solve the
> test isolation issue you have.
>
>
>
> --
> Stuart Bishop 
>


Juju GUI 2.1.12 released

2016-09-08 Thread Jeff Pihach
The next version of the Juju GUI has been released!

This release includes a number of fixes that bring the GUI back in line
with the recent changes in the Juju 2 betas, re-enabling the ability to
switch models from the GUI, which was temporarily removed in the previous
release.

Other improvements include:
  - Bundles now use "applications" top level key instead of "services".
  - Use a different WebSocket connection for the model and controller.
  - Create New Model button moved into the user profile.
  - Bundles with lxc placements are automatically converted to lxd on
deploy.
  - Multi-series subordinates now have their series locked to the series of
the first related parent application.
  - (Fix) Local charms now deploy without issuing an error about charm
location.
  - (Fix) When relating to subordinates, invalid targets are now faded.

To upgrade your existing models to use this version of the GUI:

Juju 2 beta:
  - Download the *.bz2 from
https://github.com/juju/juju-gui/releases/tag/2.1.12
  - Run `juju upgrade-gui /path/to/bz2`
  - Run `juju gui --show-credentials`

Juju 1:
  - juju upgrade-charm juju-gui

We welcome any feedback you may have on the GUI, you can chat with us in
#juju on irc.freenode.net or you can file issues here:
https://github.com/juju/juju-gui/issues




Re: Deploying Xenial charms in LXD? Read this

2016-09-08 Thread Tom Barber
Yeah I hit something like this on EC2 last night.

--

Director Meteorite.bi - Saiku Analytics Founder
Tel: +44(0)5603641316

(Thanks to the Saiku community we reached our Kickstarter goal, but you
can always help by sponsoring the project)

On 8 September 2016 at 14:28, Andrew Wilkins 
wrote:

> On Thu, Sep 8, 2016 at 9:23 PM Marco Ceppi 
> wrote:
>
>> Hey everyone,
>>
>> An issue was identified late yesterday for those deploying Xenial charms
>> to either the LXD provider or LXD instances in a cloud.
>>
>> The symptoms manifest as the LXD machine being in a "running" state
>> (with an IP address assigned) while the agent never starts. This leaves
>> the workload permanently stuck in a "Waiting for agent to initialize"
>> state. It originates from an issue in cloud-init and systemd, triggered
>> by an update to the snapd package for xenial.
>>
>
> FYI this is not LXD-specific. I've just now seen the same thing happen on
> Azure.
>
> Thanks to James Beedy, from the community, for posting this workaround
>> which appears to be working consistently for the moment:
>>
>> juju set-model-config enable-os-refresh-update=false
>> juju set-model-config enable-os-upgrade=false
>>
>>
>> This should bypass the section of the cloud-init process that's causing
>> the hang at the moment. For those interested in tracking the bugs I believe
>> these are the two related ones for this problem:
>>
>> - https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1621229
>> - https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1576692
>>
>> I'll make sure to post an update when this has been resolved.
>>
>> Thanks,
>> Marco Ceppi


Re: Deploying Xenial charms in LXD? Read this

2016-09-08 Thread Andrew Wilkins
On Thu, Sep 8, 2016 at 9:23 PM Marco Ceppi 
wrote:

> Hey everyone,
>
> An issue was identified late yesterday for those deploying Xenial charms
> to either the LXD provider or LXD instances in a cloud.
>
> The symptoms manifest as the LXD machine being in a "running" state (with
> an IP address assigned) while the agent never starts. This leaves the
> workload permanently stuck in a "Waiting for agent to initialize" state.
> It originates from an issue in cloud-init and systemd, triggered by an
> update to the snapd package for xenial.
>

FYI this is not LXD-specific. I've just now seen the same thing happen on
Azure.

Thanks to James Beedy, from the community, for posting this workaround
> which appears to be working consistently for the moment:
>
> juju set-model-config enable-os-refresh-update=false
> juju set-model-config enable-os-upgrade=false
>
>
> This should bypass the section of the cloud-init process that's causing
> the hang at the moment. For those interested in tracking the bugs I believe
> these are the two related ones for this problem:
>
> - https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1621229
> - https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1576692
>
> I'll make sure to post an update when this has been resolved.
>
> Thanks,
> Marco Ceppi
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Deploying Xenial charms in LXD? Read this

2016-09-08 Thread Marco Ceppi
Hey everyone,

An issue was identified late yesterday for those deploying Xenial charms to
either the LXD provider or LXD instances in a cloud.

The symptoms manifest as the LXD machine being in a "running" state (with
an IP address assigned) while the agent never starts. This leaves the
workload permanently stuck in a "Waiting for agent to initialize" state.
It originates from an issue in cloud-init and systemd, triggered by an
update to the snapd package for xenial.

Thanks to James Beedy, from the community, for posting this workaround
which appears to be working consistently for the moment:

juju set-model-config enable-os-refresh-update=false
juju set-model-config enable-os-upgrade=false


This should bypass the section of the cloud-init process that's causing the
hang at the moment. For those interested in tracking the bugs I believe
these are the two related ones for this problem:

- https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1621229
- https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1576692

I'll make sure to post an update when this has been resolved.

Thanks,
Marco Ceppi


Re: Latest new about Juju master branch - upload-tools obsoleted

2016-09-08 Thread Casey Marshall
I discovered another trick that works: set the streams and URLs to invalid
values in your bootstrap config. This forces Juju to use an
already-compiled jujud in your $PATH. For example, bootstrap --config with:

image-metadata-url: http://localhost
image-stream: nope
agent-metadata-url: http://localhost
agent-stream: nope

I'd be happier if I could get this effect with a --use-agent=/path/to/jujud
option, but this works.

On Wed, Sep 7, 2016 at 9:28 PM, Nate Finch  wrote:

> Just a note, because it wasn't clear to me - there are a couple cases
> where the automatic upload tools won't do what you want, if you use a
> version of juju you built locally.
>
> If you're not a developer or someone who builds juju from source, it's
> safe to ignore this email.
>
> *1. If the version of juju you're building is available in streams and you
> *want* to upload, you have to use --build-agent, because by default we
> won't upload.  *
> This happens if you're purposely building an old release, or if QA just
> released a new build and you haven't pulled yet and/or master hasn't been
> updated with a new version number.  --build-agent works like the old
> upload-tools, except it always rebuilds jujud, even if there's an existing
> jujud binary.
>
> *2. If you're building a version of the code that is not available in
> streams, and you *don't* want to upload, you must use
> --agent-version=.*
> This can happen if you want to deploy a known-good server version, but
> only have a dev client around.  I use this to make sure I can reproduce
> bugs before testing my fixes.  --agent-version works basically like the old
> default (non-upload) behavior, except you have to explicitly specify a juju
> version that exists in streams to deploy (e.g. --agent-version=2.0-beta17)
>
>
> Note that if you want to be *sure* that juju bootstrap always does what
> you expect it to, IMO you should always use either --build-agent (to
> upload) or --agent-version (to not upload), depending on your intent.  The
> behavior of the bare juju bootstrap can change without warning (from
> uploading to not uploading) if a new version of jujud is pushed to streams
> that matches what you're building locally (which happens every time a new
> build is pushed to streams, until master is updated and you git pull and
> rebuild), and that can be really confusing if you are expecting your
> changes to be uploaded, and they're not.  It also changes behavior if you
> switch from master (which always uploads) to a release tag (which never
> uploads), which while predictable, can be easy to forget.
>
> Related note, beta18 (which is in current master) and later versions of
> the client can't bootstrap with --agent-version 2.0-beta17 or earlier, due
> to a breaking change (you'll get an error about mismatching UUIDs).  This
> type of breakage is rare, and generally should only happen during betas (or
> pre-beta), but it impacts us right now, so... yeah.
>
> -Nate
>
>
>


A couple of API changes coming in Juju beta18 this week

2016-09-08 Thread Ian Booth
Just a heads up: 3 APIs are moving to a different facade. There are no
semantic changes other than the move. The only end-user-visible difference
is that the juju model-defaults command now operates only on a controller
and no longer supports specifying a model using -m.

The APIs are to do with setting inherited default model values, so if you don't
care about those, don't bother reading on. These APIs are quite new so hopefully
any downstream impact will be zero or negligible.

The APIs are

ModelDefaults() (config.ModelDefaultAttributes, error)
SetModelDefaults(cloud, region string, config map[string]interface{})
UnsetModelDefaults(cloud, region string, keys ...string) error

These were on the ModelConfig facade but are now on the ModelManager facade. The
latter is a facade which is accessed via a controller endpoint rather than a
model endpoint.




