Re: Openstack User Feedback

2018-04-04 Thread Ryan Beisner
Touched base on IRC out of band with this, just circling back here as well
to confirm that we're working to reproduce and advise.

Thanks again,

Ryan

On Wed, Apr 4, 2018 at 11:04 AM, James Beedy  wrote:

> forgot one: https://bugs.launchpad.net/charms.ceph/+bug/1761230
>
> On Wed, Apr 4, 2018 at 8:19 AM, James Beedy  wrote:
>
>> Here are the bugs:
>>
>> 1) https://bugs.launchpad.net/charms.ceph/+bug/1761214
>> 2) https://bugs.launchpad.net/charms.ceph/+bug/1761208
>> 3) https://bugs.launchpad.net/charm-ceph-fs/+bug/1753640
>>
>>
>> On Wed, Apr 4, 2018 at 5:52 AM, James Beedy  wrote:
>>
>>> Hello,
>>>
>>> @chrismacnaughton @jamespage @beisner @openstack-charmers the ceph <->
>>> openstack integration is quite broken right now.
>>>
>>> Please fix.
>>>
>>> Here is my bundle https://pastebin.com/gEUYaiFh
>>>
>>>
>>> (From IRC)
>>> 05:29  heres my deploy, looks great
>>> 05:29  https://pastebin.com/a2iNrBxv
>>> 05:30  following a (what appears to be successful) deploy, I only
>>> see a 'glance' pool in ceph, and its in warning due to pg_num > pgp_num
>>> 05:31  increasing pgp_num gets me a healthy status
>>> 05:32  follow by another health warning
>>> https://paste.ubuntu.com/p/zD7HxMz6s6/
>>> 05:32  running `sudo ceph osd pool application enable glance rbd`
>>> got me back to healthy on my 'glance' pool
>>> 05:33  these are the two first papercuts
>>> 05:33  next - Manager for service cinder-volume cinder@cinder-ceph
>>> is reporting problems, not sending heartbeat. Service will appear "down".
>>> 05:34   the only pool I see following a deploy is 'glance'
>>> https://paste.ubuntu.com/p/Gg5G3Bmhq3/
>>> 05:36  when I try to create an instance in openstack I get the
>>> good ol' - Error: Failed to perform requested operation on instance "aset",
>>> the instance has an error status: Please try again later [Error: Build of
>>> instance 13e170cd-6aea-43ed-9ca1-de2ae62c3118 aborted: Volume
>>> 72e36907-81df-446c-b580-78bd96134ce0 did not finish being created even
>>> after we waited 0 seconds or 1 attempts. And its status is error.].
>>> 05:36  possibly there is a bunch of post deploy configuration that
>>> need be done here?
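
The two papercuts above both trace back to Ceph placement-group hygiene: pgp_num lagging pg_num triggers the first health warning, and a pool without an application tag triggers the second. As a rough sketch (a hypothetical helper, not part of the charms), the usual sizing rule of thumb is about 100 PGs per OSD divided by the replica count, rounded up to a power of two, with pgp_num kept equal to pg_num:

```python
# Rough sketch of the common Ceph placement-group sizing rule of thumb:
# target ~100 PGs per OSD, divided by the pool's replica count, rounded
# up to the next power of two. pgp_num should then match pg_num;
# otherwise ceph raises the pg_num > pgp_num warning seen in the log.
# Hypothetical helper for illustration only.

def suggested_pg_num(num_osds, replicas=3, pgs_per_osd=100):
    target = (num_osds * pgs_per_osd) // replicas
    pg_num = 1
    while pg_num < target:
        pg_num *= 2
    return pg_num

def is_pg_healthy(pg_num, pgp_num):
    # Ceph warns whenever pgp_num trails pg_num.
    return pgp_num >= pg_num

if __name__ == "__main__":
    print(suggested_pg_num(3))      # small 3-OSD cluster -> 128
    print(is_pg_healthy(128, 64))   # the warning state from the log -> False
```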
>>>
>>>
>>> Thanks
>>>
>>
>>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju Openstack - Blocked

2018-02-19 Thread Ryan Beisner
Great!  Thanks for confirming.  I'm happy to have helped de-obfuscate the
situation.  I may or may not have caused that same little flavor overrun
"once" - cannot confirm.  ;-)

Cheers,

Ryan

On Mon, Feb 19, 2018 at 4:42 PM, James Beedy  wrote:

> I wanted to chime back in with the solution to the issue I was
> experiencing.
>
> The source of the issue undoubtedly was me. I was trying to launch a
> flavor which had insufficient root-disk for a xenial.img (2G).
>
> The traceback that indicated the insufficient-disk error was being
> obfuscated by the user-facing error I was seeing, the block_device_mapping
> error [0].
> This, alongside the spurious log error messages from service startup, led
> me down unwarranted rabbit holes.
>
> I thought I was deploying the most stripped-down instance with the bare
> essentials from the Horizon UI; in reality (thanks, beisner) you can deploy
> an even more stripped-down instance (without a block device) from the
> command line with `openstack server create foo --flavor
>  --image `.
>
> Deploying an instance without a block device allowed us to bypass the
> block_device_mapping error and get the real underlying traceback [1].
>
> Massive thanks to Beisner for taking the time to talk through that with
> me, and pointing out the command^ that led us to the real error.
>
>
> ~James
>
> [0] https://paste.ubuntu.com/p/pTPh5vhBPp/
> [1] https://paste.ubuntu.com/p/3XtxTTFXHM/
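
The failure mode here can be sketched as a simple pre-flight check: a flavor whose root disk is smaller than the image (a 2G xenial image on a too-small flavor) is doomed before nova ever starts, but the real cause is hidden behind the block_device_mapping error. This is an illustrative helper with made-up names and sizes, not anything nova actually exposes:

```python
# Hypothetical pre-flight check for the failure mode in this thread:
# booting a flavor whose root disk is smaller than the image fails deep
# inside nova, surfacing only as a block_device_mapping error.
# Names and sizes are illustrative.

def flavor_fits_image(flavor_root_gb, image_size_gb, image_min_disk_gb=0):
    """Return True if the flavor's root disk can hold the image."""
    required = max(image_size_gb, image_min_disk_gb)
    return flavor_root_gb >= required

if __name__ == "__main__":
    # A 1G root disk cannot hold a 2G xenial image -> the doomed boot.
    print(flavor_fits_image(flavor_root_gb=1, image_size_gb=2))
    print(flavor_fits_image(flavor_root_gb=10, image_size_gb=2))
```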
>
>
> On Thu, Feb 15, 2018 at 7:49 AM, James Beedy  wrote:
>
>> Junaid and James,
>>
>>
>> Thanks for the response. Here are the logs.
>>
>>
>> Nova-Cloud-Controller
>>
>> $ cat /var/log/nova/nova-scheduler.log | http://paste.ubuntu.com/p/TQjDSXQSDt/
>>
>> $ cat /var/log/nova/nova-conductor.log | http://paste.ubuntu.com/p/68TcmMCr82/
>>
>> $ sudo cat /var/log/nova/nova-api-os-compute.log |
>> http://paste.ubuntu.com/p/5xWpXbD5PC/
>>
>>
>> Neutron-Gateway
>>
>> $ sudo cat /var/log/neutron/neutron-metadata-agent.log  |
>> http://paste.ubuntu.com/p/MW3qkQqntJ/
>>
>>
>> $ sudo cat /var/log/neutron/neutron-openvswitch-agent.log |
>> http://paste.ubuntu.com/p/qz3vfzG9b9/
>>
>>
>> Neutron-api
>>
>> $ sudo cat /var/log/neutron/neutron-server.log |
>> http://paste.ubuntu.com/p/sCCNw4bXtW/
>>
>> Thanks,
>>
>>
>> James
>>
>>
>> On Thu, Feb 15, 2018 at 7:24 AM, James Page 
>> wrote:
>>
>>> Hi James
>>>
>>> On Wed, 14 Feb 2018 at 20:22 James Beedy  wrote:
>>>
 Hello,

 I am experiencing some issues with a base-openstack deploy.

 I can get a base-openstack to deploy legitimately using MAAS with no
 apparent errors from Juju's perspective. Following some init ops and the
 launching of an instance, I find myself stuck with errors I'm unsure how to
 diagnose. I upload an image, create my networks, flavors, and launch an
 instance, and see the instance erroring out with a "host not found" error
 when I try to launch it.

 My nova/ceph node and neutron node interface configuration [0] all have
 a single flat 1G mgmt-net interface configured via MAAS, and vlans trunked
 in on enp4s0f0 (untracked by maas).


 Looking to the nova logs, I find [1] [2]

>>>
>>> You can ignore most of those errors - prior to the charm being fully
>>> configured the daemons will log some error messages about broken db/rmq
>>> etc...  newer reactive charms tend to disable services until config is
>>> complete, older classic ones do not.
>>>
>>> The compute node is not recording an error that would indicate some
>>> sort of scheduler problem - /var/log/nova/nova-scheduler.log from the
>>> nova-cloud-controller would tell us more.
>>>
>>> The bundle I'm using [3] is a lightly modified version of the openstack
 base bundle [4] with modifications to match my machine tags and mac
 addresses for my machines.

>>>
>>> Seems reasonable - bundles are meant as a start point after all!
>>>
>>> I've gone back and forth with network and charm config trying different
 combinations in the hope that this error was caused by some misconfiguration on my
 end, but I am now convinced this is something outside of my scope as an
 operator, and am hoping for some insight from the greater community.

 I seem to be able to reproduce this consistently (using both Juju <
 2.3.2 and 2.3.2).

 Not even sure if I should create a bug somewhere as I'm not 100% sure
 this isn't my fault. Let me know if additional info is needed.

>>>
>>> Let's dig into the scheduler log and see.
>>>
>>> Cheers
>>>
>>> James
>>>
>>
>>
>
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju-dev
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Trouble adding IPv6 to installed Keystone charm.

2017-12-05 Thread Ryan Beisner
Hi Ken,

It is possible that you've hit a bug.  But to know for sure, can you tell
us the output of this command on the unit?:
ip addr show eth0

Also feel free to raise a bug [1] with further details and/or chat on
freenode IRC, the #openstack-charms channel.
[1] https://bugs.launchpad.net/charm-keystone/+filebug
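
For context, the charm's check roughly amounts to scanning the interface's addresses for one that is global in scope and not a privacy-extension (temporary) address. A simplified sketch modeled loosely on the charmhelpers logic (not the exact code), parsing `ip addr show` style output:

```python
# Sketch of the check the charm roughly performs (modeled loosely on the
# charmhelpers get_ipv6_addr logic; simplified for illustration): find an
# inet6 address with global scope that is NOT flagged "temporary"
# (a privacy-extension address) in `ip addr show` output.
import re

def global_nontemp_ipv6(ip_addr_output):
    addrs = []
    for line in ip_addr_output.splitlines():
        line = line.strip()
        if line.startswith("inet6") and "scope global" in line and "temporary" not in line:
            m = re.search(r"inet6 ([0-9a-f:]+)/\d+", line)
            if m:
                addrs.append(m.group(1))
    return addrs

sample = """\
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500
    inet6 2001:db8::100/64 scope global
       valid_lft forever preferred_lft forever
    inet6 2001:db8::abcd/64 scope global temporary dynamic
    inet6 fe80::216:3eff:fe7f:eb70/64 scope link
"""

if __name__ == "__main__":
    print(global_nontemp_ipv6(sample))  # only the non-temporary global address
```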

Cheers,

Ryan


On Mon, Dec 4, 2017 at 1:25 PM, Ken D'Ambrosio  wrote:

> Hey, all.  Looking to enable IPv6 endpoints on our Newton cloud.  "juju
> config keystone" shows this:
>
>  prefer-ipv6:
> description: |
>   If True enables IPv6 support. The charm will expect network
> interfaces
>   to be configured with an IPv6 address. If set to False (default) IPv4
>   is expected.
>   .
>   NOTE: these charms do not currently support IPv6 privacy extension.
> In
>   order for this charm to function correctly, the privacy extension
> must be
>   disabled and a non-temporary address must be configured/available on
>   your network interface.
> type: boolean
> value: true
>
> It had originally been false.  I manually added 2004::100 to eth0 on my
> keystone host, and changed the value to true.  The Juju log on the Keystone
> host comes back with this:
>
> 2017-12-04 19:16:06 INFO config-changed Exception: Interface 'eth0' does
> not have a scope global non-temporary ipv6 address.
>
> eth0 looks like this:
> eth0  Link encap:Ethernet  HWaddr 00:16:3e:7f:eb:70
>   inet addr:172.23.248.38  Bcast:172.23.248.63
> Mask:255.255.255.192
>   inet6 addr: 2004::100/64 Scope:Global
> [...]
>
> I've googled like crazy, and can't find anything that really seems to fit
> the bill for "non-temporary ipv6 address", except in a different charm (
> https://api.jujucharms.com/v5/~sdn-charmers/trusty/contrail-analytics-31/archive/hooks/charmhelpers/contrib/network/ip.py) which
> says:
>
> We currently only support scope global IPv6 addresses i.e.
> non-temporary
> addresses. If no global IPv6 address is found, return the first one
> found
> in the ipv6 address list.
>
> Which seems to imply that a global IPv6 address *is* non-temporary by
> definition, which confuses me even more as to what's broken.
>
> Any pointers here?
>
> Thanks kindly,
>
> -Ken
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/mailm
> an/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Proposing Alex Kavanagh for ~charmers

2017-04-26 Thread Ryan Beisner
Strong +1 here.

On Wed, Apr 26, 2017 at 12:45 PM, Konstantinos Tsakalozos <
kos.tsakalo...@canonical.com> wrote:

> +1 from me too!
>
> On Wed, Apr 26, 2017 at 5:07 PM, Tom Barber  wrote:
>
>> Unofficial +1 from the other half of Norwich...
>>
>> On Wed, Apr 26, 2017 at 3:06 PM, Pete Vander Giessen <
>> pete.vandergies...@canonical.com> wrote:
>>
>>> +1 from me :-)
>>>
>>> On Wed, Apr 26, 2017 at 9:43 AM Charles Butler <
>>> charles.but...@canonical.com> wrote:
>>>
 I for one, cast my +1 vote

 It's been great working with tinwood regarding charms, layers, and
 reactive 2.0. His reviews have been on point and his helpful hand with the
 community has been a pleasure to witness.

 All the best,

 Charles



 On Wed, Apr 26, 2017 at 4:52 AM James Page 
 wrote:

> Hi Charmers
>
> I'd like to propose Alex (tinwood) for membership of charmers; he's
> worked extensively across the OpenStack charms, providing valuable reviews
> to the rest of the team, has been reviewing charm-helpers updates and has
> been instrumental in the OpenStack charms use of reactive and layers for
> building newer charms.
>
> He's also working as part of the team for Reactive 2.0.
>
> I think he'll be a valuable addition to the core charmers team!
>
> Cheers
>
> James
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/mailm
> an/listinfo/juju
>
 --
 Juju Charmer
 Canonical Group Ltd.
 Ubuntu - Linux for human beings | www.ubuntu.com
 conjure-up canonical-kubernetes | jujucharms.com
 --
 Juju mailing list
 Juju@lists.ubuntu.com
 Modify settings or unsubscribe at: https://lists.ubuntu.com/mailm
 an/listinfo/juju

>>>
>>> --
>>> Juju mailing list
>>> Juju@lists.ubuntu.com
>>> Modify settings or unsubscribe at: https://lists.ubuntu.com/mailm
>>> an/listinfo/juju
>>>
>>>
>>
>>
>> --
>> Tom Barber
>> CTO Spicule LTD
>> t...@spicule.co.uk
>>
>> http://spicule.co.uk
>>
>> @spiculeim 
>>
>> Schedule a meeting with me 
>>
>> GB: +44(0)5603641316 <+44%2056%200364%201316>
>> US: +18448141689 <(844)%20814-1689>
>>
>> 
>>
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com
>> Modify settings or unsubscribe at: https://lists.ubuntu.com/mailm
>> an/listinfo/juju
>>
>>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Goodbye Opsworks

2017-04-03 Thread Ryan Beisner
Nice work, bdx :)

On Fri, Mar 31, 2017 at 8:06 PM, James Beedy  wrote:

> The day has finally come for me to turn down the last of our Opsworks
> instances for our PRM application. This marks the completion of one of many
> Opsworks -> Juju conversion projects I've taken on. Thanks everyone for
> your help along the way!
>
> Goodbye Opsworks - http://imgur.com/a/4pkgP
>
> Hello Juju PRM!
>
> Staging - http://paste.ubuntu.com/24291143/
> Demo - http://paste.ubuntu.com/24291156/
> Production - http://paste.ubuntu.com/24291133/
> Walmart - http://paste.ubuntu.com/24291173/
>
> W00T!
>
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju-dev
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Can't push to my own team

2017-02-06 Thread Ryan Beisner
Hi Tom,

On a rare occasion, I've had to use a slightly larger hammer than
logout/login, which is to remove (or rename) the following files:
~/.go-cookies
~/.local/share/juju/store-usso-token

That should force a fresh re-auth.
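
The "bigger hammer" can be sketched as moving those cached auth files aside rather than deleting them, so a backup survives. This is a hypothetical helper (the paths are the ones named above; taking the home directory as a parameter just makes it safe to exercise):

```python
# Sketch of the auth reset described above: rename the cached credential
# files so the charm tooling is forced to re-authenticate, keeping a
# .bak copy of each. Hypothetical helper; paths are as named above.
import os

AUTH_FILES = [
    ".go-cookies",
    os.path.join(".local", "share", "juju", "store-usso-token"),
]

def reset_charm_auth(home):
    """Move known charm auth caches aside; return the ones moved."""
    moved = []
    for rel in AUTH_FILES:
        path = os.path.join(home, rel)
        if os.path.exists(path):
            os.rename(path, path + ".bak")
            moved.append(rel)
    return moved
```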

Cheers,

Ryan



On Mon, Feb 6, 2017 at 5:38 PM, Tom Barber  wrote:

> Hi folks
>
> For whatever reason
>
> charm whoami
>
> has decided it won't see my own membership of my own LP team even though LP
> lists me as the owner:
> https://launchpad.net/~spiculecharms/+members
>
> This prevents me pushing charms to the charm store for a reason I can't
> fathom.
>
> I did some work with Cory and did the usual logout and in on
> jujucharms.com and charm login, but none of them brought my group
> membership back.
>
> So where have I messed up?
>
>
>
> --
> Tom Barber
> CTO Spicule LTD
> t...@spicule.co.uk
>
> http://spicule.co.uk
>
> @spiculeim 
>
> Schedule a meeting with me 
>
> GB: +44(0)5603641316 <+44%2056%200364%201316>
> US: +18448141689 <(844)%20814-1689>
>
> 
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


charm store series enablement/timing (unrecognized series "zesty")

2016-12-03 Thread Ryan Beisner
Hi All,

Do we have an ETA for zesty series recognition in the charm store?  It
looks like it is already committed in master, so I imagine it's just a
matter of the store release schedule.

OpenStack Charm CI consumes cs: artifacts for our charm dev and test, and
all of the official OpenStack charms are multi-series.  That means that the
presence of zesty series metadata effectively halts our dev/test and CI for
all series.  We are seeing:

unrecognized series "zesty" in metadata
https://github.com/juju/charmstore/issues/695


To unblock devs in the mean time, I've raised reviews to remove zesty
series metadata from the affected charm repos.
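
The interim workaround amounts to dropping the not-yet-recognised series from each charm's metadata. A minimal sketch of that transform (illustrative only; the actual reviews edited the YAML in each affected repo):

```python
# Sketch of the interim workaround: strip a not-yet-recognised series
# from a charm's metadata series list, leaving the original untouched.
# Illustrative only; the real fix was YAML edits in each charm repo.

def drop_series(metadata, series):
    md = dict(metadata)
    md["series"] = [s for s in md.get("series", []) if s != series]
    return md

meta = {"name": "keystone", "series": ["xenial", "yakkety", "zesty"]}
print(drop_series(meta, "zesty")["series"])  # zesty removed
```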

I think it's pretty important for the charm store to get ahead of this by
always enabling the next dev series as soon as that series opens for
development.  And, we'll look before we leap on each cycle.

Cheers & thanks,

Ryan
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: How to reproduce the same build output binaries?

2016-11-03 Thread Ryan Beisner
As far as I know, there is no notion of a stable Layer or a stable
Interface.  That makes it difficult to carry any layered charm as "stable,"
and quite awkward to cherry-pick and backport fixes to stable charms which
depend on Layers and Interfaces.

As you mention, you could synthesize stability (or a point in time) by
branching or forking repos, but I think Layers and Interfaces should
ultimately grow proper versioning semantics.
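
The fork-and-pin idea can be sketched as rewriting floating layer/interface references to pinned local clones so `charm build` resolves them from a fixed point in time. The lockfile shape and the `local:` prefix below are hypothetical, purely to illustrate the mapping:

```python
# Sketch of the fork-and-pin workaround: rewrite floating layer.yaml-style
# includes ("layer:basic") to pinned local clones recorded in a lockfile.
# The lockfile shape and "local:" prefix are hypothetical illustrations.

def pin_includes(includes, lockfile):
    pinned = []
    for ref in includes:
        kind, _, name = ref.partition(":")
        if (kind, name) in lockfile:
            # e.g. a local clone checked out at a recorded commit
            pinned.append("local:" + lockfile[(kind, name)])
        else:
            pinned.append(ref)
    return pinned
```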

Cheers,

Ryan


On Thu, Nov 3, 2016 at 4:31 AM, Konstantinos Tsakalozos <
kos.tsakalo...@canonical.com> wrote:

> Hi everyone,
>
> This is probably a question on best practises.
>
> A reasonable ask for a build process is to be able to reproduce the same
> output artifacts from a certain point in time. For example, I would like to
> be able to rebuild the same charm I built 10 minutes, a week, or a month
> ago. I can think of a way to do that, but it involves forking the layers
> used and getting them locally before charm build. Is there a better way?
> What would you do to accommodate this requirement?
>
> Thanks,
> Konstantinos
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Zuul Charm

2016-11-02 Thread Ryan Beisner
ie. Project repos would be born, developed and maintained here.  :-)
https://github.com/openstack?query=charm

On Wed, Nov 2, 2016 at 12:28 PM, Ryan Beisner <ryan.beis...@canonical.com>
wrote:

> As Zuul is an OpenStack project, I'd like to see this developed in line
> with the other OpenStack Charms.  I would be happy to help light the path
> along the way to ensure that all efforts are as efficient as possible, and
> that the resultant layer(s), interface(s) and charm(s) can leverage the
> powerful charm CI already in place, including automatic build, push,
> publish to the charm store.
>
> http://docs.openstack.org/developer/charm-guide/
>
> Cheers,
>
> Ryan
>
>
>
> On Wed, Nov 2, 2016 at 9:49 AM, Marco Ceppi <marco.ce...@canonical.com>
> wrote:
>
>> Hey James,
>>
>> I think this is the best way about it for the time being, discussing what
>> people are working on ahead of it being perfect gives everyone a chance to
>> see what's going on and can help focus people on getting help from others
>> interested!
>>
>> On Wed, Nov 2, 2016 at 10:48 AM James Beedy <jamesbe...@gmail.com> wrote:
>>
>>> I would like to update/rewrite the Zuul charm. To avoid writing charms
>>> twice, I'm reaching out to see if anyone is currently
>>> working on or maintaining the Zuul charm already. If you are interested in
>>> this, or already have something going on with Zuul, please let me know.
>>>
>>> Secondly, as @marcoceppi and I lightly discussed, we should find a
>>> better way to introspect who is working on what so we don't end up with
>>> multiple people writing the same charms.
>>>
>>> Thoughts?
>>>
>>> ~James
>>> --
>>> Juju-dev mailing list
>>> juju-...@lists.ubuntu.com
>>> Modify settings or unsubscribe at: https://lists.ubuntu.com/mailm
>>> an/listinfo/juju-dev
>>>
>>
>> --
>> Juju-dev mailing list
>> juju-...@lists.ubuntu.com
>> Modify settings or unsubscribe at: https://lists.ubuntu.com/mailm
>> an/listinfo/juju-dev
>>
>>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Zuul Charm

2016-11-02 Thread Ryan Beisner
As Zuul is an OpenStack project, I'd like to see this developed in line
with the other OpenStack Charms.  I would be happy to help light the path
along the way to ensure that all efforts are as efficient as possible, and
that the resultant layer(s), interface(s) and charm(s) can leverage the
powerful charm CI already in place, including automatic build, push,
publish to the charm store.

http://docs.openstack.org/developer/charm-guide/

Cheers,

Ryan



On Wed, Nov 2, 2016 at 9:49 AM, Marco Ceppi 
wrote:

> Hey James,
>
> I think this is the best way about it for the time being, discussing what
> people are working on ahead of it being perfect gives everyone a chance to
> see what's going on and can help focus people on getting help from others
> interested!
>
> On Wed, Nov 2, 2016 at 10:48 AM James Beedy  wrote:
>
>> I would like to update/rewrite the Zuul charm. To avoid writing charms
>> twice, I'm reaching out to see if anyone is currently working on or
>> maintaining the Zuul charm already. If you are interested in this, or
>> already have something going on with Zuul, please let me know.
>>
>> Secondly, as @marcoceppi and I lightly discussed, we should find a better
>> way to introspect who is working on what so we don't end up with multiple
>> people writing the same charms.
>>
>> Thoughts?
>>
>> ~James
>> --
>> Juju-dev mailing list
>> juju-...@lists.ubuntu.com
>> Modify settings or unsubscribe at: https://lists.ubuntu.com/
>> mailman/listinfo/juju-dev
>>
>
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju-dev
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: [ANN] Updated Python Juju Client

2016-11-01 Thread Ryan Beisner
On Tue, Nov 1, 2016 at 9:30 AM, Marco Ceppi <marco.ce...@canonical.com>
wrote:

> This is really one of the goals. python-jujuclient, amulet, and even to an
> extent deployer all poorly implement an abstraction to Juju. These will all
> eventually fall away in favor of a consolidated, focused, well built Object
> Oriented Python library.
>
+1000 :-)


> Marco
>
> On Tue, Nov 1, 2016, 4:25 PM Ryan Beisner <ryan.beis...@canonical.com>
> wrote:
>
>> This is good stuff.  I think keeping it focused on Juju 2.0 and later,
>> completely free of legacy shims, is a good thing.  I'd love to be using
>> this natively instead of the collective stack of [Amulet + Juju-deployer +
>> python-jujuclient], and plan to take it for a spin.
>>
>> Cheers,
>>
>> Ryan
>>
>> On Tue, Nov 1, 2016 at 8:49 AM, Tim Van Steenburgh <
>> tim.van.steenbu...@canonical.com> wrote:
>>
>> Hi everyone,
>>
>> We've been working on a new python client for Juju. It's still in
>> development,
>> but we wanted to share the first bits to elicit feedback:
>> https://github.com/juju/python-libjuju
>>
>> Features of this library include:
>>
>>  * fully asynchronous - uses asyncio and async/await features of python
>> 3.5
>>  * websocket-level bindings are programmatically generated (indirectly)
>> from the
>>Juju golang code, ensuring full api coverage
>>  * provides an OO layer which encapsulates much of the websocket api and
>>provides familiar nouns and verbs (e.g. Model.deploy(),
>> Application.add_unit(),
>>etc.)
>>
>> Caveats:
>>
>>  * Juju 2+ only. Juju 1 support may be added in the future.
>>  * Requires Python 3.5+
>>  * Currently async-only. A synchronous wrapper will be provided in the
>> future.
>>
>> If you want to try it out, take a look at the examples/ directory.
>> https://github.com/juju/python-libjuju/blob/master/examples/unitrun.py
>> is a
>> fairly simple one that deploys a unit, runs a command on that unit, waits
>> for
>> and prints the results, then exits.
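
The async/await programming model the library adopts can be shown with a self-contained sketch that needs no Juju controller; `fake_deploy` is a made-up stand-in for the real Model.deploy(), and `asyncio.run` is used for brevity (it arrived after the 3.5-era API described above):

```python
# Self-contained sketch of the async/await style python-libjuju adopts.
# No controller involved: fake_deploy is a stand-in for Model.deploy().
import asyncio

async def fake_deploy(charm):
    await asyncio.sleep(0)        # yield to the event loop, as real I/O would
    return "%s/0" % charm         # pretend the first unit came up

async def main():
    unit = await fake_deploy("ubuntu")
    return unit

if __name__ == "__main__":
    print(asyncio.run(main()))    # prints ubuntu/0
```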
>>
>> Any and all comments, questions, and contributions are welcomed.
>>
>> Thanks,
>>
>> Tim
>>
>> --
>> Juju-dev mailing list
>> juju-...@lists.ubuntu.com
>> Modify settings or unsubscribe at: https://lists.ubuntu.com/
>> mailman/listinfo/juju-dev
>>
>>
>> --
>> Juju-dev mailing list
>> juju-...@lists.ubuntu.com
>> Modify settings or unsubscribe at: https://lists.ubuntu.com/
>> mailman/listinfo/juju-dev
>>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: [ANN] Updated Python Juju Client

2016-11-01 Thread Ryan Beisner
On Tue, Nov 1, 2016 at 9:30 AM, Marco Ceppi <marco.ce...@canonical.com>
wrote:

> This is really one of the goals. python-jujuclient, amulet, and even to an
> extent deployer all poorly implement an abstraction to Juju. These will all
> eventually fall away in favor of a consolidated, focused, well built Object
> Oriented Python library.
>
+1000 :-)


> Marco
>
> On Tue, Nov 1, 2016, 4:25 PM Ryan Beisner <ryan.beis...@canonical.com>
> wrote:
>
>> This is good stuff.  I think keeping it focused on Juju 2.0 and later,
>> completely free of legacy shims, is a good thing.  I'd love to be using
>> this natively instead of the collective stack of [Amulet + Juju-deployer +
>> python-jujuclient], and plan to take it for a spin.
>>
>> Cheers,
>>
>> Ryan
>>
>> On Tue, Nov 1, 2016 at 8:49 AM, Tim Van Steenburgh <
>> tim.van.steenbu...@canonical.com> wrote:
>>
>> Hi everyone,
>>
>> We've been working on a new python client for Juju. It's still in
>> development,
>> but we wanted to share the first bits to elicit feedback:
>> https://github.com/juju/python-libjuju
>>
>> Features of this library include:
>>
>>  * fully asynchronous - uses asyncio and async/await features of python
>> 3.5
>>  * websocket-level bindings are programmatically generated (indirectly)
>> from the
>>Juju golang code, ensuring full api coverage
>>  * provides an OO layer which encapsulates much of the websocket api and
>>provides familiar nouns and verbs (e.g. Model.deploy(),
>> Application.add_unit(),
>>etc.)
>>
>> Caveats:
>>
>>  * Juju 2+ only. Juju 1 support may be added in the future.
>>  * Requires Python 3.5+
>>  * Currently async-only. A synchronous wrapper will be provided in the
>> future.
>>
>> If you want to try it out, take a look at the examples/ directory.
>> https://github.com/juju/python-libjuju/blob/master/examples/unitrun.py
>> is a
>> fairly simple one that deploys a unit, runs a command on that unit, waits
>> for
>> and prints the results, then exits.
>>
>> Any and all comments, questions, and contributions are welcomed.
>>
>> Thanks,
>>
>> Tim
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at: https://lists.ubuntu.com/
>> mailman/listinfo/juju-dev
>>
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at: https://lists.ubuntu.com/
>> mailman/listinfo/juju-dev
>>
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: [ANN] Updated Python Juju Client

2016-11-01 Thread Ryan Beisner
This is good stuff.  I think keeping it focused on Juju 2.0 and later,
completely free of legacy shims, is a good thing.  I'd love to be using
this natively instead of the collective stack of [Amulet + Juju-deployer +
python-jujuclient], and plan to take it for a spin.

Cheers,

Ryan

On Tue, Nov 1, 2016 at 8:49 AM, Tim Van Steenburgh <
tim.van.steenbu...@canonical.com> wrote:

> Hi everyone,
>
> We've been working on a new python client for Juju. It's still in
> development,
> but we wanted to share the first bits to elicit feedback:
> https://github.com/juju/python-libjuju
>
> Features of this library include:
>
>  * fully asynchronous - uses asyncio and async/await features of python 3.5
>  * websocket-level bindings are programmatically generated (indirectly)
> from the
>Juju golang code, ensuring full api coverage
>  * provides an OO layer which encapsulates much of the websocket api and
>provides familiar nouns and verbs (e.g. Model.deploy(),
> Application.add_unit(),
>etc.)
>
> Caveats:
>
>  * Juju 2+ only. Juju 1 support may be added in the future.
>  * Requires Python 3.5+
>  * Currently async-only. A synchronous wrapper will be provided in the
> future.
>
> If you want to try it out, take a look at the examples/ directory.
> https://github.com/juju/python-libjuju/blob/master/examples/unitrun.py is
> a
> fairly simple one that deploys a unit, runs a command on that unit, waits
> for
> and prints the results, then exits.
>
> Any and all comments, questions, and contributions are welcomed.
>
> Thanks,
>
> Tim
>
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju-dev
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju
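The "currently async-only" caveat in the announcement above is easy to work around in scripts: any coroutine can be driven synchronously by running it to completion on a private event loop. The sketch below does not use libjuju's real API — `run_unit_command` is a hypothetical stand-in — but `run_sync` shows the general pattern a synchronous shim would follow.

```python
import asyncio

async def run_unit_command(unit_name, command):
    # Hypothetical stand-in for an async libjuju call such as Unit.run();
    # the real library would await a websocket round-trip here.
    await asyncio.sleep(0)  # yield control, as a network call would
    return "ran %r on %s" % (command, unit_name)

def run_sync(coro):
    # Minimal synchronous wrapper: drive a private event loop to completion.
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(coro)
    finally:
        loop.close()

print(run_sync(run_unit_command("ubuntu/0", "unit-get public-address")))
# prints: ran 'unit-get public-address' on ubuntu/0
```

A wrapper like this is what the promised synchronous layer would amount to; until it exists, callers can apply the same trick themselves.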


Re: lxd hook failed change-config

2016-10-21 Thread Ryan Beisner
Hi All - we need a LP bug for this, please.  The gerrit review will need to
have its commit message updated with the lp bug info so that CI can do bug
management on it.  Then, once it lands in the charm's master, we can
cherry-pick it for a stable backport to the stable charm.
Thanks,
Ryan

On Fri, Oct 21, 2016 at 9:54 AM, Chuck Short 
wrote:

> Hi,
>
> I proposed a fix:
>
> https://review.openstack.org/#/c/389740/
>
> chuck
>
> On Fri, Oct 21, 2016 at 10:27 AM, Adam Stokes 
> wrote:
>
>> This looks like it's due to the way we deploy OpenStack with NovaLXD in
>> all containers; this effectively breaks anyone wanting to do an all-in-one
>> install on their system.
>>
>> On Fri, Oct 21, 2016 at 10:22 AM Adam Stokes 
>> wrote:
>>
>>> So it looks like a recent change to the LXD charm, see here:
>>>
>>> https://github.com/openstack/charm-lxd/commit/017246768e097c
>>> 5fcd5283e23f19f075ff9f9d4e
>>>
>>> Chuck, are you aware of this issue?
>>>
>>> On Fri, Oct 21, 2016 at 10:19 AM Heather Lanigan 
>>> wrote:
>>>
>>> Adam,
>>>
>>> The entire container is not read-only.  Just /sys, the mount point for
>>> /dev/.lxc/sys.  I chose another charm (neutron-api) to look at; /sys on
>>> that unit is read-only as well.  Is that normal?
>>>
>>> What would be different in my config?  My Xenial install is on a VM, but
>>> I’ve been running that way for weeks.  I did have the openstack-novalxd
>>> bundle successfully deployed on it previously using juju 2.0_rc1.
>>>
>>> -Heather
>>>
>>> On Oct 20, 2016, at 11:30 PM, Adam Stokes 
>>> wrote:
>>>
>>> Odd it looks like the container has a read only file system? I ran
>>> through a full openstack-novalxd deployment today and one of the upstream
>>> maintainers ran through the same deployment and didn't run into any issues.
>>>
>>> On Thu, Oct 20, 2016, 10:02 PM Heather Lanigan 
>>> wrote:
>>>
>>>
>>> I used conjure-up to deploy openstack-novalxd on a Xenial system.
>>> Before deploying, the operating system was updated.  LXD init was setup
>>> with dir, not xfs.  All but one of the charms has a status of “unit is
>>> ready"
>>>
>>> The lxd/0 subordinate charm has a status of: hook failed:
>>> "config-changed”.  See details below.
>>>
>>> I can boot an instance within this OpenStack deployment.  However
>>> deleting the instance fails. A side effect of the lxd/0 issues?
>>>
>>> Juju version 2.0.0-xenial-amd64
>>> conjure-up version 2.0.2
>>> lxd charm version 2.0.5
>>>
>>> Any ideas?
>>>
>>> Thanks in advance,
>>> Heather
>>>
>>> ++
>>>
>>> The /var/log/juju/unit-lxd-0.log on the unit reports:
>>>
>>> 2016-10-21 01:09:33 INFO config-changed Traceback (most recent call
>>> last):
>>> 2016-10-21 01:09:33 INFO config-changed   File
>>> "/var/lib/juju/agents/unit-lxd-0/charm/hooks/config-changed", line 140,
>>> in 
>>> 2016-10-21 01:09:33 INFO config-changed main()
>>> 2016-10-21 01:09:33 INFO config-changed   File
>>> "/var/lib/juju/agents/unit-lxd-0/charm/hooks/config-changed", line 134,
>>> in main
>>> 2016-10-21 01:09:33 INFO config-changed hooks.execute(sys.argv)
>>> 2016-10-21 01:09:33 INFO config-changed   File
>>> "/var/lib/juju/agents/unit-lxd-0/charm/hooks/charmhelpers/core/hookenv.py",
>>> line 715, in execute
>>> 2016-10-21 01:09:33 INFO config-changed self._hooks[hook_name]()
>>> 2016-10-21 01:09:33 INFO config-changed   File
>>> "/var/lib/juju/agents/unit-lxd-0/charm/hooks/config-changed", line 78,
>>> in config_changed
>>> 2016-10-21 01:09:33 INFO config-changed configure_lxd_host()
>>> 2016-10-21 01:09:33 INFO config-changed   File
>>> "/var/lib/juju/agents/unit-lxd-0/charm/hooks/charmhelpers/core/decorators.py",
>>> line 40, in _retry_on_exception_inner_2
>>> 2016-10-21 01:09:33 INFO config-changed return f(*args, **kwargs)
>>> 2016-10-21 01:09:33 INFO config-changed   File
>>> "/var/lib/juju/agents/unit-lxd-0/charm/hooks/lxd_utils.py", line 429,
>>> in configure_lxd_host
>>> 2016-10-21 01:09:33 INFO config-changed with
>>> open(EXT4_USERNS_MOUNTS, 'w') as userns_mounts:
>>> 2016-10-21 01:09:33 INFO config-changed IOError: [Errno 30] Read-only
>>> file system: '/sys/module/ext4/parameters/userns_mounts'
>>> 2016-10-21 01:09:33 ERROR juju.worker.uniter.operation runhook.go:107
>>> hook "config-changed" failed: exit status 1
>>>
>>>
>>> root@juju-456efd-13:~# touch /sys/module/ext4/parameters/temp-file
>>> touch: cannot touch '/sys/module/ext4/parameters/temp-file': Read-only
>>> file system
>>> root@juju-456efd-13:~# df -h /sys/module/ext4/parameters/userns_mounts
>>> Filesystem  Size  Used Avail Use% Mounted on
>>> sys0 0 0- /dev/.lxc/sys
>>> root@juju-456efd-13:~# touch /home/ubuntu/temp-file
>>> root@juju-456efd-13:~# ls /home/ubuntu/temp-file
>>> /home/ubuntu/temp-file
>>> root@juju-456efd-13:~# df -h
>>> Filesystem   
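The traceback above comes down to an unguarded `open(..., 'w')` against a sysfs path that is read-only inside the container. This is not the charm's actual fix, just a sketch of the defensive shape such a write could take (the return convention and error handling are illustrative):

```python
import errno

EXT4_USERNS_MOUNTS = '/sys/module/ext4/parameters/userns_mounts'

def try_enable_userns_mounts(path=EXT4_USERNS_MOUNTS):
    """Try to enable ext4 userns mounts; return False instead of crashing
    the hook when the path is missing or sits on a read-only /sys (as it
    does inside a container)."""
    try:
        with open(path, 'w') as userns_mounts:
            userns_mounts.write('Y')
        return True
    except (IOError, OSError) as err:
        if err.errno in (errno.EROFS, errno.EACCES, errno.ENOENT):
            return False
        raise
```

A hook could then log a warning and skip host configuration when this returns False, rather than failing config-changed outright.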

Re: Feedback wanted: Changes to the Ubuntu Charm

2016-09-22 Thread Ryan Beisner
On Thu, Sep 22, 2016 at 8:47 AM, Cory Johns 
wrote:

> For clarity, I'd just like to note that https://jujucharms.com/ubuntu/8
> is the candidate revision, and you can deploy this on 1.25.6 (without
> --channel support) by being specific about the revision number:
>

Great.  How can I determine that rev is the current candidate?



>
> juju deploy cs:ubuntu-8
>
> On Fri, Sep 16, 2016 at 2:56 PM, Rick Harding 
> wrote:
>
>> Typically you'd be able to tell the source for charms set via
>>
>> charm show ubuntu homepage
>> charm show ubuntu bugs-url
>>
>> In this case they're both set to https://ubuntu.com so not helpful in
>> getting to the source.
>>
>
> I feel like this highlights the fact that there is some ambiguity in these
> fields.  Many charms set these (or at least homepage) to point to the
> upstream project.  Of course, ideally, the charm would be a part of its
> respective upstream project, but that may not be the case for various
> reasons and in those cases it would be helpful to differentiate the
> upstream project URL, and possibly upstream bug tracker, from the charm
> repo URL and bug tracker.
>
> On a related note, though, the homepage field is no longer displayed
> anywhere that I can find on jujucharms.com.  Why is it not included in
> the Contribute sidebar?
>

Appears to be a regression in the latest charm store.  I believe rick_h
raised a bug, but I'm not readily finding that.



>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Specify lxd container bridge

2016-09-19 Thread Ryan Beisner
Where can I get fresh RC builds?


On Mon, Sep 19, 2016 at 1:52 PM, Corey Bryant 
wrote:

> I just wanted to follow up on this thread to say I tested with a
> pre-release of juju rc1 and it fixed up the issues I was hitting.
>
> On Sat, Sep 17, 2016 at 9:04 AM, Dimiter Naydenov <
> dimiter.nayde...@canonical.com> wrote:
>
>> Hey Corey,
>>
>> That specific error I haven't seen at that stage - allocating container
>> addresses. Can you please paste the machine-0.log as well? Are you able
>> to consistently reproduce this or it's intermittent?
>>
>> Cheers,
>> Dimiter
>>
>> On 09/17/2016 12:17 AM, Corey Bryant wrote:
>> >
>> >
>> > On Thu, Sep 1, 2016 at 4:25 AM, Dimiter Naydenov wrote:
>> >
>> > Hello!
>> >
>> > When using juju 2.0 on maas 1.9 or 2.0, you should get lxd
>> containers
>> > provisioned with as many interfaces as their host machine has,
>> because
>> > we're creating bridges on all configured host interfaces at initial
>> boot
>> > (e.g. eth0 becomes br-eth0, ens4.250 - br-ens4.250 and so on).
>> Nothing
>> > needs configuring to get this behaviour, but there's a caveat:
>> >
>> > In order for the above to work, there's a limitation currently being
>> > addressed - all interfaces on the host machine in MAAS need to be
>> linked
>> > to a subnet and have an IP address configured - either as Static or
>> > Auto, but not DHCP or Unconfigured. Otherwise the process of
>> allocating
>> > addresses for the container (represented as a MAAS Device, visible
>> on
>> > the host node's details page in MAAS UI under Containers and VMs)
>> can
>> > fail half way through and Juju will instead fall back to the single-NIC
>> > LXD default profile, using lxdbr0 on a local subnet. You can
>> tell
>> > whether this happened, because there will be a WARNING in
>> > /var/log/juju/machine-0.log on the bootstrap machine, like: `failed
>> to
>> > prepare container "0/lxd/0" network config: ...` describing the
>> > underlying error encountered.
>> >
>> > Please note, the above limitation will be gone very soon - likely
>> > beta18, not beta17 scheduled for release this week. In that upcoming
>> > beta, unlinked or unconfigured host machine interfaces won't
>> prevent the
>> > multi-NIC container provisioning and address allocation - Juju will
>> just
>> > allocate addresses where it can, leaving the rest unconfigured, and
>> not
>> > falling back to using LXD default profile's lxdbr0.
>> >
>> > HTH,
>> > Dimiter
>> >
>> >
>> > Hey Dimiter,
>> >
>> > I'm hitting the same issue.  I have all the interfaces linked to subnets
>> > with auto but I still get the 'failed to prepare container "0/lxd/0"'
>> > error message saying 'connection is shut down'.  The containers are
>> > still using lxdbr0 (see http://paste.ubuntu.com/23188824/
>> > ).  The containers show up on the
>> > nodes page with juju*-lxd-*.maas names.  Do you have any other tips for
>> > getting past this?
>> >
>> > Thanks,
>> > Corey
>>
>>
>> --
>> Dimiter Naydenov 
>> Juju Core Sapphire team 
>>
>>
>
>
> --
> Regards,
> Corey
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Feedback wanted: Changes to the Ubuntu Charm

2016-09-18 Thread Ryan Beisner
On Fri, Sep 16, 2016 at 2:00 PM, Marco Ceppi <marco.ce...@canonical.com>
wrote:

> I am the upstream (for this charm) and this is an entire rework of the
> charm including a repo change.
>

Ok, thanks for the info.  What will become of the old repos?:

https://code.launchpad.net/~charmers/charms/trusty/ubuntu/trunk
lp:charms/trusty/ubuntu

https://code.launchpad.net/~charmers/charms/precise/ubuntu/trunk
lp:charms/ubuntu




> Future MPs will be against this repo: https://github.com/marcoceppi/charm-ubuntu
> The readme is up to date: http://jujucharms.com/u/marcoceppi/ubuntu/1
>
> I'll have the repo fixed tomorrow.
>
>
> Marco
>
> On Fri, Sep 16, 2016, 1:36 PM Ryan Beisner <ryan.beis...@canonical.com>
> wrote:
>
>> Was there a merge proposal or pull request for these changes in the
>> charm's upstream repo?  If not, is there a branch that can be proposed
>> against the charm repo?  Or, is there a new upstream repo for the charm?
>>
>> The candidate charm in the store is helpful to validate functionality,
>> but as a contributor to the existing charm, I simply cannot determine where
>> I might base future changes, or rebase existing works in progress.
>>
>> -1 to this moving forward, from an upstream charm code contributor
>> perspective, pending clarification/completion of the upstream repo and
>> review.
>>
>> Setting aside those issues of principle, I've confirmed that it does work
>> with 1.25.6: Precise, Trusty, Xenial units.  Having this in place will be a
>> nice touch.
>>
>> Cheers,
>>
>> Ryan
>>
>>
>> On Wed, Sep 14, 2016 at 4:24 PM, Tim Penhey <tim.pen...@canonical.com>
>> wrote:
>>
>>> Marco,
>>>
>>> This is awesome. I use the ubuntu charm all the time for testing, and
>>> seeing the workload version and workload status being set is pretty cool.
>>>
>>> I had hoped that seeing the "unknown" status would apply gentle pressure
>>> to get people to set a workload status.
>>>
>>> Winning!!!
>>>
>>> Tim
>>>
>>> On 15/09/16 08:39, Marco Ceppi wrote:
>>>
>>>> Hi Ryan,
>>>>
>>>> I have granted everyone access to the candidate channel. Could you try
>>>> again?
>>>>
>>>> Thanks,
>>>> Marco Ceppi
>>>>
>>>> On Wed, Sep 14, 2016 at 3:26 PM Ryan Beisner <
>>>> ryan.beis...@canonical.com
>>>> <mailto:ryan.beis...@canonical.com>> wrote:
>>>>
>>>> Is there a merge proposal or pull request for the changes?  I'd like
>>>> to validate with 1.25.6 as the current stable release, but --channel
>>>> isn't a thing there.
>>>>
>>>> I tried to `charm pull ubuntu --channel candidate` but received:
>>>>  ERROR cannot get archive: unauthorized: access denied.
>>>>
>>>> Thanks,
>>>>
>>>> Ryan
>>>>
>>>>
>>>> On Wed, Sep 14, 2016 at 8:57 AM, Marco Ceppi
>>>> <marco.ce...@canonical.com <mailto:marco.ce...@canonical.com>>
>>>> wrote:
>>>>
>>>> Hey everyone,
>>>>
>>>> Normally, I wouldn't bother with an update like this, but it's
>>>> slightly larger than I'd care to just push out. Today, the
>>>> Ubuntu charm is a no-op, which is largely the goal of the charm.
>>>> However, as Juju becomes richer, this no-op charm starts to
>>>> look incomplete. I know a few people depend on the Ubuntu charm
>>>> for setup purposes and testing. I'd hate to be the source of
>>>> breakage for that charm so I'm announcing an update here.
>>>>
>>>> Screenshot from 2016-09-14 09-48-31.png
>>>>
>>>> Other than the obvious changes to status, this also implements
>>>> workload version.
>>>>
>>>> If you depend on the Ubuntu charm for anything I urge you to
>>>> test the latest version with
>>>>
>>>> `juju deploy ubuntu --channel candidate`
>>>>
>>>> If I don't receive any negative feedback by the end of this week
>>>> I'll move what's in candidate to stable.
>>>>
>>>> Thanks,
>>>> Marco Ceppi
>>>>
>>>> --
>>>> Juju mailing list
>>>> Juju@lists.ubuntu.com <mailto:Juju@lists.ubuntu.com>
>>>> Modify settings or unsubscribe at:
>>>> https://lists.ubuntu.com/mailman/listinfo/juju
>>>>
>>>>
>>>>
>>>>
>>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: LXD instances fail to start

2016-09-14 Thread Ryan Beisner
In one case yesterday, with a full openstack-on-lxd deployed and in use, I
quickly hit the too-many-open-files issue.

I raised fs.inotify.max_user_instances on the host to 50 which
unblocked me for a while.  I ended up raising both to 99 and have had
smooth sailing since.  Currently, the host shows 676063 files open.  That
is way above the default values of 128, respectively.  Bear in mind that
these are workarounds/observations to make-it-work(tm) and might not be the
best thing to do.  Also curious if there is more official input to this
issue.
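For reference, a persistent way to carry such a workaround across reboots is a sysctl drop-in file. The values below are placeholders rather than a recommendation (the exact limits quoted in this thread appear garbled in the archive); tune them to the host, since the stock defaults are small relative to a many-container LXD host:

```
# /etc/sysctl.d/99-lxd-inotify.conf (illustrative values only)
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches   = 1048576
```

Apply without a reboot via `sudo sysctl --system`.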

Cheers,

Ryan



On Wed, Sep 14, 2016 at 12:15 AM, James Beedy  wrote:

> For those who have been following the lxd issue that we've been digressing
> on at the charmer summit, see https://bugs.launchpad.net/juju-core/+bug/1602192
>
> 7/22-now no activity
>
> It looks like the bug already has eyes on it, but has been idle for a
> while now. It would be nice to get this thing fixed/resolved to some
> extent. Can we get some heat on this?!?
>
> Thanks
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju-dev
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Feedback wanted: Changes to the Ubuntu Charm

2016-09-14 Thread Ryan Beisner
Is there a merge proposal or pull request for the changes?  I'd like to
validate with 1.25.6 as the current stable release, but --channel isn't a
thing there.

I tried to `charm pull ubuntu --channel candidate` but received:  ERROR
cannot get archive: unauthorized: access denied.

Thanks,

Ryan


On Wed, Sep 14, 2016 at 8:57 AM, Marco Ceppi 
wrote:

> Hey everyone,
>
> Normally, I wouldn't bother with an update like this, but it's slightly
> larger than I'd care to just push out. Today, the Ubuntu charm is a no-op,
> which is largely the goal of the charm. However, as Juju becomes richer,
> this no-op charm starts to look incomplete. I know a few people depend on
> the Ubuntu charm for setup purposes and testing. I'd hate to be the source
> of breakage for that charm so I'm announcing an update here.
>
> [image: Screenshot from 2016-09-14 09-48-31.png]
>
> Other than the obvious changes to status, this also implements workload
> version.
>
> If you depend on the Ubuntu charm for anything I urge you to test the
> latest version with
>
> `juju deploy ubuntu --channel candidate`
>
> If I don't receive any negative feedback by the end of this week I'll move
> what's in candidate to stable.
>
> Thanks,
> Marco Ceppi
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Migration of jenkins-charm to upstream repo

2016-08-26 Thread Ryan Beisner
Hi All,

With the Jenkins (master) charm moved into upstream [1], we need to do some
cleanup on the old code/bug spaces [2].

My thought is that we should triage and move any valid LP bugs over to the
upstream issue tracker [3] and retire the old code repo [4] to avoid
confusion in development going forward.

[1]  https://github.com/jenkinsci/jenkins-charm
[2]  https://code.launchpad.net/~charmers/charms/trusty/jenkins/trunk/
[3]  https://github.com/jenkinsci/jenkins-charm/issues
[4]  lp:charms/trusty/jenkins

Any takers?  Additional thoughts on the scope/approach needed here?

Cheers,

Ryan



On Fri, Feb 19, 2016 at 2:27 PM, Jorge O. Castro  wrote:

> Hi everyone,
>
> Just a quick note that the Jenkins charm is now at https://github.com/jenkinsci/jenkins-charm
>
> Thanks to the Jenkins folks who worked with us to make this happen!
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Followup on 16.07 OpenStack charm release

2016-08-02 Thread Ryan Beisner
On Tue, Aug 2, 2016 at 12:32 PM, Daniel Bidwell  wrote:

> Can I install OpenStack Mitaka on 16.04 with juju-1.25.6 and maas-
> 1.9.3?


Yes.  :)  A basic example is available at:
https://jujucharms.com/u/openstack-charmers/openstack-base



> Does this support using lxd containers for the OpenStack
> services?
>

Note that with Juju 1.25.x, the container placement nomenclature is lxc.
The above example deploys the control plane into lxc units, and storage +
compute onto metal.
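For the 1.25.x syntax, container placement looks like the following sketch; the charm URLs and machine numbers are illustrative, not taken from the openstack-base bundle:

```
# Juju 1.25-era placement (illustrative):
#   juju deploy keystone --to lxc:0
# or, in a bundle:
services:
  keystone:
    charm: cs:trusty/keystone
    num_units: 1
    to: ["lxc:0"]      # control plane into an lxc container on machine 0
  nova-compute:
    charm: cs:trusty/nova-compute
    num_units: 1
    to: ["0"]          # compute onto the metal
```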



>
> On Mon, 2016-08-01 at 12:02 +, Rick Harding wrote:
> > You can get juju 1 from the default repositories on xenial with the
> > package juju-1.25. Right now 1.25.6 is working its way into xenial to
> > update that. That'll give you a juju-1 command that can live
> > alongside juju (which is juju 2.0).
> >
> > On Mon, Aug 1, 2016 at 4:14 AM Adam Collard wrote:
> > > On Mon, 1 Aug 2016 at 02:43 Daniel Bidwell 
> > > wrote:
> > > > I am building a pair of maas servers, one for test and one for
> > > > prod in
> > > > lxd containers on a single host.  The containers are
> > > > ubuntu/xenial/amd64.  Trying to follow David Ames post on Friday
> > > > to use
> > > > juju-1.25.6 and maas-1.9.3.  What I got from the default
> > > > repositories
> > > > was juju 1.25.5 and maas-2.0.0-rc2. While I would like to bring a
> > > > production OpenStack up on Maas-2.0 and juju-2.0, I don't know if
> > > > I can
> > > > wait any longer.
> > > >
> > > > What ppa's do I need to use with xenial to get juju-1.25.6
> > > https://launchpad.net/~juju/+archive/ubuntu/stable
> > >
> > > > and maas-1.9.3?  Or does it need to be a trusty server?
> > > For MAAS 1.9.x, yes, you need a release before Xenial (Trusty being
> > > the best bet).
> > >
> > > https://launchpad.net/~maas/+archive/ubuntu/stable
> > >
> > > > --
> > > > Daniel Bidwell 
> > > >
> > > >
> > > > --
> > > > Juju mailing list
> > > > Juju@lists.ubuntu.com
> > > > Modify settings or unsubscribe at: https://lists.ubuntu.com/mailm
> > > > an/listinfo/juju
> > > >
> > > --
> > > Juju mailing list
> > > Juju@lists.ubuntu.com
> > > Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman
> > > /listinfo/juju
> > >
> --
> Daniel Bidwell 
>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Question for cosmetic and understandability of breadcrumbs in github

2016-06-17 Thread Ryan Beisner
On Fri, Jun 17, 2016 at 2:10 AM, Jay Wren  wrote:

> If it helps, the charm set command supports arbitrary key values to be
> stored in extra-info in the charm store.
>
> e.g.
>
> $ charm set cs:~evarlast/trusty/parse-server-0 layer-x=rev1 layer-y=rev2
>
> Then this will be displayed in extrainfo:
>
> $ charm show cs:~containers/trusty/swarm extra-info
> extra-info:
>   layer-x: rev1
>   layer-y: rev2
>
> After pushing, you could use set to save the revisions of what was used to
> build the charm.
>

Thanks, Jay.  There is some potential in that.  I think it'd be worth
discussing some k:v norms around this that we can all gather around.  Then,
perhaps more importantly, how that info will persist in the places where it
needs to be observed, such as:  charm store ui, a pulled charm, and the
resulting deployments.



>
> --
> Jay
>
> On Thu, Jun 16, 2016 at 8:51 PM, Charles Butler <
> charles.but...@canonical.com> wrote:
>
>> Greetings,
>>
>> I deposit many of my layers in GitHub, and one of the things  I've been
>> striving to do is keep tag releases at the revisions i cut a charm release
>> for a given channel. As we know, the default channel is seen by no-one, and
>> runs in increments of n+1.
>>
>> My prior projects i've been following semver for releases, but that has
>> *nothing* in terms of a breadcrumb trail back to the store.
>>
>> Would it be seen as good practice to tag releases - on the top most layer
>> of a charm - with what charm release its coordinated with?
>>
>> Given the scenario that i'm ready to release swarm, and lets assume that
>> to date i haven't tagged any releases in the layer repository:
>>
>> charm show cs:~containers/trusty/swarm revision-info
>> revision-info:
>>   Revisions:
>>   - cs:~containers/trusty/swarm-2
>>   - cs:~containers/trusty/swarm-1
>>   - cs:~containers/trusty/swarm-0
>>
>> I see that i'm ready to push swarm-3 to the store:
>>
>> git tag 3
>> git push origin --tags
>>
>> I can now correlate the source revision to what i've put in my account on
>> the store, but this does not account for promulgation (which has an
>> orthogonal revision history), and mis-match of those id's.
>>
>> I think this can simply be documented that tags track
>> <>/<> pushes, and to correlate source with release, to use
>> the method shown above to fetch release info.
>>
>> Does this sound useful/helpful or am I being pedantic? (I say this
>> because Kubernetes touches ~ 7 layers, and it gets confusing keeping
>> everything up to date locally while testing, and then again re-testing with
>> --no-local-layers to ensure our repositories are caught up with current
>> development work. (Cant count the number of open pull requests hanging
>> waiting for review because we've moved to the next hot-ticket item)
>>
>> All the best,
>>
>> Charles
>> --
>> Juju Charmer
>> Canonical Group Ltd.
>> Ubuntu - Linux for human beings | www.ubuntu.com
>> Juju - The fastest way to model your service | www.jujucharms.com
>>
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju
>>
>>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Question for cosmetic and understandability of breadcrumbs in github

2016-06-16 Thread Ryan Beisner
Git tagging is certainly helpful to developers, so a big +1 to that for
git-based projects.

I think there is also a definite need to tie a specific charm store charm
revision to its exact place and "time" of origin.  Charms which are pushed
into the charm store (and the deployed charms) are a bit mysterious in
terms of source code origin and revision level.  The two relevant `charm
set` tagging mechanisms (homepage and bugs-url) are helpful, and we do
populate that info.  But to get the granularity of identification for our
desired level of support, some other "thing" is necessary.

This week, we started injecting repo-info files [1] into OpenStack charms
[2] just ahead of the push and publish automation in our git/gerrit CI
pipeline.  The repo-info file will only ever exist in pushed charms, never
in the charm source code repos.  From the user and support perspective,
this ensures that the code origin and revision info is in the user's
possession at all times, and that it flows through and exists on-disk in
all units in the deployed models.  Many moons later, if a bug is raised or
a support call comes in, there will be a way to know exactly what charm
code is in play, down to the repo and commit hash level.
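As a sketch of that injection step (the field names here are assumptions on my part; the real file linked at [1] is authoritative), the CI pipeline might do something like the following just before pushing:

```shell
# Hypothetical sketch: generate a repo-info file just before `charm push`.
# The exact fields are illustrative, not the real repo-info format.
git init -q sketch-repo && cd sketch-repo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"
{
  echo "commit: $(git rev-parse HEAD)"
  echo "branch: $(git rev-parse --abbrev-ref HEAD)"
  echo "remote: $(git config --get remote.origin.url || echo none)"
  echo "generated: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
} > repo-info
cat repo-info
```

Because the file is created at push time, it exists only in the pushed charm, never in the source repo itself.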

Combining both a git tag breadcrumb and a repo-info breadcrumb would really
squash the code origin mystery from any vantage point.  I'm not certain we
can tag commits via the upstream OpenStack cgit:gerrit workflow, but we'll
look into it.

On another plane:  I've seen some charm metadata buckets which appear to
accept arbitrary data bits.  It may be useful to also stash repo/commit
type of info there, but I've not explored that much at all.

[1]
https://api.jujucharms.com/charmstore/v5/~openstack-charmers-next/xenial/neutron-gateway/archive/repo-info
[2] https://jujucharms.com/u/openstack-charmers-next/neutron-gateway/xenial

Cheers!

Ryan


On Thu, Jun 16, 2016 at 4:42 PM, David Britton 
wrote:

> We are planning a tag for every push for the landscape-server and client
> charms, and bundles.  +1 on it being mentioned as a best practice (the same
> type of thing as when you release a version of any other software!).
> Though, I would recommend using the full charm store identifier eg:
> 'cs:~user/charm-3'.  Basically, the full standard out of the charm push
> operation.
>
> I also like the repo-info for traceability the other way around.  They
> solve a similar problem but depending on where you are looking are both
> useful.
>
> On Thu, Jun 16, 2016 at 2:54 PM, Merlijn Sebrechts <
> merlijn.sebrec...@gmail.com> wrote:
>
>> Yep, seems like something useful!
>>
>> 2016-06-16 22:48 GMT+02:00 Charles Butler :
>>
>>> I was actually talking to beisner about this on IRC, and the OpenStack
>>> folks are putting a report in their artifacts with the repository
>>> information.
>>>
>>>
>>> https://api.jujucharms.com/charmstore/v5/~openstack-charmers-next/xenial/neutron-gateway/archive/repo-info
>>>
>>> I think I like this better.
>>>
>>> We are generating a manifest of the other layers but I'm not certain we
>>> are storing any commit hash info in that manifest. I don't think we are.
>>>
>>> But it would give me a nice trail to follow.
>>>
>>> On Thu, Jun 16, 2016 at 4:42 PM Merlijn Sebrechts <
>>> merlijn.sebrec...@gmail.com> wrote:
>>>
 Well, Charles, I must admit that I'm a bit lost. There's some lingo in
 this email I don't quite understand, and it's quite late on my side of the
 globe ;)

 What I understand you want: You have a Github repo that contains the
 top layer of a Charm. Each tag in that repo corresponds to the revision of
 the charm built from that layer. Is this correct?

 This would allow you to see what Charm corresponds to what layer
 version.

 I don't quite understand how this would solve your Kubernetes problem.
 Don't you want this information about every layer instead of just the top
 one? Is this something 'charm build' would be able to do automatically? It
 gets the layers from a repo so it might be able to put that info
 (repo/commit) in a log somewhere?

 2016-06-16 19:51 GMT+02:00 Charles Butler :

> Greetings,
>
> I deposit many of my layers in GitHub, and one of the things I've
> been striving to do is tag releases at the revisions where I cut a charm
> release for a given channel. As we know, the default channel is seen by
> no one, and runs in increments of n+1.
>
> In my prior projects I've been following semver for releases, but that
> has *nothing* in terms of a breadcrumb trail back to the store.
>
> Would it be seen as good practice to tag releases - on the topmost
> layer of a charm - with the charm release it's coordinated with?
>
> Given the scenario that I'm ready to release swarm, and let's assume
> that to date I haven't tagged any releases in 

Re: Integration with OpenStack app catalog

2016-06-13 Thread Ryan Beisner
Hi Merlijn,

That's a great question.  It is something that we've followed.  Take note
that the headline in the OpenStack Apps Catalog is:
*"The OpenStack Application Catalog will help you make applications
available on your cloud."*

But, I think it should more accurately state:
*"The OpenStack Application Catalog will help you make applications
available on **OpenStack clouds**."  (full stop)*

Juju Charms make applications available on all/most major public clouds AND
OpenStack clouds ... and on bare metal, and in containers on your dev
laptop, and who knows what next.

Write one charm, deploy and manage an application in a whole load [1] of
places. :)

[1] https://jujucharms.com/docs/devel/clouds

Cheers,

Ryan



On Mon, Jun 13, 2016 at 7:51 AM, Rick Harding 
wrote:

> Thanks Merlijn, I'd not come across this yet. I think the best summary is
> here:
>
> https://wiki.openstack.org/wiki/App-Catalog#Horizon_Plugin_for_Native_Access
>
> It looks like it's the start of a hub for images, heat templates, and
> murano packages. At this time there's no integration with Juju. It's
> interesting in that it seems like it's a hub of data that you can pull down
> into your openstack. For instance, you can grab a glance image and copy it
> to your glance repository.
>
> My first thought is that it'd be great for someone to write up a way to
> view/search charmstore content from within the app catalog plugin, but
> there's not a local place to pull those charms down into in the current
> model of that catalog. We'd prefer something more along the lines of
> finding and taking that straight into a deployment. It's kind of an oddball
> fit for how we normally think and work, adding some extra
> steps/tools.
>
> I'll bring this up and we'll talk through other ideas.
>
> On Mon, Jun 13, 2016 at 7:53 AM Merlijn Sebrechts <
> merlijn.sebrec...@gmail.com> wrote:
>
>> Hi all
>>
>>
>> Just found out about the OpenStack app catalog:
>> http://apps.openstack.org/#
>>
>> What do you guys think about this? Is this something that might become a
>> competitor for Juju? Are there any plans for integrating Juju into this?
>>
>>
>>
>> Kind regards
>> Merlijn
>


Re: Introducing a Daily PPA for Juju

2016-05-12 Thread Ryan Beisner
Absolutely <3 this.

On Thu, May 12, 2016 at 3:36 PM, Nicholas Skaggs <
nicholas.ska...@canonical.com> wrote:

> Whether you want to track Juju development more closely, or simply like
> living on the edge, there is a new ppa available for you. Dubbed the Juju
> Daily ppa[1], it contains the latest blessed builds from CI testing.
> Installing this ppa and upgrading regularly allows you to stay in sync with
> the absolute latest version of Juju that passes our CI testing. New
> packages are copied as soon as a blessed build is complete. This can occur
> multiple times a day should every build pass, or it may be several days
> between builds should new revisions contain failures.
>
>
> The Juju QA team don’t recommend running this ppa for production or
> critical system usage, but we are happy to hear about bugs[2] you may
> encounter while running these versions of Juju. To add the ppa, you will
> need to add ppa:juju/daily to your software sources.
>
>
> sudo add-apt-repository ppa:juju/daily
>
>
> Do be aware that adding this ppa will upgrade any version of Juju you may
> have installed. Also note this ppa contains builds without published
> streams, so you will need to generate or acquire streams on your own. For
> most users, this means you should pass --upload-tools during the bootstrap
> process. However you may also pass the agent-metadata-url and agent-stream
> as config options. See the ppa description and simplestreams documentation
> for more details[3].
>
>
> Finally, should you wish to revert to a stable version of Juju, you can
> use the ppa-purge tool[4] to remove the daily ppa and the installed version
> of Juju.
>
>
> We hope this proves useful to you; feedback is welcome!
>
>
>
> Nicholas
>
> 1.
>
>https://launchpad.net/~Juju/+archive/ubuntu/daily
>
>
> 2.
>
>https://bugs.launchpad.net/juju-core/+filebug
>
> 3.
>
>https://github.com/juju/juju/blob/master/doc/simplestreams-metadata.txt
>
> 4.
>
>http://askubuntu.com/a/310
>
>


Re: keystone deploy issue

2016-04-19 Thread Ryan Beisner
Discussed on IRC. TLDR for the list:

The package SRU landed after the keystone charm was updated (good) but
before the openstack-base and openstack-telemetry bundles were updated (the
gap Ed hit).  I've put up the necessary bundle changes and linked them to
the bug.  We'll also adjust our charm SRU process to guard against that
sort of gap in the future.

Many thanks for raising the bug and for the discussion.

Ryan



On Tue, Apr 19, 2016 at 9:25 PM, Edward Bond <celpa.f...@gmail.com> wrote:

> Beisner,
>
> I saw the update and don't quite understand.
>
> Trying the bundle from the store doesn't work. Brand new install.
>
> Do I need to do something on my end differently or just wait for it to get
> fixed ?
>
> Thanks!
> On Apr 19, 2016 9:19 PM, "Ryan Beisner" <ryan.beis...@canonical.com>
> wrote:
>
>> Hi Ed,
>>
>> Bug updated @
>> https://bugs.launchpad.net/charms/+source/keystone/+bug/1572358.
>>
>> TLDR:  The SRU for
>> https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1559935 necessitates
>> a charm upgrade before upgrading the keystone package to 8.1.0, and/or
>> before redeploying the keystone charm for Trusty-Liberty or Wily-Liberty.
>>
>> Cheers,
>>
>> --
>> Ryan Beisner
>> QA Engineer, Ubuntu OpenStack Engineering, Canonical, Ltd.
>> irc:beisner  gh/gerrit:ryan-beisner  lp:~1chb1n
>>
>>
>> On Tue, Apr 19, 2016 at 8:02 PM, ed bond <celpa.f...@gmail.com> wrote:
>>
>>> Anyone know why I might be getting this charm issue:
>>>
>>> FATAL ERROR: Could not determine OpenStack codename for version 8.1
>>>
>>> I get this in keystone.
>>> https://bugs.launchpad.net/charms/+source/keystone/+bug/1572358
>>>
>>


Re: keystone deploy issue

2016-04-19 Thread Ryan Beisner
Hi Ed,

Bug updated @
https://bugs.launchpad.net/charms/+source/keystone/+bug/1572358.

TLDR:  The SRU for
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1559935 necessitates a
charm upgrade before upgrading the keystone package to 8.1.0, and/or before
redeploying the keystone charm for Trusty-Liberty or Wily-Liberty.

Cheers,

-- 
Ryan Beisner
QA Engineer, Ubuntu OpenStack Engineering, Canonical, Ltd.
irc:beisner  gh/gerrit:ryan-beisner  lp:~1chb1n


On Tue, Apr 19, 2016 at 8:02 PM, ed bond <celpa.f...@gmail.com> wrote:

> Anyone know why I might be getting this charm issue:
>
> FATAL ERROR: Could not determine OpenStack codename for version 8.1
>
> I get this in keystone.
> https://bugs.launchpad.net/charms/+source/keystone/+bug/1572358
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: New feature for charmers - min-juju-version

2016-03-30 Thread Ryan Beisner
On Wed, Mar 30, 2016 at 1:30 AM, Martin Packman <
martin.pack...@canonical.com> wrote:

> On 23/03/2016, Ryan Beisner <ryan.beis...@canonical.com> wrote:
> >
> > To summarize:
> > If we do nothing with regard to juju 1.25.x or the various tools, and if
> a
> > relevant charm grows a series list in metadata, a load of existing
> > validation and demo bundles will no longer be deployable with 1.25.x
> > because `juju deploy` on 1.25.x traces when receiving a list type as a
> > metadata series value.  These types of bundles often point at gh: and lp:
> > charm dev/feature branches, so the charm series metadata is as it is in
> the
> > dev charm's trunk (unmodified by the charm store).
>
> I've landed a change on the 1.25 branch that accepts a list of series
> in charm metadata and takes the first value as the default series.
>
> <https://bugs.launchpad.net/juju-core/+bug/1563607>
>
> Charm authors will still need to take that kind of multi-series charm
> and place it in named-series-directories under $JUJU_REPOSITORY to
> deploy with 1.X, but I think this is fine for your use case?
>

Indeed, that directly addresses the issue and opens us up for cross-1.x-2.x
charm dev and testing.  Many thanks!



>
> Martin
>


Re: New feature for charmers - min-juju-version

2016-03-23 Thread Ryan Beisner
On Wed, Mar 23, 2016 at 1:03 PM, roger peppe 
wrote:

> On 23 March 2016 at 15:06, David Ames  wrote:
> > On 03/21/2016 06:54 PM, Stuart Bishop wrote:
> >>
> >> On 22 March 2016 at 11:42, Rick Harding 
> >> wrote:
> >>>
> >>> I believe that went out and is ok Stuart. The charmstore update is
> >>> deployed
> >>> and when you upload a multi-series charm to the charmstore it creates
> >>> separate charms that work on older clients. If you hit issues with that
> >>> please let me know.
> >>
> >>
> >> It's only fixed for charms served from the charm store. CI systems and
> >> the like test branches, for example to ensure tests pass before uploading a
> >> release to the charm store. I suspect this is exactly what Ryan needs
> >> to do and why I mentioned the open bug. Unless 1.25 is updated to
> >> handle the different data types, CI systems will need to work around
> >> the issue by either roundtripping through the charm store (in a
> >> personal namespace to avoid mid air collisions) or munging
> >> metadata.yaml.
>
> ISTM that munging metadata.yaml could be a reasonable way
> to go here. It's not too hard. Here's a program you could use:
> http://play.golang.org/p/xl7yArJhtT
>
>
Indeed that is an option, and that approach would require additional work
on other tooling to enable metadata munging for 1.25.x use, such as
bundletester, amulet, mojo, and juju-deployer.

Whereas, if it is addressed in `juju deploy` for 1.25.x, then it's
addressed in one place for every scenario that I've observed.
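For reference, the munging workaround (collapsing a multi-entry `series:` list to its first value, as the Go program linked above does) can also be sketched in shell. This is my own naive illustration, not the actual tooling, and it assumes a simple block-style list:

```shell
# Naive sketch: rewrite a block-style `series:` list in metadata.yaml
# down to a single scalar (its first entry), for 1.25.x consumption.
cat > metadata.yaml <<'EOF'
name: demo
series:
  - xenial
  - trusty
summary: example charm
EOF
first=$(awk '/^series:/ {grab=1; next} grab && /^  - / {print $2; exit}' metadata.yaml)
awk -v s="$first" '
  /^series:/      {print "series: " s; skip=1; next}
  skip && /^  - / {next}
  {skip=0; print}
' metadata.yaml > metadata.yaml.new && mv metadata.yaml.new metadata.yaml
cat metadata.yaml
```

A real CI job would of course run this in a personal-namespace copy of the charm rather than mutating the source tree.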

To summarize:
If we do nothing with regard to juju 1.25.x or the various tools, and if a
relevant charm grows a series list in metadata, a load of existing
validation and demo bundles will no longer be deployable with 1.25.x
because `juju deploy` on 1.25.x traces when receiving a list type as a
metadata series value.  These types of bundles often point at gh: and lp:
charm dev/feature branches, so the charm series metadata is as it is in the
dev charm's trunk (unmodified by the charm store).






Re: New feature for charmers - min-juju-version

2016-03-21 Thread Ryan Beisner
On Mon, Mar 21, 2016 at 9:18 AM, Rick Harding <rick.hard...@canonical.com>
wrote:

> Checked with the team and older clients don't identify themselves to the
> charmstore so we can't tell 1.24 from 1.25. So yes, we should only take
> advantage of this with 2.0 and greater. I'll check to make sure we're able
> to do this type of thing going forward though. It's something that would
> have been nicer if we had that version info all the time.
>

Thanks, Rick.  Indeed that will be useful going forward.

So, will 1.25.x clients be able to deploy charms which possess
min-juju-version and gracefully ignore that metadata?   Or, will they be
refused a charm by the store because the store knows the client version is
less than 2.0?
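For context, the field under discussion lives in metadata.yaml alongside the series list. A hypothetical fragment (the values here are illustrative only, not taken from an actual charm) might look like:

```shell
# Hypothetical metadata.yaml fragment combining multi-series support
# with the new min-juju-version field (values are illustrative only).
cat > metadata.yaml <<'EOF'
name: keystone
summary: OpenStack identity service
series:
  - precise
  - trusty
  - wily
  - xenial
min-juju-version: 1.25.0
EOF
cat metadata.yaml
```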



>
>
> On Mon, Mar 21, 2016 at 10:08 AM Rick Harding <rick.hard...@canonical.com>
> wrote:
>
>> Thanks Ryan, good point. I'll check with the team. I think, at least in
>> my mind, we were very focused on 2.0 feature set, such as resources, and so
>> anything that needed 2.0 would be in the new world order. Your desire to
>> actually reach out into the past and implement this via the charmstore for
>> 1.25 is interesting and we'll have to see if clients passed enough info in
>> the past to be able to do that intelligently.
>>
>>
>>
>> On Mon, Mar 21, 2016 at 10:06 AM Ryan Beisner <ryan.beis...@canonical.com>
>> wrote:
>>
>>> On Sun, Mar 20, 2016 at 3:21 PM, roger peppe <roger.pe...@canonical.com>
>>> wrote:
>>>
>>>> If the released Juju 2.0 uses v5 of the charmstore API (which it will
>>>> soon hopefully anyway when my branch to support the new publishing
>>>> model lands), then there's a straightforward solution here, I think:
>>>> change v4 of the charmstore API to refuse to serve min-juju-version
>>>> charm archives to clients. Since the only v4-using clients should be
>>>> old juju versions, this could provide a reasonably cheap to implement
>>>> and straightforward solution to the problem.
>>>>
>>>
>>> To re-confirm:  Would that be *"don't serve up a charm for 1.25.x
>>> clients when min-juju-version is defined at all"* -or- *"cleverly
>>> interpret the min-juju-version server-side and selectively refuse to
>>> deliver the charm when client version is less than the min version?"*
>>>
>>> If the former, OpenStack charms may have to defer utilization of
>>> min-juju-version until such time as 1.x is fully deprecated (or fork 27+
>>> charms and maintain two separate sets of charms, which is naturally not our
>>> desire).
>>>
>>> If the latter, brilliant!  :)
>>>
>>> Rationale and use case:
>>> A single Keystone charm supports deployment (thereby enabling continued
>>> CI & testing) of Precise, Trusty, Wily, Xenial and onward.  It is planned
>>> to have a min-juju-version value of 1.25.x.  That charm will support >=
>>> 1.25.x, including 2.x, and is slated to release with 16.04.  This is
>>> representative of all of the OpenStack charms.
>>>
>>> Note:  I've raised a feature request bug against charm tools for
>>> min-juju-version proof recognition.  We'll need to have that in place
>>> before we can add min-juju-version metadata into the OpenStack charms as
>>> our CI gate proofs every charm change request.
>>>
>>> Thanks again!
>>>
>>>
>>>
>>>
>>>>
>>>> On 18 March 2016 at 09:49, Uros Jovanovic <uros.jovano...@canonical.com>
>>>> wrote:
>>>> > We’re looking into how we can identify 1.x Juju client/server in such a
>>>> way
>>>> > that at the same time we don’t block access to charms for other
>>>> clients
>>>> > using our HTTP API.
>>>> >
>>>> >
>>>> > On Fri, Mar 18, 2016 at 9:34 AM, Mark Shuttleworth <m...@ubuntu.com>
>>>> wrote:
>>>> >>
>>>> >> On 17/03/16 22:34, Nate Finch wrote:
>>>> >> > Yes, it'll be ignored, and the charm will be deployed normally.
>>>> >> >
>>>> >> > On Thu, Mar 17, 2016 at 3:29 PM Ryan Beisner
>>>> >> > <ryan.beis...@canonical.com>
>>>> >> > wrote:
>>>> >> >
>>>> >> >> This is awesome.  What will happen if a charm possesses the flag
>>>> in
>>>> >> >> metadata.yaml and is deployed with 1.25.x?  Will it gracefully
>>

Re: Proposal: Charm testing for 2.0

2016-03-19 Thread Ryan Beisner
 the great tooling and thought leadership.  We leverage the
everloving *stuff* out of Amulet.

Charm on!

-- 
Ryan Beisner
QA Engineer, Ubuntu OpenStack Engineering, Canonical, Ltd.
irc:beisner  gh/gerrit:ryan-beisner  lp:~1chb1n



On Wed, Mar 16, 2016 at 7:52 PM, Marco Ceppi <marco.ce...@canonical.com>
wrote:

> Hello everyone!
>
> This is an email I've been meaning to write for a while, and have
> rewritten a few times now. With 2.0 on the horizon and the charm ecosystem
> rapidly growing, I couldn't keep the idea to myself any longer.
>
> # tl;dr:
>
> We should stop writing Amulet tests in charms and instead write them only
> in bundles, force charms to do unit testing (when possible), and promote
> that all charms be included in bundles in the store.
>
> # Problem
>
> Without making this a novel, charm-testing and amulet started before
> bundles were even a construct in Juju with a spec written before Juju 1.0.
> Since then, many newcomers to the ecosystem have remarked how odd it is to
> be writing deployment validations at the charm level. Indeed, as years have
> gone by and new tools have sprung up, it's become clear that having an
> author try to model all the permutations of a charm's deployment and do the
> physical deploys at the charm level is tedious and incomplete at best.
>
> With the explosion of layers and improvements to unit testing in charms at
> that component level, I feel that continuing to create these bespoke
> "bundles" via amulet in a single charm will not be a robust solution going
> forward. As we sprint closer to Juju 2.0 we're seeing a higher demand for
> assurance of working scenarios, and a sharp focus on quality at every
> level. As such I'd like to propose the following policy changes:
>
> - All bundles must have tests before promulgation to the store
> - All charms need to have comprehensive tests (unit or amulet)
> - All charms should be included in a bundle
>
> I'll break down my reasoning and examples in the following sections:
>
> # All bundles must have tests before promulgation to the store
>
> Writing bundle tests with Amulet is actually a more compelling story today
> than writing an Amulet test case for a charm. As an example, there's a new
> ELK stack bundle being produced, here's what the test for that bundle looks
> like:
> https://github.com/juju-solutions/bundle-elk-stack/blob/master/tests/10-test-bundle
>
> This makes a lot of sense because it's asserting that the bundle is
> working as expected by the Author who put the bundle together. It's also
> loading the bundle.yaml as the deployment spec meaning as the bundle
> evolves the tests will make sure they continue to run as expected. Also,
> this could potentially be used in future smoke tests for charms being
> updated if a CI process swaps out, say elasticsearch, for a newer version
> of a charm being reviewed. We can assert that both the unittests in
> elasticsearch work and it operates properly in an existing real world
> solution a la the bundle.
>
> Additional examples:
> -
> https://github.com/juju-solutions/bundle-realtime-syslog-analytics/blob/master/tests/01-bundle.py
> -
> https://github.com/juju-solutions/bundle-apache-core-batch-processing/blob/master/tests/01-bundle.py
>
> # All charms need to have comprehensive tests (unit or amulet)
>
> This is just a clarification and a more strongly worded policy change that
> requires charms to have (preferred) unit tests or, if not applicable, then an
> Amulet test. Bash doesn't really allow for unit testing, so in those
> scenarios, Amulet tests would function as a valid testing case.
>
> There are also some charms which will not make sense as a bundle. One
> example is the recently promulgated Fiche charm:
> http://bazaar.launchpad.net/~charmers/charms/trusty/fiche/trunk/view/head:/tests/10-deploy
>  It's
> a standalone pastebin, but it's an awesome service that provides deployment
> validation with an Amulet test. The test stands up the charm, exercises
> configuration, and validates the service responds in an expected way. For
> scenarios where a charm does not have a bundle an Amulet test would be
> required.
>
> Any charm that currently includes an Amulet test is welcome to continue
> keeping such a test.
>
> # All charms should be included in a bundle
>
> This last one is to underscore that charms need to serve a purpose. This
> policy is written as not an absolute, but instead a strongly worded
> suggestion as there are always charms that are exceptions to the rules. One
> such example is the aforementioned Fiche charm which as a bundle would not
> make as much sense, but is still a purposeful charm.
>
> That being said, most users coming to consume Juju are looking to solve a

Re: Proposal: Charm testing for 2.0

2016-03-19 Thread Ryan Beisner
On Thu, Mar 17, 2016 at 8:38 AM, Marco Ceppi <marco.ce...@canonical.com>
wrote:

> On Thu, Mar 17, 2016 at 12:39 AM Ryan Beisner <ryan.beis...@canonical.com>
> wrote:
>
>> Good evening,
>>
>> I really like the notion of a bundle possessing functional tests as an
>> enhancement to test coverage.  I agree with almost all of those ideas.  :-)
>>   tldr;  I would suggest that we consider bundle tests 'in addition to' and
>> not 'as a replacement of' individual charm tests, because:
>>
>>
>> *# Coverage and relevance*
>> Any given charm may have many different modes of operation -- features
>> which are enabled in some bundles but not in others.  A bundle test will
>> likely only exercise that charm in the context of its configuration as it
>> pertains to that bundle.  However, those who propose changes to the
>> individual charm should know (via tests) if they've functionally broken the
>> wider set of its knobs, bells and levers, which may be irrelevant to, or
>> not testable in the bundle's amulet test due to its differing perspective.
>> This opens potential functional test coverage gaps if we lean solely on the
>> bundle for the test.
>>
>
> In a world with layered charms, do we still need functional tests at the
> charm level?
>

I believe the answer is not only yes, but that it becomes even more
important with layered charms because of the multiple variables involved.
Let's say you've rebuilt an existing layered charm that is composed of one
of your own updated layers and 4 other published layers which may or may
not have changed since the last charm build.  After rebuilding the charm,
you can now re-run the charm Amulet test to verify that the rebuilt charm
hasn't regressed in functionality.

If there's no charm amulet test, you modify the bundle (because the bundle
is not likely pointing to your newly-built charm), deploy the bundle and
execute its tests.  This may only utilize a portion of the charm's
capability, so the remaining capability of the re-built charm is at risk
for breakage if that is the gate.



>
> I'd also like to clarify that this would not deprecate functional tests in
> charms but makes it so the policy reads EITHER unit tests or functional
> tests, with a preference on a layered approach using unit tests. Development
> is quicker, and since every level of the charm is now segmented into
> repositories which (should) have their own testing we don't need to
> validate that X interface works in Y permutation as the interface is a
> library with both halves of the communication channel and with tests.
>
>
>> There are numerous cases where a charm can shift personalities and use
>> cases, but not always on-the-fly in an already-deployed model.  In those
>> cases, it may take a completely different and new deployment topology and
>> configuration (bundle) to be able to exercise the relevant functional
>> tests.  Without integrated amulet tests within the charm, one would have to
>> publish multiple bundles, each containing separate amulet tests.  For
>> low-dev-velocity charms, for simple charms, or for charms that aren't
>> likely to be involved in complex workloads, this may be manageable.  But I
>> don't think we should discourage or stop looking for individual charm
>> amulet tests even there.
>>
>
> We will always support charms with Amulet tests, ALWAYS, I think it'd even
> be the hallmark of an exceptionally well written charm if it had the means
> to do extensive functional testing.
>

*>>> # tl;dr:*

*>>> We should stop writing Amulet tests in charms and instead write them
only in bundles, force charms to do unit testing (when possible), and
promote that all charms be included in bundles in the store.*

My main alarm here was the tl;dr.  I think we should not stop writing
Amulet tests in charms.  Rather, encourage all fronts:  charm level Amulet
tests, bundle Amulet tests, layer unit tests and charm unit tests.



> I also have a separate email I'm authoring where we should be leveraging
> health checks /inside/ a charm rather than externally poking it, so that
> any deployment at anytime, in any mutation can be asked "hey, you ok?" and
> get back a detailed report that assertions could be run against. I have a
> follow up (and one of the reasons this email took me so long)
>

+1 to the "self-aware charm" as discussed (in Malta I think), as an added
enhancement to quality and observability efforts.



>
>
>> A charm's integrated amulet test can be both more focused and more
>> expansive in what it exercises, as it can contain multiple deployment
>> topologies and configurations (equivalent to cycling multiple unique
>> bundles).  For example: 

Re: charmers + openstack-charmers application

2016-02-24 Thread Ryan Beisner
Thank you all.  I appreciate the kind words!

Cheers,

Ryan

On Wed, Feb 24, 2016 at 10:54 AM, Sean Feole <sean.fe...@canonical.com>
wrote:

> I'm not a juju-charmer but Ryans work and contributions to the Openstack
> ecosystem are beyond stellar. +1 from me
>
> On Fri, Feb 19, 2016 at 10:14 AM, Ryan Beisner <ryan.beis...@canonical.com
> > wrote:
>
>> [full application text snipped -- quoted verbatim; see the original
>> "charmers + openstack-charmers application" post below]
>>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


charmers + openstack-charmers application

2016-02-19 Thread Ryan Beisner
Happy Friday, charmers!

Please consider my application for membership to ~charmers and
~openstack-charmers.

Over the past two years, I've contributed to each of the 20+ OpenStack
charms (and jenkins, ubuntu, mysql, mongodb).  While most of my work has
been in the field of charm testing, I've done a load of reviews, bug
triage, bug fixes, charm and charm-helper contributions, partner and
feature integration and validation.

As a ~charm-contributors member, I've watched the broader charm review
queue for the proposals where I have specific domain knowledge, and have
taken some of those reviews.

One of my babies is the Ubuntu OpenStack Charm Integration test automation
system (aka UOSCI).  That system continuously gates our Ubuntu OpenStack
development activity, charm and package SRU and release processes.  It has
deployed and tested ~14,000+ OpenStack clouds in the past ~1yr, plus all of
the accompanying amulet, lint, mojo and unit tests.

As Juju core approaches and reaches "proposed" in each dev cycle, we flip
some bits and hammer on the proposed Juju version in the UOSCI automation
as a pre-release cross-validation effort.  Same for MAAS.

I've delivered and participated in remote and in-person customer demos of
our tool sets and charms, and have given UOS and Charmer Summit demos and
talks.  I've made a point over the past year or so to chip in on AskUbuntu,
generally with OpenStack-specific questions.


I am:
 - https://github.com/ryan-beisner
 - https://launchpad.net/~1chb1n
 - https://launchpad.net/~1chb1n/+karma
 - http://askubuntu.com/users/382225/beisner

Bugs:
 - https://goo.gl/vUsGXN

My alternate bot identities work while I sleep:
 - https://github.com/uoscibot
 - https://launchpad.net/~uosci-testing-bot

Other points of interest:
 - https://code.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk
 -
https://code.launchpad.net/~ost-maintainers/openstack-mojo-specs/mojo-openstack-specs
 - https://github.com/openstack-charmers
 -
http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/files/head:/charmhelpers/contrib/openstack/amulet/
 -
https://code.launchpad.net/~openstack-charmers/charms/trusty/ceilometer/next
 -
https://code.launchpad.net/~openstack-charmers/charms/trusty/ceilometer-agent/next
 - https://code.launchpad.net/~openstack-charmers/charms/trusty/ceph/next
 -
https://code.launchpad.net/~openstack-charmers/charms/trusty/ceph-osd/next
 -
https://code.launchpad.net/~openstack-charmers/charms/trusty/ceph-radosgw/next
 - https://code.launchpad.net/~openstack-charmers/charms/trusty/cinder/next
 -
https://code.launchpad.net/~openstack-charmers/charms/trusty/cinder-ceph/next
 - https://code.launchpad.net/~openstack-charmers/charms/trusty/glance/next
 - https://code.launchpad.net/~openstack-charmers/charms/trusty/heat/next
 -
https://code.launchpad.net/~openstack-charmers/charms/trusty/keystone/next
 - https://code.launchpad.net/~openstack-charmers/charms/trusty/lxd/next
 -
https://code.launchpad.net/~openstack-charmers/charms/trusty/neutron-api/next
 -
https://code.launchpad.net/~openstack-charmers/charms/trusty/neutron-gateway/next
 -
https://code.launchpad.net/~openstack-charmers/charms/trusty/neutron-openvswitch/next
 -
https://code.launchpad.net/~openstack-charmers/charms/trusty/nova-cloud-controller/next
 -
https://code.launchpad.net/~openstack-charmers/charms/trusty/nova-compute/next
 -
https://code.launchpad.net/~openstack-charmers/charms/trusty/openstack-dashboard/next
 -
https://code.launchpad.net/~openstack-charmers/charms/trusty/percona-cluster/next
 -
https://code.launchpad.net/~openstack-charmers/charms/trusty/rabbitmq-server/next
 -
https://code.launchpad.net/~openstack-charmers/charms/trusty/swift-proxy/next
 -
https://code.launchpad.net/~openstack-charmers/charms/trusty/swift-storage/next


Thanks for all the great tools, and thank you for your consideration.

Cheers & happy charming!

Ryan Beisner


"Evolution of a Charm" presentation from the Charmer Summit

2015-09-21 Thread Ryan Beisner
From the Juju Charmer Summit (Charm Testing session), here are the
"Evolution of a Charm" slides.

https://docs.google.com/presentation/d/1SRo8_qVM0RqZcK78yzgNcIIPuiSMa_bhSBi4s8Zx72M/pub?start=true&loop=false&delayms=3300

TL;DR:  We describe how we are evolving each of the Ubuntu OpenStack charms
over time to fit the underlying product support matrix as it also evolves.
Functionally testing that matrix is a core piece of the dev and release
process, and yes I am biased.  ;-)

It was really great to work with so many companies and individuals who are
using Juju, MAAS and/or Ubuntu OpenStack in various environments and use
cases.  Thanks again to those who attended and participated.

Cheers!

-- 
Ryan Beisner
QA Engineer, Ubuntu OpenStack Engineering, Canonical, Ltd.
irc:beisner  lp:~1chb1n


Re: How should we change default config values in charms?

2015-08-13 Thread Ryan Beisner
Actually:  the example bundle I listed is moot.  It uses the
percona-cluster charm.  And it's pinned to a specific charm revision like a
good little bundle.

On Thu, Aug 13, 2015 at 10:57 AM, Ryan Beisner ryan.beis...@canonical.com
wrote:

 [quoted text snipped -- the full messages appear in the thread below]





Re: How should we change default config values in charms?

2015-08-13 Thread Ryan Beisner
Greetings,

I'm all for sensible defaults and eliminating or mitigating paper cut
risk.  Those are all good things.  I do think we need to take a hard look
at the potential impact of a default charm config option value change, any
time that is considered.  My main concern is that, in changing a default
value, we risk breaking existing users.  But as long as we give advance
notice, an ETA, updated docs, release notes, etc., I think it could be
completely reasonable to pursue a default value change.

As I understand the original motivation behind this proposed default value
change, when deploying the mysql charm into lxc, it may fail unless the
dataset-size option value is set to something like 128MB instead of the
default 80%.  So, a user may not be able to just drop the mysql charm into
lxc with the defaults and experience success.  They need to set the config
option value to something workable for their environment.  That impacts
adoption, i.e. it doesn't just work.
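
For concreteness, a deployer hitting this papercut today can work around it
by pinning the value in a bundle.  A hypothetical minimal fragment (only the
`dataset-size` option comes from the bug discussion; the charm URL, unit
count and surrounding structure are illustrative):

```yaml
# Hypothetical bundle fragment: pin dataset-size so the mysql charm can
# start inside a small lxc container instead of claiming 80% of host RAM.
services:
  mysql:
    charm: cs:trusty/mysql
    num_units: 1
    options:
      dataset-size: 128M
```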

If we do make the default value change, we need to take a look at the
bundles in the charm store to identify and adjust any which rely on the
established default value.  For example...  ;-)  and I'm not at all biased:
https://jujucharms.com/openstack-base/36  (
https://api.jujucharms.com/charmstore/v4/bundle/openstack-base-36/archive/bundle.yaml
)

That one is easy enough to identify and fix, but it underscores the need to
take the time to identify, evaluate and discuss the risks, then define the
things-to-do, and proceed if that is the consensus.

Cheers,

Ryan

On Thu, Aug 13, 2015 at 10:15 AM, John Meinel j...@arbash-meinel.com
wrote:

 I believe there is work being done so that you can do juju get and send
 that output directly into juju set or even juju deploy --config. And I
 think that's a much better story around repeatable deployments than trying
 to make sure the defaults never change. If they really care about
 repeatable they're probably going to pin the version of the charm anyway.

 So I'd bias clearly on the fix something that isn't working well rather
 than fear change because someone might depend on the current sketchy
 behaviour. Obviously the call is somewhat situational.

 John
 =:-
 On Aug 13, 2015 10:03 AM, Jorge O. Castro jo...@ubuntu.com wrote:

 Hi everyone,

 See: https://bugs.launchpad.net/bugs/1373862

 This morning Marco proposed that we change the default dataset-size
 from 80% of the memory to 128M.

 Ryan thinks that before we make a default change like this that we
 should discuss the implications, for example, if you have an existing
 Juju MySQL deployment, and say you want to replicate that in another
 environment, the default change is an unexpected surprise to the user.

 I am of the opinion that MySQL is one of the first services people
 play with when trying Juju and that the charm not working ootb with
 the local provider is a big papercut we'd like to fix.



 --
 Jorge Castro
 Canonical Ltd.
 http://juju.ubuntu.com/ - Automate your Cloud Infrastructure





Re: juju devel 1.22-beta4 is available for testing.

2015-02-27 Thread Ryan Beisner
Ditto, also seeing that.

On Fri, Feb 27, 2015 at 11:16 AM, Jason Hobbs jason.ho...@canonical.com
wrote:

 Hi Curtis,

 On 02/26/2015 01:47 PM, Curtis Hovey-Canonical wrote:
  juju-core 1.22-beta4
 
  Development releases use the 'devel' simple-streams. You must configure
  the 'agent-stream' option in your environments.yaml to use the matching
  juju agents.
 
  agent-stream: devel

 There don't seem to be tools for 1.22-beta4 in the devel stream:

 WARNING failed to find 1.22-beta4 tools, will attempt to use 1.22-beta3

 I don't see 1.22-beta4 in either

 https://streams.canonical.com/juju/tools/streams/v1/com.ubuntu.juju:devel:tools.sjson

 or

 https://streams.canonical.com/juju/tools/streams/v1/com.ubuntu.juju-devel-tools.sjson

 (unsure which one gets used).

 Thanks,
 Jason

 --
 Juju-dev mailing list
 juju-...@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/juju-dev



Re: Makefile target names

2015-01-22 Thread Ryan Beisner
Thanks for pointing out the yaml control file, that could be useful.  But
before we make any modifications to the OpenStack charms, I think it would
be helpful to have an agreed-upon convention for the following in terms of
Makefile target names:

   - nose / unit tests
      - make test
      - make unit_test
      - Both are in use.
      - 2 cents:  I would reserve both of these for unit tests, never for
        amulet tests.
   - lint checks
      - make lint
      - Already unified on this afaict.
   - amulet tests
      - make test
      - make functional_test
      - Both are in use.
      - 2 cents:  I think functional_test leaves no question as to usage.
   - charm-helpers sync
      - make sync
      - Already unified on this afaict.

If there is not a documented convention, can we have the necessary
discussions here to create one?
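
To make the proposal concrete, the convention above might look like this in
a charm's Makefile.  This is a sketch only: the target names come from this
thread, but the recipe commands (flake8, nosetests, juju test, the sync
script path) are illustrative assumptions, not any charm's actual Makefile:

```make
# Hypothetical sketch of the target-name convention discussed here.
# Recipe commands are illustrative, not an agreed standard.

lint:
	@flake8 hooks unit_tests tests

unit_test:
	@nosetests -v unit_tests

functional_test:
	@juju test -v

# Reserve `test` for unit tests, never amulet, per the 2 cents above.
test: unit_test

sync:
	@scripts/charm_helpers_sync.py -c charm-helpers.yaml
```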

Thanks again,

Ryan




On Wed, Jan 21, 2015 at 11:40 AM, Benjamin Saller 
benjamin.sal...@canonical.com wrote:

 While convention is great there is an additional path, you can if your
 project differs from the de facto standards, include an explicit list of
 targets in your tests/tests.yaml file

 makefile:
 - lint
 - unit_test
 - something_else

 That file allows customization of much of bundletesters policy.

 -Ben

 On Wed, Jan 21, 2015 at 9:05 AM, Ryan Beisner ryan.beis...@canonical.com
 wrote:

 [original "Makefile target names" post snipped -- quoted in full below]





Re: Makefile target names

2015-01-22 Thread Ryan Beisner
Same here, the OpenStack charms have charm proof in the lint target.  I
expect it would be run twice in that case.

On Thu, Jan 22, 2015 at 10:36 AM, Simon Davy bloodearn...@gmail.com wrote:

 > On 22 January 2015 at 16:29, David Britton david.brit...@canonical.com
 > wrote:
 >> On Thu, Jan 22, 2015 at 04:17:26PM +, Simon Davy wrote:
 >>> On 22 January 2015 at 15:13, David Britton david.brit...@canonical.com
 >>> wrote:
 >>>
 >>>  lint:
 >>>    - make lint
 >>>
 >>
 >> Could we also make[1] the charm linter lint the makefile for the
 >> presence of targets agreed in the outcome of this thread?
 >>
 >> charm proof
 >
 > I like it.  (bundle tester already runs this)

 Which is interesting, as my lint target generally runs charm proof too,
 so it'd be run twice in that case?

 Not a big issue, but if the charm store/review queue is automatically
 charm-proofing too, perhaps the make lint target should not be?

 --
 Simon



Makefile target names

2015-01-21 Thread Ryan Beisner
Greetings,

I'd like to invite discussion on Makefile target names.  I've seen a few
different takes on Makefile target naming conventions across charms.  For
example, in the OpenStack charms, `make test` runs amulet and `make
unit_test` performs nose tests.  In many/most other charms, `make test`
infers unit/nose testing, and amulet target names can vary.

As I understand bundletester:  it expects `make test` to be unit tests.
Amulet targets in the Makefile aren't processed if they exist.  Instead,
the executables in the test dir are fired off.  And, I think that should
all be quite fine as long as the charm's amulet make target isn't doing
anything important.
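
As a simplified illustration of that discovery behaviour (my reading of it,
not bundletester's actual code): the runner ignores amulet Makefile targets
and simply fires off the executable files it finds in the tests directory,
roughly like this:

```python
import os
import stat

def discover_amulet_tests(tests_dir):
    """Sketch of bundletester-style discovery: collect the executable
    files in a charm's tests directory.  Simplified illustration only;
    real bundletester applies further policy (e.g. tests.yaml)."""
    found = []
    for name in sorted(os.listdir(tests_dir)):
        path = os.path.join(tests_dir, name)
        # Only regular files with the owner-execute bit count as tests.
        if os.path.isfile(path) and os.stat(path).st_mode & stat.S_IXUSR:
            found.append(name)
    return found
```

So an amulet script like tests/10-deploy is picked up by virtue of being
executable, regardless of what the Makefile says.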

The net effect for OpenStack charms at the moment is that when they hit
Juju QA, amulet fires off twice, and unit is not run.  I'd like to make
sure the OpenStack charms are in line with any established Makefile
convention.  Is there a reference or doc for such a convention?

I've seen 'unit_test' and 'functional_test' target names in use as well,
and I quite like those, as they leave no question as to purpose.

To work around the variations we've seen across charms, server team's OSCI
(OpenStack CI charm testing) ignores make target names, and instead parses
the Makefile, looking for the right thing-to-do, then execs the target
found.  Bear in mind that OSCI isn't intended to be a replacement for
general charm QA, rather it is an intense safety trigger for the OpenStack
charm developers.  We also want these charms to succeed in Juju QA / CI.

Input and advice are much appreciated!

Many thanks,


Ryan Beisner