Juju 2.4-rc3 has been released

2018-06-25 Thread Ian Booth
A new development release of Juju is here, 2.4-rc3.

This release candidate addresses an issue upgrading from earlier Juju versions
as described below.

## Fixes

An upgrade step has been added to initialise the Raft configuration. This would
normally be done at bootstrap time but needs to be done during upgrade for
controllers that were bootstrapped with an earlier version.

## How can I get it?

The best way to get your hands on this release of Juju is to install it as a
snap package (see https://snapcraft.io/ for more info on snaps).

 sudo snap install juju --classic --candidate

Other packages are available for a variety of platforms. Please see the online
documentation at https://jujucharms.com/docs/stable/reference-install. Those
subscribed to a snap channel should be upgraded automatically. If you're using
the PPA or Homebrew, you should see an upgrade available.
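
For reference, the PPA and Homebrew installs look like this (a sketch assuming
the standard ppa:juju/stable archive and the juju Homebrew formula):

 # Ubuntu, via the PPA
 sudo add-apt-repository ppa:juju/stable
 sudo apt update && sudo apt install juju

 # macOS, via Homebrew
 brew install juju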

## Feedback Appreciated!

We encourage everyone to let us know how you're using Juju. Send us a
message on Twitter using #jujucharms, join us at #juju on freenode, and
subscribe to the mailing list at juju@lists.ubuntu.com.

## More information

To learn more about Juju please visit https://jujucharms.com.

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: redhat, centos, oracle linux images for vmware deployment

2018-02-11 Thread Ian Booth
Hey Dan

The Ubuntu images used to bring up a vSphere VM are downloaded from
cloud-images.ubuntu.com. We use images in the OVA archive format. Here's where
the xenial ones are sourced from, for example:
http://cloud-images.ubuntu.com/xenial/current/

Juju uses simplestreams metadata to select the relevant image to use, based
among other things on the series defined in the charm. For Ubuntu images, the
simplestreams metadata is here:
http://cloud-images.ubuntu.com/releases/streams/v1/

For non-Ubuntu images used with Juju (e.g. CentOS), we publish the metadata
and image files elsewhere, as they are not officially supported on
cloud-images. We only support CentOS 7 images on AWS and Azure, as can be seen
here:
http://streams.canonical.com/juju/images/releases/streams/v1/

It's possible to "roll your own" image metadata and point it at a CentOS OVA
image archive, which should in theory work, but it's not something we have
tested, as to date there's been no call for it to my knowledge. This metadata
is provided to Juju using the --metadata-source argument to bootstrap. There's
tooling to generate the image metadata (juju metadata generate-image), but bear
in mind that to date it's been used more as an advanced tool for internal use
and has some rough edges. There's some documentation here which explains the
basics of setting up a private OpenStack cloud:
https://jujucharms.com/docs/stable/howto-privatecloud

The above would need to be adapted to accommodate vSphere and CentOS images.
Things like where the image binaries would be hosted and made available to
vSphere would need to be sorted out. As I said, this is not something we have
invested time in testing, as so far there's been no call for it. Very
hand-wavy, the steps would be (see the sketch below):
- generate CentOS OVA image archives
- host them somewhere accessible to vSphere and the bootstrap client
- generate image metadata cataloguing the above images
- bootstrap Juju using --metadata-source to point to the image metadata

i.e. it should or could be made to work with Juju as released, but we've not
tested it. We can help you get things set up if that helps, and document the
steps for the next person as we go along.
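
For illustration only, a hedged sketch of those steps (untested; the image URL
is hypothetical, and the exact flags are best confirmed with `juju metadata
generate-image --help`):

 # 1. Host the CentOS OVA where both vSphere and the bootstrap client can
 #    reach it, e.g. http://images.example.com/centos7/centos7.ova
 # 2. Generate simplestreams image metadata describing that image
 juju metadata generate-image -d ~/simplestreams -i centos7.ova \
     -s centos7 -r <region> -u http://images.example.com
 # 3. Bootstrap, pointing Juju at the generated metadata
 juju bootstrap vsphere --metadata-source ~/simplestreams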


On 10/02/18 04:41, Daniel Bidwell wrote:
> Where do I find the images that are used by juju to deploy to a vsphere
> controller?  My ubuntu systems come up great, but unfortunately I need
> to deploy some red hat/centos/oracle linux vms also, but juju deploy
> doesn't seem to be able to find them.
> 
> Is this an area that needs someone to get involved with?
> 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: juju deploys with a vsphere controller using hardware vm version 10

2018-02-06 Thread Ian Booth
Hi Daniel

The Juju vSphere provider currently only supports hardware version 10, but 14 is
now the most recent according to the VMware website. If we were simply to track
and support the most recent hardware version, would that work for you?

On 05/02/18 12:38, Daniel Bidwell wrote:
> Is there anyway to make the vsphere controller to deploy vms with
> hardware vm version 13 instead of version 10?
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju 2.3.0 is here!

2017-12-07 Thread Ian Booth


On 08/12/17 09:39, Micheal B wrote:
> Looks great here other than the LXD on VMware, which is what I need, or at
> least part of it. Wanting to run containerized OpenStack in Kubernetes on
> VMware. Unless someone has a better idea I could try.
>

Sorry about that issue. The LXD-on-VMware issue will be fixed ASAP next week
and we'll be doing a 2.3.1 point release (by the end of the year, all going
well) with this and some other small fixes which missed the cut for 2.3.0.

You can conjure-up (or juju deploy) Kubernetes on other clouds (or even
localhost with LXD if you have a machine with lots of RAM) in the meantime. Or
you could use the openstack-lxd bundle if your main goal is OpenStack.


-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju 2.3 beta2 is here!

2017-11-02 Thread Ian Booth

> * Parallelization of the Machine Provisioner
>>
>> Provisioning of machines is now faster!  Groups of machines will now be
>> provisioned in parallel reducing deployment time, especially on large
>> bundles.  Please give it a try and let us know what you think.
>>
>> Benchmarks for time to deploy 16 machines on different clouds:
>>
>>   AWS:        juju 2.2.5  4m36s    juju 2.3-beta2  3m17s
>>   LXD:        juju 2.2.5  3m57s    juju 2.3-beta2  2m57s
>>   Google:     juju 2.2.5  5m21s    juju 2.3-beta2  2m10s
>>   OpenStack:  juju 2.2.5 12m40s    juju 2.3-beta2  4m52s
>>
> Oh heck yes this is a great improvement! I don't see MAAS numbers here, but
> I imagine parallelization has been implemented there too? Bare metal can be
> so slow to boot sometimes ;)
>

Works for all clouds. The provisioning code is generic and has been extracted
from each provider and moved up a layer. It got complicated because of the need
to still ensure even spread of distribution groups across availability zones in
the parallel case. There just wasn't time to get any MAAS numbers prior to
cutting the beta, but empirically, there's improvement across the board.
Positive deployment stories to share would be welcome :-)




-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju 2.3 beta2 is here!

2017-11-02 Thread Ian Booth


>>
>> * Parallelization of the Machine Provisioner
>>
>>
>> Provisioning of machines is now faster!  Groups of machines will now
>> be provisioned in parallel reducing deployment time, especially on
>> large bundles.  Please give it a try and let us know what you think.
>>
> 
> This is great. Did we also add support for automatic provisioning
> retries to handle sporadic cloud failures?
>

Some providers do have such retries built in, e.g. Azure, OpenStack, and
Rackspace handle rate-limit-exceeded errors and Do The Right Thing. We're still
progressively addressing robustness concerns elsewhere.



-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju Storage/MAAS

2017-11-01 Thread Ian Booth
Thanks James, we'll get to it. We'll work with the MAAS folks, as on the
surface it looks like Juju is passing things through correctly via the MAAS
APIs. The fact that the deployment works minus the storage constraint is
interesting. Initially I theorised it could have been a TB vs TiB mismatch,
but the disk size is large enough to rule that out. We'll update the bug from
here on.


On 01/11/17 13:10, James Beedy wrote:
> I’ve created this bug for further tracking 
> https://bugs.launchpad.net/juju/+bug/1729127
> 
>> On Oct 31, 2017, at 7:59 PM, James Beedy <jamesbe...@gmail.com> wrote:
>>
>> Yes, deploying without —storage results in a successful deploy. 
>>
>>> On Oct 31, 2017, at 7:52 PM, Ian Booth <ian.bo...@canonical.com> wrote:
>>>
>>> And just to ask the obvious: deploying without the --storage constraint 
>>> results
>>> in a successful deploy, albeit to a machine with maybe the wrong disk?
>>>
>>>
>>>> On 01/11/17 10:51, James Beedy wrote:
>>>> Ian,
>>>>
>>>> So, I think I'm close here.
>>>>
>>>> The filesystem/device layout on my node(s): https://imgur.com/a/Nzn2H
>>>>
>>>> I have tagged the md0 device with the tag "raid0", then I have created the
>>>> storage pool as you have specified.
>>>>
>>>> `juju create-storage-pool ssd-disks maas tags=raid0`
>>>>
>>>> Then ran the following command to deploy my charm [0], attaching storage as
>>>> part of the command:
>>>>
>>>> `juju deploy cs:~jamesbeedy/elasticsearch-27 --bind "cluster=vlan20
>>>> public=mgmt-net" --storage data=ssd-disks,3T --constraints "tags=data"`
>>>>
>>>>
>>>> The result is here: http://paste.ubuntu.com/25862190/
>>>>
>>>>
>>>> Here machines 1 and 2 are deployed without the `--constraints`,
>>>> http://paste.ubuntu.com/25862219/
>>>>
>>>>
>>>> Am I missing something? Possibly like one more input to the `--storage` 
>>>> arg?
>>>>
>>>>
>>>> Thanks
>>>>
>>>> [0] https://jujucharms.com/u/jamesbeedy/elasticsearch/27
>>>>
>>>>> On Tue, Oct 31, 2017 at 3:14 PM, Ian Booth <ian.bo...@canonical.com> 
>>>>> wrote:
>>>>>
>>>>> Thanks for raising the issue - we'll get the docs updated!
>>>>>
>>>>>> On 01/11/17 07:44, James Beedy wrote:
>>>>>> I knew it would be something simple and sensible :)
>>>>>>
>>>>>> Thank you!
>>>>>>
>>>>>> On Tue, Oct 31, 2017 at 2:38 PM, Ian Booth <ian.bo...@canonical.com>
>>>>> wrote:
>>>>>>
>>>>>>> Off the top of my head, you want to do something like:
>>>>>>>
>>>>>>> $ juju create-storage-pool ssd-disks maas tags=ssd
>>>>>>> $ juju deploy postgresql --storage pgdata=ssd-disks,32G
>>>>>>>
>>>>>>> The above assumes you have tagged in MAAS any SSD disks with the "ssd"
>>>>>>> tag. You
>>>>>>> can select whatever criteria you want and whatever tags you want to use.
>>>>>>>
>>>>>>> The deploy command above selects a MAAS node with a disk tagged "ssd"
>>>>>>> which is
>>>>>>> at least 32GB in size.
>>>>>>>
>>>>>>>
>>>>>>>> On 01/11/17 07:04, James Beedy wrote:
>>>>>>>> Trying to check out Juju storage capabilities on MAAS I found [0], but
>>>>>>>> can't quite wrap my head around what the syntax might be to make it
>>>>> work,
>>>>>>>> and what the extent of the capability of the Juju storage features are
>>>>>>> when
>>>>>>>> used with MAAS.
>>>>>>>>
>>>>>>>> Re-reading [0], and looking for anything else I can find on Juju
>>>>> storage
>>>>>>>> every day for a week now thinking it may click or I might find the
>>>>> right
>>>>>>>> doc,  but it hasn't, and I haven't.
>>>>>>>>
>>>>>>>> I filed a bug with juju/docs here [1] .
>>>>>>>>
>>>>>>>> Does anyone have an example of how to consume Juju storage using the
>>>>> MAAS
>>>>>>>> provider?
>>>>>>>>
>>>>>>>> Thanks!
>>>>>>>>
>>>>>>>> [0] https://jujucharms.com/docs/devel/charms-storage#maas-(maas)
>>>>>>>> [1] https://github.com/juju/docs/issues/2251
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju Storage/MAAS

2017-10-31 Thread Ian Booth
And just to ask the obvious: deploying without the --storage constraint results
in a successful deploy, albeit to a machine with maybe the wrong disk?


On 01/11/17 10:51, James Beedy wrote:
> Ian,
> 
> So, I think I'm close here.
> 
> The filesystem/device layout on my node(s): https://imgur.com/a/Nzn2H
> 
> I have tagged the md0 device with the tag "raid0", then I have created the
> storage pool as you have specified.
> 
> `juju create-storage-pool ssd-disks maas tags=raid0`
> 
> Then ran the following command to deploy my charm [0], attaching storage as
> part of the command:
> 
> `juju deploy cs:~jamesbeedy/elasticsearch-27 --bind "cluster=vlan20
> public=mgmt-net" --storage data=ssd-disks,3T --constraints "tags=data"`
> 
> 
> The result is here: http://paste.ubuntu.com/25862190/
> 
> 
> Here machines 1 and 2 are deployed without the `--constraints`,
> http://paste.ubuntu.com/25862219/
> 
> 
> Am I missing something? Possibly like one more input to the `--storage` arg?
> 
> 
> Thanks
> 
> [0] https://jujucharms.com/u/jamesbeedy/elasticsearch/27
> 
> On Tue, Oct 31, 2017 at 3:14 PM, Ian Booth <ian.bo...@canonical.com> wrote:
> 
>> Thanks for raising the issue - we'll get the docs updated!
>>
>> On 01/11/17 07:44, James Beedy wrote:
>>> I knew it would be something simple and sensible :)
>>>
>>> Thank you!
>>>
>>> On Tue, Oct 31, 2017 at 2:38 PM, Ian Booth <ian.bo...@canonical.com>
>> wrote:
>>>
>>>> Off the top of my head, you want to do something like:
>>>>
>>>> $ juju create-storage-pool ssd-disks maas tags=ssd
>>>> $ juju deploy postgresql --storage pgdata=ssd-disks,32G
>>>>
>>>> The above assumes you have tagged in MAAS any SSD disks with the "ssd"
>>>> tag. You
>>>> can select whatever criteria you want and whatever tags you want to use.
>>>>
>>>> The deploy command above selects a MAAS node with a disk tagged "ssd"
>>>> which is
>>>> at least 32GB in size.
>>>>
>>>>
>>>> On 01/11/17 07:04, James Beedy wrote:
>>>>> Trying to check out Juju storage capabilities on MAAS I found [0], but
>>>>> can't quite wrap my head around what the syntax might be to make it
>> work,
>>>>> and what the extent of the capability of the Juju storage features are
>>>> when
>>>>> used with MAAS.
>>>>>
>>>>> Re-reading [0], and looking for anything else I can find on Juju
>> storage
>>>>> every day for a week now thinking it may click or I might find the
>> right
>>>>> doc,  but it hasn't, and I haven't.
>>>>>
>>>>> I filed a bug with juju/docs here [1] .
>>>>>
>>>>> Does anyone have an example of how to consume Juju storage using the
>> MAAS
>>>>> provider?
>>>>>
>>>>> Thanks!
>>>>>
>>>>> [0] https://jujucharms.com/docs/devel/charms-storage#maas-(maas)
>>>>> [1] https://github.com/juju/docs/issues/2251
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju Storage/MAAS

2017-10-31 Thread Ian Booth
Thanks for raising the issue - we'll get the docs updated!

On 01/11/17 07:44, James Beedy wrote:
> I knew it would be something simple and sensible :)
> 
> Thank you!
> 
> On Tue, Oct 31, 2017 at 2:38 PM, Ian Booth <ian.bo...@canonical.com> wrote:
> 
>> Off the top of my head, you want to do something like:
>>
>> $ juju create-storage-pool ssd-disks maas tags=ssd
>> $ juju deploy postgresql --storage pgdata=ssd-disks,32G
>>
>> The above assumes you have tagged in MAAS any SSD disks with the "ssd"
>> tag. You
>> can select whatever criteria you want and whatever tags you want to use.
>>
>> The deploy command above selects a MAAS node with a disk tagged "ssd"
>> which is
>> at least 32GB in size.
>>
>>
>> On 01/11/17 07:04, James Beedy wrote:
>>> Trying to check out Juju storage capabilities on MAAS I found [0], but
>>> can't quite wrap my head around what the syntax might be to make it work,
>>> and what the extent of the capability of the Juju storage features are
>> when
>>> used with MAAS.
>>>
>>> Re-reading [0], and looking for anything else I can find on Juju storage
>>> every day for a week now thinking it may click or I might find the right
>>> doc,  but it hasn't, and I haven't.
>>>
>>> I filed a bug with juju/docs here [1] .
>>>
>>> Does anyone have an example of how to consume Juju storage using the MAAS
>>> provider?
>>>
>>> Thanks!
>>>
>>> [0] https://jujucharms.com/docs/devel/charms-storage#maas-(maas)
>>> [1] https://github.com/juju/docs/issues/2251
>>>
>>>
>>>
>>
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju Storage/MAAS

2017-10-31 Thread Ian Booth
Off the top of my head, you want to do something like:

$ juju create-storage-pool ssd-disks maas tags=ssd
$ juju deploy postgresql --storage pgdata=ssd-disks,32G

The above assumes you have tagged in MAAS any SSD disks with the "ssd" tag. You
can select whatever criteria you want and whatever tags you want to use.

The deploy command above selects a MAAS node with a disk tagged "ssd" which is
at least 32GB in size.
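
If it helps, once deployed you can sanity-check the pool and the resulting
storage (2.x command names; see `juju help storage` if these differ):

$ juju storage-pools   # shows defined pools, including ssd-disks
$ juju storage         # lists storage instances and their attachments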


On 01/11/17 07:04, James Beedy wrote:
> Trying to check out Juju storage capabilities on MAAS I found [0], but
> can't quite wrap my head around what the syntax might be to make it work,
> and what the extent of the capability of the Juju storage features are when
> used with MAAS.
> 
> Re-reading [0], and looking for anything else I can find on Juju storage
> every day for a week now thinking it may click or I might find the right
> doc,  but it hasn't, and I haven't.
> 
> I filed a bug with juju/docs here [1] .
> 
> Does anyone have an example of how to consume Juju storage using the MAAS
> provider?
> 
> Thanks!
> 
> [0] https://jujucharms.com/docs/devel/charms-storage#maas-(maas)
> [1] https://github.com/juju/docs/issues/2251
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Juju development summary

2017-10-27 Thread Ian Booth
Hi folks

Here's a quick wrap up of what the Juju team has been doing lately.

A chunk of time has been spent planning what we want to work on next cycle
leading up to the 18.04 LTS. Issues/features required by the field take a high
priority, including (but not limited to):
- audit logging
- enhancements to bundle deployment
- support for OpenStack with Cisco ACI
- containers inheriting properties from hosts
- space selection for controller and agent traffic

There's also other feature work planned, such as providing goal state to charms
and other mechanisms to reduce message chatter and improve scalability;
post-deploy management of spaces and bindings (CRUD for spaces and subnets,
etc); and cloud native functionality exposed to charms.

The main engineering focus has been polishing things for the imminent (late this
week/early next week) 2.3 beta 2 release. There are a number of great
improvements over beta 1 to look forward to, including:
- lease/leadership tracking immune to clock skew and bad NTP
- much better machine provisioning performance across the board (up to 40%
reduction in time when deploying a bundle with 16 machines on OpenStack)
- resolution of annoying issues like "model not found" errors on controller
destruction
- cross model support for Prometheus and Nagios deployments
- lots of polish for various usability paper cuts

For those keen to try beta 2, we guarantee upgradeability from this release to
2.3.0 final. So give it a run and send us feedback so we can make 2.3 as
awesome as possible.

We also pushed out a couple of 2.2.x point releases since 2.2.4 to fix a few
small but significant issues. We encourage everyone to upgrade to 2.2.6.


Quick links:
  Work pending: https://github.com/juju/juju/pulls
  Recent commits: https://github.com/juju/juju/commits/develop
  Recent 2.2 commits: https://github.com/juju/juju/commits/2.2


-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: default network space

2017-10-19 Thread Ian Booth


On 19/10/17 16:33, Ian Booth wrote:
> 
> 
> On 19/10/17 15:22, John Meinel wrote:
>> So at the moment, I don't think Juju supports what you're looking for,
>> which is cross model relations without public addresses. We've certainly
>> discussed supporting all private for cross model. The main issue is that we
>> often drive parts of the firewalls (security groups) but without
>> understanding all the routing, it is hard to be sure whether things will
>> actually work.
>>
> 
> The space to which an endpoint is bound affects the behaviour here. Having 
> said
> that, there may be a bug in Juju's cross model relations code.
> 

Actually, there may be an issue with current behaviour, but not what I first
thought.

In network-get, only if an endpoint is not bound to a space does the resulting
ingress address use the public address (if one exists). If bound to a space, the
ingress addresses are set to the machine local addresses. This is wrong because
there's absolutely no guarantee an arbitrary external workload will be able to
connect to such an address - defaulting to the public address is the best choice
for most deployments.

I think network-get needs to change such that in the absence of information to
the contrary, regardless of whether an endpoint is bound to a space, the public
address should be advertised for ingress in a cross model relation.

The above implies we would need a way for the user to specify at relation time a
different ingress address for the consuming end. But that's not necessarily easy
to determine as it requires knowledge of how both sides (incl offering side)
have been deployed, and may change per relation. We don't intend to provide a
solution for this bit of the problem in Juju 2.3.


> So in the context of this doc
> https://jujucharms.com/docs/master/developer-network-primitives
> 
> For relation data set up by Juju when a unit enters scope of a cross model 
> relation:
> 
> Juju will use the public address for advertising ingress. We have (future) 
> plans
> to support cross model relations where, in the absence of spaces, Juju can
> determine that traffic between endpoints is able to go via cloud local
> addresses, but as stated, with all the potential routing complexity involved, 
> we
> would limit this to quite restricted scenarios where it's guaranteed to work. 
> eg
> on AWS that might be same vpc/tenant/credentials or something. But we're not
> there yet and won't be for the cross model relations release in Juju 2.3.
> 
> The relation data is of course what is available to the remote unit(s) to 
> query.
> The data set up by Juju is the default, and can be overridden by a charm in a
> relation-changed hook for example.
> 
> For network-get output:
> 
> Where there is no space binding...
> 
> ... Juju will use the public address or cloud local address as above.
> 
> Where the endpoint is bound to a space...
> 
> ... Juju will populate the ingress address info in network-get to be the local
> machine addresses in that space.
> 
> So charm could call network-get and do a relation-set to put the correct
> ingress-address value in the relation data bag.
> 
> But I think the bug here is that when a unit enters scope, the default values
> Juju puts in relation data should be calculated the same as for network-get.
> Right now, the ingress address used is not space aware - if it's a cross model
> relation, Juju always uses the public address regardless of whether the 
> endpoint
> is bound to a space. If this behaviour were to be changed to match what
> network-get does, the relation data would be set up correctly(?) and there'd 
> be
> no need for the charm to override anything.
> 
>> I do believe the intended resolution is to use juju relate --via X, and
>> then X can be a space that isn't public. I'm pretty sure we don't have
>> everything wired up for that yet, and we want to make sure we can get the
>> current steps working well.
>>
> 
> juju relate --via X works at the moment by setting the egress-subnets value in
> the relation data bucket. This supports the case where the person deploying
> knows traffic from a model will egress via specific subnets, eg for a NATed
> firewall scenario. Juju itself uses this value to set firewall rules on the
> other model. There's currently no plans to support explicitly specifying what
> ingress addresses to use for either end of a cross model relation.
> 
>> The very first thing I noticed in your first email was that charms should
>> *not* be aware of spaces. The abstractions for charms are around their
>> bindings (explicit or via binding their endpoints). The goal of spaces is
>> to provide human operators a way to tell charms about their environm

Re: default network space

2017-10-19 Thread Ian Booth


On 19/10/17 15:22, John Meinel wrote:
> So at the moment, I don't think Juju supports what you're looking for,
> which is cross model relations without public addresses. We've certainly
> discussed supporting all private for cross model. The main issue is that we
> often drive parts of the firewalls (security groups) but without
> understanding all the routing, it is hard to be sure whether things will
> actually work.
> 

The space to which an endpoint is bound affects the behaviour here. Having said
that, there may be a bug in Juju's cross model relations code.

So in the context of this doc
https://jujucharms.com/docs/master/developer-network-primitives

For relation data set up by Juju when a unit enters scope of a cross model 
relation:

Juju will use the public address for advertising ingress. We have (future) plans
to support cross model relations where, in the absence of spaces, Juju can
determine that traffic between endpoints is able to go via cloud local
addresses, but as stated, with all the potential routing complexity involved, we
would limit this to quite restricted scenarios where it's guaranteed to work. eg
on AWS that might be same vpc/tenant/credentials or something. But we're not
there yet and won't be for the cross model relations release in Juju 2.3.

The relation data is of course what is available to the remote unit(s) to query.
The data set up by Juju is the default, and can be overridden by a charm in a
relation-changed hook for example.

For network-get output:

Where there is no space binding...

... Juju will use the public address or cloud local address as above.

Where the endpoint is bound to a space...

... Juju will populate the ingress address info in network-get to be the local
machine addresses in that space.

So the charm could call network-get and do a relation-set to put the correct
ingress-address value in the relation data bag.

But I think the bug here is that when a unit enters scope, the default values
Juju puts in relation data should be calculated the same as for network-get.
Right now, the ingress address used is not space aware - if it's a cross model
relation, Juju always uses the public address regardless of whether the endpoint
is bound to a space. If this behaviour were to be changed to match what
network-get does, the relation data would be set up correctly(?) and there'd be
no need for the charm to override anything.
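
For concreteness, a minimal hook sketch of that charm-side workaround (the
endpoint name "db" is an assumption, and the --ingress-address flag should be
checked against `network-get --help` for your Juju version):

#!/bin/sh
# relation-changed hook: explicitly advertise an ingress address rather than
# relying on the default relation data Juju populates for the unit
addr=$(network-get db -r "$JUJU_RELATION_ID" --ingress-address)
relation-set -r "$JUJU_RELATION_ID" ingress-address="$addr"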

> I do believe the intended resolution is to use juju relate --via X, and
> then X can be a space that isn't public. I'm pretty sure we don't have
> everything wired up for that yet, and we want to make sure we can get the
> current steps working well.
> 

juju relate --via X works at the moment by setting the egress-subnets value in
the relation data bucket. This supports the case where the person deploying
knows traffic from a model will egress via specific subnets, eg for a NATed
firewall scenario. Juju itself uses this value to set firewall rules on the
other model. There's currently no plans to support explicitly specifying what
ingress addresses to use for either end of a cross model relation.

> The very first thing I noticed in your first email was that charms should
> *not* be aware of spaces. The abstractions for charms are around their
> bindings (explicit or via binding their endpoints). The goal of spaces is
> to provide human operators a way to tell charms about their environment.
> But you shouldn't ever have to change the name of your space to match the
> name a charm expects.
> 
> So if you do 'network-get BINDING -r relation' that should give you the
> context you need to coordinate your network settings with the other
> application. The intent is that we give you the right data so that it works
> whether you are in a cross model relation or just related to a local app.
> 
> John
> =:->
> 
> 
> On Oct 13, 2017 19:59, "James Beedy"  wrote:
> 
> I can give a high level of what I feel is a reasonably common use case.
> 
> I have infrastructure in two primary locations; AWS, and MAAS (at the local
> datacenter). The nodes at the datacenter have a direct fiber route via
> virtual private gateway in us-west-2, and the instances in AWS/us-west-2
> have a direct route  via the VPG to the private MAAS networks at the
> datacenter. There is no charge for data transfer from the datacenter in and
> out of us-west-2 via the fiber VPG hot route, so it behooves me to use this
> and have the AWS instances and MAAS instances talk to each other via
> private address.
> 
> At the application level, the component/config goes something like this:
> 
> The MAAS nodes at the data center have mgmt-net, cluster-net, and
> access-net, interfaces defined, all of which get ips from their respective
> address spaces from the datacenter MAAS.
> 
> I need my elasticsearch charm to configure elasticsearch such that
> elasticsearch <-> elasticsearch talk on cluster-net, web server (AWS
> instance) -> elasticsearch to talk across the 

Re: default network space

2017-10-12 Thread Ian Booth
Copying in the Juju list also

On 12/10/17 22:18, Ian Booth wrote:
> I'd like to understand the use case you have in mind a little better. The
> premise of the network-get output is that charms should not think about public
> vs private addresses in terms of what to put into relation data - the other
> remote unit should not be exposed to things in those terms.
> 
> There's some doc here to explain things in more detail
> 
> https://jujucharms.com/docs/master/developer-network-primitives
> 
> The TL;DR: is that charms need to care about:
> - what address do I bind to (listen on)
> - what address do external actors use to connect to me (ingress)
> 
> Depending on how the charm has been deployed, and more specifically whether it
> is in a cross model relation, the ingress address might be either the public 
> or
> private address. Juju will decide based on a number of factors (whether models
> are deployed to same region, vpc, other provider specific aspects) and 
> populate
> the network-get data accordingly. NOTE: for now Juju will always pick the 
> public
> address (if there is one) for the ingress value for cross model relations - 
> the
> algorithm to short circuit to a cloud local address is not yet finished.
> 
> The content of the bind-addresses block is space aware in that these are
> filtered based on the space with which the specified endpoint is associated. 
> The
> network-get output though should not include any space information explicitly 
> -
> this is a concern which a charm should not care about.
> 
> 
> On 12/10/17 13:35, James Beedy wrote:
>> Hello all,
>>
>> In case you haven't noticed, we now have a network_get() function available
>> in charmhelpers.core.hookenv (in master, not stable).
>>
>> Just wanted to have a little discussion about how we are going to be
>> parsing network_get().
>>
>> I first want to address the output of network_get() for an instance
>> deployed to the default vpc, no spaces constraint, and related to another
>> instance in another model also default vpc, no spaces constraint.
>>
>> {'ingress-addresses': ['107.22.129.65'],
>>  'bind-addresses': [
>>      {'addresses': [{'cidr': '172.31.48.0/20', 'address': '172.31.51.59'}],
>>       'interfacename': 'eth0',
>>       'macaddress': '12:ba:53:58:9c:52'},
>>      {'addresses': [{'cidr': '252.48.0.0/12', 'address': '252.51.59.1'}],
>>       'interfacename': 'fan-252',
>>       'macaddress': '1e:a2:1e:96:ec:a2'}]}
>>
>>
>> The use case I have in mind here is such that I want to provide the private
>> network interface address via relation data in the provides.py of my
>> interface to the relating application.
>>
>> This will be able to happen by calling
>> hookenv.network_get('') in the layer that provides the
>> interface in my charm, and passing the output to get the private interface
>> ip data, to then set that in the provides side of the relation.
>>
>> Tracking?
>>
>> The problem:
>>
>> The problem is such that its not so straight forward to just get the
>> private address from the output of network_get().
>>
>> As you can see above, I could filter for network interface name, but thats
>> about the least best way one could go about this.
>>
>> Initially, I assumed the network_get() output would look different if you
>> had specified a spaces constraint when deploying your application, but the
>> output was similar to no spaces, e.g. spaces aren't listed in the output of
>> network_get().
>>
>>
>> All in all, what I'm after is a consistent way to grep either the space an
>> interface is bound to, or to get the public vs private address from the
>> output of network_get(), I think this is true for every provider just about
>> (ones that use spaces at least).
>>
>> Instead of the dict above, I was thinking we might namespace the interfaces
>> inside of what type of interface they are to make it easier to decipher
>> when parsing the network_get().
>>
>> My idea is a schema like the following:
>>
>> {
>>     'private-networks': {
>>         'my-admin-space': {
>>             'addresses': [
>>                 {
>>                     'cidr': '172.31.48.0/20',
>>                     'address': '172.31.51.59'
>>                 }
>>             ],
>>             'interfacename': 'eth0',
>>             'macaddress': '12:ba:53:58:9c:52'
>>         }
>>     },
>>     'public-networks': {
>>         'default': {
>>             'addresses': [
>>                 {
>>                     'cidr': 'publicipaddress/32',
>>                     'address': 'publicipaddress'
>>                 }
>>             ],
>>         }
>>     },
>>     'fan-networks': {
>>         'fan-252': {
>>             ...
>>         }
>>     }
>> }
>>
>> Where all 

Re: default network space

2017-10-12 Thread Ian Booth
I'd like to understand the use case you have in mind a little better. The
premise of the network-get output is that charms should not think about public
vs private addresses in terms of what to put into relation data - the other
remote unit should not be exposed to things in those terms.

There's some doc here to explain things in more detail

https://jujucharms.com/docs/master/developer-network-primitives

The TL;DR: is that charms need to care about:
- what address do I bind to (listen on)
- what address do external actors use to connect to me (ingress)

Depending on how the charm has been deployed, and more specifically whether it
is in a cross model relation, the ingress address might be either the public or
private address. Juju will decide based on a number of factors (whether models
are deployed to same region, vpc, other provider specific aspects) and populate
the network-get data accordingly. NOTE: for now Juju will always pick the public
address (if there is one) for the ingress value for cross model relations - the
algorithm to short circuit to a cloud local address is not yet finished.

The content of the bind-addresses block is space aware in that these are
filtered based on the space with which the specified endpoint is associated. The
network-get output though should not include any space information explicitly -
this is a concern which a charm should not care about.
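
To make that concrete, here is a minimal sketch of what a charm hook can do with
the current output ("db" is a hypothetical endpoint binding; the individual
value flags are the 2.3-era ones discussed previously on this list):

# "db" is a hypothetical endpoint binding name
LISTEN=$(network-get db --bind-address)      # address to listen on
INGRESS=$(network-get db --ingress-address)  # address remote units connect to
relation-set hostname="$INGRESS"             # hypothetical relation key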


On 12/10/17 13:35, James Beedy wrote:
> Hello all,
> 
> In case you haven't noticed, we now have a network_get() function available
> in charmhelpers.core.hookenv (in master, not stable).
> 
> Just wanted to have a little discussion about how we are going to be
> parsing network_get().
> 
> I first want to address the output of network_get() for an instance
> deployed to the default vpc, no spaces constraint, and related to another
> instance in another model also default vpc, no spaces constraint.
> 
> {'ingress-addresses': ['107.22.129.65'], 'bind-addresses': [{'addresses':
> [{'cidr': '172.31.48.0/20', 'address': '172.31.51.59'}], 'interfacename':
> 'eth0', 'macaddress': '12:ba:53:58:9c:52'}, {'addresses': [{'cidr': '
> 252.48.0.0/12', 'address': '252.51.59.1'}], 'interfacename': 'fan-252',
> 'macaddress': '1e:a2:1e:96:ec:a2'}]}
> 
> 
> The use case I have in mind here is such that I want to provide the private
> network interface address via relation data in the provides.py of my
> interface to the relating application.
> 
> This will be able to happen by calling
> hookenv.network_get('<binding>') in the layer that provides the
> interface in my charm, and passing the output to get the private interface
> ip data, to then set that in the provides side of the relation.
> 
> Tracking?
> 
> The problem:
> 
> The problem is such that it's not so straightforward to just get the
> private address from the output of network_get().
> 
> As you can see above, I could filter for network interface name, but that's
> about the worst way one could go about this.
> 
> Initially, I assumed the network_get() output would look different if you
> had specified a spaces constraint when deploying your application, but the
> output was similar to no spaces, e.g. spaces aren't listed in the output of
> network_get().
> 
> 
> All in all, what I'm after is a consistent way to grep either the space an
> interface is bound to, or to get the public vs private address from the
> output of network_get(), I think this is true for every provider just about
> (ones that use spaces at least).
> 
> Instead of the dict above, I was thinking we might namespace the interfaces
> inside of what type of interface they are to make it easier to decipher
> when parsing the network_get().
> 
> My idea is a schema like the following:
> 
> {
> 'private-networks': {
> 'my-admin-space': {
> 'addresses': [
> {
> 'cidr': '172.31.48.0/20',
> 'address': '172.31.51.59'
> }
> ],
> 'interfacename': 'eth0',
> 'macaddress': '12:ba:53:58:9c:52'
> }
> 'public-networks': {
> 'default': {
> 'addresses': [
> {
> 'cidr': 'publicipaddress/32',
> 'address': 'publicipaddress'
> }
> ],
> }
> 'fan-networks': {
> 'fan-252': {
> 
> 
> }
> }
> 
> Where all interfaces bound to spaces are considered private addresses, and
> with the assumption that if you don't specify a space constraint, your
> private network interface is bound to the "default" space.
> 
> The key thing here is the schema structure grouping the interfaces bound to
> spaces inside a private-networks level in the dict, and the introduction of
> the fact that if you don't specify a space, you get an address bound to an
> artificial "default" space.
> 
> I feel this would make things easier to consume, and interface to from a
> developer standpoint.
> 
> Is this making sense? How do others feel?
> 
> 
> 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Juju 2.3 beta1 is here!

2017-10-05 Thread Ian Booth
After many months of effort, we're pleased to announce the release of the first
beta for the upcoming Juju 2.3 release. This release has many long requested new
features, some of which are highlighted below.

Please note that because this is a beta release (the first one at that), there
may be bugs, and some functionality will be polished over the next betas prior
to release. We encourage everyone to provide feedback so that we may address
any issues.

Also note that some of the documentation for the new features is also in beta
and undergoing revision and completion over the next few weeks. In particular
the cross model relations documentation is still in development.

## New and Improved

### FAN networking in containers (initial support)

A new "container-networking-method" model config attribute is introduced with 3
possible values: "local", "fan", "provider".
* local = use local bridge lxdbr0
* provider = containers get their IP address from the cloud via DHCP
* fan = use FAN

The default is to use "provider" if supported. Otherwise, if FAN is configured
use that, else "local".
On AWS, FAN works out of the box. For other clouds, a new fan-config model
option needs to be used, eg

juju model-config fan-config="<underlay-cidr>=<overlay-cidr> <underlay-cidr>=<overlay-cidr>"
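
For example, with purely illustrative values (an AWS default VPC underlay
mapped onto the fan-252 overlay seen in FAN output):

juju model-config fan-config="172.31.0.0/16=252.0.0.0/8"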

### Update application series

It's now possible to update the underlying OS series associated with an already
deployed application.

juju update-series <application> <series>

will ensure that any new units deployed will now use the requested series.

juju update-series <machine> <series>

will inform the charms already deployed to the machine that the OS series has
been changed and they should re-configure accordingly. This requires charm
support and for the underlying OS to be upgraded manually beforehand.

For more detail, see the documentation
https://jujucharms.com/docs/devel/howto-updateseries

### Cross model relations

This feature allows workloads to be deployed and related across models, and even
across controllers. Note that some charms such as postgresql, prometheus (and
others) need to be updated to be cross model compatible - this work is underway.

For more detail, see the beta documentation
https://jujucharms.com/docs/devel/models-cmr/

*Note: this cross model relations documentation is also still in beta and is
incomplete.*

### LXD storage provider

Juju storage is now supported by the LXD local cloud. The available storage
options include:
- lxd (default, directory based)
- btrfs
- zfs

For more detail, see the documentation
https://jujucharms.com/docs/devel/charms-storage#lxd-(lxd)

### Persistent storage management

Storage can be detached and reattached from/to units without losing the data on
that storage. The supported scenarios include:
- explicit detach / attach while the units are still active
- retain storage when a unit or application is destroyed
- retain storage when a model is destroyed
- deploy a charm using previously detached storage

The default behaviour now is to retain storage, unless destroy has explicitly
been requested when running the command.

Storage which is retained can then be reattached to a different unit. Filesystem
storage can be imported into a different model, from where it can be attached to
units in that model, or used when deploying a new charm.

For more detail, see the documentation
https://jujucharms.com/docs/devel/charms-storage
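
As a quick illustration, the detach/reattach flow looks something like this
(unit and storage ids are illustrative):

juju detach-storage pgdata/0
juju attach-storage postgresql/1 pgdata/0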


## Fixes

For a list of all bugs fixed in this release, see
https://launchpad.net/juju/+milestone/2.3-beta1

Some important fixes include:

* can't bootstrap openstack if nova and neutron AZs differ
https://bugs.launchpad.net/juju/+bug/1689683
* cache vSphere images in datastore to avoid repeated downloads
https://bugs.launchpad.net/juju/+bug/1711019
* juju run-action can be run on multiple units
https://bugs.launchpad.net/juju/+bug/1667213


## How can I get it?

The best way to get your hands on this release of Juju is to install it as a
snap package (see https://snapcraft.io/ for more info on snaps).

 snap install juju --beta --classic

Other packages are available for a variety of platforms. Please see the online
documentation at https://jujucharms.com/docs/stable/reference-install. Those
subscribed to a snap channel should be automatically upgraded. If you’re using
the ppa/homebrew, you should see an upgrade available.


## Feedback Appreciated!

We encourage everyone to let us know how you're using Juju. Send us a
message on Twitter using #jujucharms, join us at #juju on freenode, and
subscribe to the mailing list at juju@lists.ubuntu.com.


## More information

To learn more about Juju please visit https://jujucharms.com.

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: What is the best way to work with multiple models in a controller using the cli?

2017-10-05 Thread Ian Booth
Hey

The -m argument is what you want. It accepts either just a model name or a
controller:model for when you have multiple controllers. eg

$ juju status -m prod
$ juju status -m ctrl:prod

The first command above works on the prod model on the current controller. The
second selects a specific controller regardless of the current one. The second
way is the safest unless you really are only using one controller.

Also, you can set the JUJU_MODEL env var. That's useful for when you open
different terminal windows and want to work on a different model in each window
without using -m each time.
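
For example (a sketch; controller and model names are illustrative):

$ export JUJU_MODEL=ctrl:prod
$ juju machines               # now runs against ctrl:prod
$ juju status -m ctrl:test    # an explicit -m still overrides the variable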

On 05/10/17 17:43, Akshat Jiwan Sharma wrote:
> Hi,
> 
> I have deployed a few models using the local juju controller and I want
> to execute a bunch of commands on a particular model using the juju-cli.
> 
> Lets say I have these three models defined on my controller
> 
> - model1
> - model2
> - model3
> 
> This is the sequence of commands I want to run
> 
> 1. List all the machines in model1
> 2. Add storage unit to model2
> 3. Add a relation between applications in model3
> 
> These operations may be run in any order. That is first I might run op 2
> then op 3 and then op1.
> The only constraint is that an operation must be run on a particular model.
> Right now I go about this task like so:-
> 
> juju switch model1 && juju machines
> 
>  This works fine. I get all my machines listed for model1. The problem with
> this  approach is that I'm not sure if
> another command is executing a juju switch somewhere and suddenly the model
> I'm operating changes from model1 to model2.
> 
> For instance suppose that these two commands are run one after the other
> 
> juju switch model1 && juju list machines
> juju switch model3 && juju add-relation app1 app2
> 
> Now how can I be certain that for the second command I'm operating on model 3?
> As far as I understand juju switches are global.
> Meaning a `switch` makes a change "permanent" to all the other commands
> that follow.
> 
> My question is how do I "lock" the execution of a certain command to a
> particular model?
> 
> Thanks,
> Akshat
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


network-get hook tool - fixing inconsistent output plus new stuff

2017-09-14 Thread Ian Booth
Hi folks

TL;DR; I want to rename a yaml/json attribute in network-get output. I want to
see if any charmers would find this to be an issue. IIANM, we don't (yet) have a
tool to easily scrape the charm store to see what charms use network-get
directly. Charm helpers calls network-get with the --primary-address flag and
this will continue to work as before [1].

[1] --primary-address will be deprecated in Juju 2.3; --bind-address should be
used instead.

* If you see a reason not to rename the network-get yaml/json "info" attribute,
now is the time to speak up *

There's already been internal discussion with some key charmers about the new
content for network-get. But given we're looking to change output, it's time to
circulate more widely.

The network-get hook tool has been evolving over the past few cycles as a
replacement for the unit-get hook tool. The tool is not yet quite finished; as
of now in Juju 2.x releases, the following is supported:

$ network-get "binding" --primary-address
$ network-get "binding"

Without the --primary-address flag, the output is a yaml printout of all the
link layer devices and their addresses on the machine, relevant to the specified
binding according to the space to which it is associated. It's also possible to
ask for json.

Here's an example output:

$ network-get "binding"
info:
- macaddress: "00:11:22:33:44:00"
  interfacename: eth0
  addresses:
  - address: 10.10.0.23
cidr: 10.10.0.0/24
  - address: 192.168.1.111
cidr: 192.168.1.0/24
- macaddress: "00:11:22:33:44:11"
  interfacename: eth1
  addresses:
  - address: 10.10.1.23
cidr: 10.10.1.0/24
  - address: 192.168.2.111
cidr: 192.168.2.0/24

$ network-get "binding" --primary-address
10.10.0.23

Problem 1.

The json output is not consistent with the yaml. json uses "network-info"
instead of "info" as the attribute tag name.

Problem 2.

The attribute tag name itself.

Instead of "info" or "network-info", I want to rename to "bind-addresses". Or
maybe even "local-addresses"?. Here's why.

There's 3 key pieces of address information a charm needs to know, for either
the local unit and/or the remote unit:
1. what address to bind to (to listen on)
2. what address to advertise for incoming connections (ingress)
3. what subnets outbound traffic will originate from (egress)

Note: the following applies to the develop branch only. 2.x is missing all of
this new stuff.

For the remote unit, this information is in relation data as these attributes:
- ingress-address
- egress-subnets

For the local unit, network-get is the tool to use to find out. I want to rename
the "info" attribute to better reflect the semantics of what the data represents
as well as fix the yaml/json mismatch.

Here's an example

bind-addresses:
- macaddress: "00:11:22:33:44:00"
  interfacename: eth0
  addresses:
  - address: 10.10.0.23
cidr: 10.10.0.0/24
  - address: 192.168.1.111
cidr: 192.168.1.0/24
- macaddress: "00:11:22:33:44:11"
  interfacename: eth1
  addresses:
  - address: 10.10.1.23
cidr: 10.10.1.0/24
  - address: 192.168.2.111
cidr: 192.168.2.0/24
egress-subnets:
- 192.168.1.0/8
- 10.0.0.0/8
ingress-addresses:
- 100.1.2.3

You can also ask for individual values

$ network-get "binding" --bind-address
10.10.0.23

$ network-get "binding" --ingress-address
100.1.2.3

Cross Model Relations

A key driver for this work is cross model relations. When called in a relation
context, or with the -r arg to specify a relation id, the ingress and egress
information provided by network-get is adjusted so that it is correct for the
relation. The charm itself remains agnostic to whether it is a cross model
relation or not; Juju does all the work. But suffice to say, charms should
evolve to use the new semantics of network-get so that they are cross model
relations compatible. As is the case now with charm helpers and how it falls
back to unit-get for older versions of Juju, this new work will only be
available in 2.3 onwards, so charm helpers will need to deal with that.
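
As a sketch, that fallback can be as simple as this in a hook (the "db"
binding name is illustrative):

# prefer the 2.3+ flag; fall back to unit-get on older agents
ADDR=$(network-get db --bind-address 2>/dev/null) ||
ADDR=$(unit-get private-address)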
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Weekly Development Summary

2017-08-18 Thread Ian Booth
Hi folks

A summary of what we've been doing this week in Juju.

Two new Azure regions have been added - koreasouth and koreacentral.
To use these on an existing Juju 2.x install, simply run
$ juju update-clouds

We continue to prepare for the 2.2.3 release. We were hoping to pull the trigger
this week but a few new issues from stakeholders were added to the milestone.
https://launchpad.net/juju/+milestone/2.2.3

Some issues fixed or being finalised:
- a particularly nasty Mongo replica set issue affecting some HA deployments
- some model destruction issues
- pending resources when an application is deployed again after failing once
- cloud names with underscores
- better able to handle duplicate instance ids in MAAS when a node fails to
deploy and is reused later

A new command "update-series" has been added which allows the series for an
application to be updated. Any new units deployed for that application
will use the specified series. We're working on a variation of the command to
allow the series for existing units/machines to be updated also.

On the cross model relations front, juju status and juju list-offers commands
have been tweaked to improve their output. The "list-offers" (offers) command by
default shows connection details to each offer, including user and relation id.
The "remove-relation" command now accepts a relation id and so it's possible to,
on the offering side, remove a cross model relation. Work is still being done to
support temporarily revoking a relation rather than removing it outright.

We continue to expand CI test coverage of Juju features. This week the
persistent storage feature gained test coverage.

Quick links:
  Work pending: https://github.com/juju/juju/pulls
  Recent commits: https://github.com/juju/juju/commits/develop
  Recent 2.2 commits: https://github.com/juju/juju/commits/2.2

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Weekly Development Summary

2017-08-11 Thread Ian Booth
Hi folks

Here's a quick wrap up of what the Juju team has been doing this week.

We're almost ready for a new 2.2.3 release. Issues addressed are found on the
milestone:
https://launchpad.net/juju/+milestone/2.2.3

Some highlights include bundles supporting local resources, migration and
upgrade fixes, and machine placement directives ignoring constraints.

The work to allow upgrades from 1.25 continues and we're close to a working
proof of concept. Challenges have included lxc to lxd upgrades and dealing with
the significant difference between the 1.25 and 2.x data models.

On the cross model relations front, support for multi-controller relations now
includes a complete macaroon based authentication mechanism.

More usability improvements have landed, including clean up of the juju
resources commands and other papercuts.

The relations section of Juju status in tabular format has been cleaned up based
on feedback from the field. Display of bogus subordinate relations is fixed, and
the content has been enhanced to display both endpoints, ordered by the provider
application. Check it out; any additional feedback is welcome.

The Jenkins infrastructure used for landing and CI continues to improve at a
rapid pace. There's been awesome work done to make everything robust and
maintainable and remove all the special case scripts and slave machines. This
has all been behind the scenes. But over the past couple of weeks the work to
integrate the Open Blue Ocean plugin means that developers gain a fantastic view
into the progress of their landing job and can easily drill down to see the
cause of any test failures.

Quick links:
  Work pending: https://github.com/juju/juju/pulls
  Recent commits: https://github.com/juju/juju/commits/develop
  Recent 2.2 commits: https://github.com/juju/juju/commits/2.2

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: call for testing: relations across Juju models

2017-07-26 Thread Ian Booth


On 25/07/17 00:54, Dmitrii Shcherbakov wrote:
> Hi Patrizio,
> 
> As far as I understand it now, if you configure it right in terms of
> networking, it will be possible for both single and multi-cloud cases.
>

Correct. You can have one application deployed to a model in a Google cloud, and
another deployed to a model in AWS for example. Juju correctly determines that
workload traffic needs to flow between the workloads' respective public
addresses, and also takes care of opening the required firewall ports to allow
workload traffic to flow from the requires side of the relation to the provides
side.

Future work will see Juju optimise the network aspects so that the relation will
be set up to use the cloud local addresses if the models and relation endpoints
are deployed in a way that this is supported (eg for AWS, same region, tenant,
vpc).

I also plan to add cross model support to bundles, to make the k8s federation
story described below easier. This is not started yet, just an idea on the
fairly large cross model relations todo list.

> Having only workers on the second cloud is fairly straightforward.
> 
> However, I think the real use-case is to implement k8s federation without
> having to replicate etcd across multiple data centers and using
> latency-based load-balancing:
> 
> https://kubernetes.io/docs/concepts/cluster-administration/federation/
> https://kubernetes.io/docs/tasks/federation/set-up-cluster-federation-kubefed/
> 
> This will require charming of the federation controller manager to
> have federation control plane for multiple clouds.
> 
> This is similar to an orchestrator use-case in the ETSI NFV architecture.
> 
> Quite an interesting problem to solve with cross-controller relations.
> 
> 
> 
> Best Regards,
> Dmitrii Shcherbakov
> 
> Field Software Engineer
> IRC (freenode): Dmitrii-Sh
> 
> On Mon, Jul 24, 2017 at 4:48 PM, Patrizio Bassi <patrizio.ba...@gmail.com>
> wrote:
> 
>> Hi All
>>
>> this is very very interesting.
>>
>> Is it possible to scale out some units using cross models?
>>
>> For instance: in an openstack tenant I deploy a kubernetes cluster. Then in
>> another tenant I add k8-workers; the add-unit command will refer to the
>> parent deployment to get needed params (e.g. master IP address, juju
>> config)
>>
>> This will be even better in a hybrid cloud environment
>> Regards
>>
>> Patrizio
>>
>>
>>
>> 2017-07-24 15:26 GMT+02:00 Ian Booth <ian.bo...@canonical.com>:
>>
>>>
>>>
>>> On 24/07/17 23:12, Ian Booth wrote:
>>>>
>>>>
>>>> On 24/07/17 20:02, Paul Gear wrote:
>>>>> On 08/07/17 03:36, Rick Harding wrote:
>>>>>> As I noted in The Juju Show [1] this week I've put together a blog
>>>>>> post around the cross model relations feature that folks can test out
>>>>>> in Juju 2.2. Please test it out and provide your feedback.
>>>>>>
>>>>>> http://mitechie.com/blog/2017/7/7/call-for-testing-shared-se
>>> rvices-with-juju
>>>>>>
>>>>>> Current known limitations:
>>>>>> Only works in the same model
>>>>>> You need to bootstrap with the feature flag to test it out
>>>>>> Does not currently work with relations to subordinates. Work is in
>>>>>> progress
>>>>>
>>>>> Hi Rick,
>>>>>
>>>>> I gave this a run this afternoon.  In my case, I just set up an haproxy
>>>>> unit in one model and a Nagios server in another, and connected the
>>>>> haproxy:reverseproxy to the nagios:website.  Everything worked exactly
>>>>> as expected.
>>>>>
>>>>> One comment about the user interface: the "juju relate" for the client
>>>>> side seems a bit redundant, since "juju add-relation" could easily work
>>>>> out which type of relation it was by looking at the form of the
>>> provided
>>>>> identifier.  If we pass a URI to an offered relation in another model,
>>>>> it could use a cross-model relation, and if we just use normal
>>>>> service:relation-id format, it could use a normal relation.
>>>>>
>>>>> Anyway, just wanted to say it's great to see some progress on this,
>>>>> because it solves some real operational problems for us.  I can't wait
>>>>> for the cross-controller, reverse-direction, highly-scalable version
>>>>> which will allow us to obsolete the glue scripts needed to

Re: Coming in 2.3: storage improvements

2017-07-13 Thread Ian Booth
Indeed. And just landing today is support for btrfs as well. So there'll be a
choice of:
lxd (the default, directory based)
lxd-zfs
lxd-btrfs
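
For example (a sketch; the pgdata storage label comes from the postgresql
charm's metadata):

$ juju deploy postgresql --storage pgdata=1G,lxd-btrfs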

On 14/07/17 13:46, Menno Smits wrote:
> Nice work Andrew! These changes make Juju's storage support much more
> powerful.
> 
> 
> 
> On 13 July 2017 at 20:56, Andrew Wilkins 
> wrote:
> 
>> Hi folks,
>>
>> I've just published https://awilkins.id.au/post/juju-2.3-storage/, which
>> highlights some of the new bits added around storage that's coming to Juju
>> 2.3. I particularly wanted to highlight that a new LXD storage provider has
>> just landed on develop today. It should be available in the edge snap soon.
>>
>> The LXD storage provider will enable you to attach LXD storage volumes to
>> your containers, and use that for a charm's storage requirements. e.g.
>>
>> $ juju deploy postgresql --storage pgdata=1G,lxd-zfs
>>
>> This will create a LXD storage pool backed by a ZFS pool, create a 1GiB
>> ZFS volume and attach that to the container.
>>
>> I'd appreciate feedback on the new provider, and the attach/detach changes
>> described in the blog post, preferably before 2.3 comes around. In
>> particular, UX warts or functionality that you're missing or anything you
>> find broken-by-design -- stuff that can't easily be fixed after we release.
>>
>> Thanks!
>>
>> Cheers,
>> Andrew
>>
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com
>> Modify settings or unsubscribe at: https://lists.ubuntu.com/
>> mailman/listinfo/juju
>>
>>
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: JUJU_UNIT_NAME no longer set in env

2017-05-22 Thread Ian Booth
FWIW, Juju itself still sets JUJU_UNIT_NAME

https://github.com/juju/juju/blob/develop/worker/uniter/runner/context/context.go#L582

On 23/05/17 05:59, James Beedy wrote:
> Juju 2.1.2
> 
> I'm getting this "JUJU_UNIT_NAME not in env" error on legacy-non-reactive
> xenial charm using service_name() from hookenv.
> 
> http://paste.ubuntu.com/24626263/
> 
> Did we remove this?
> 
> ~James
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: PROPOSAL: stop recording 'executing update-status hook'

2017-05-22 Thread Ian Booth


On 23/05/17 06:39, Stuart Bishop wrote:
> On 22 May 2017 at 20:02, roger peppe  wrote:
> 
>> not to show in the status history.  Given that the motivation behind
>> the proposal is to reduce load on the database and on controllers, I
> 
> One of the motivations was to reduce load. Another motivation, that
> I'm more interested in, was to make the status log history readable.
> Currently it is page after page of noise about update-status running
> with occasional bits of information.
> 
> (I've leave it to others to argue if it is better to fix this when
> generating the report or by not logging the noise in the first place)
> 

Since Juju 2.1.1, the juju show-status-log command no longer shows
status-history entries by default. There's a new --include-status-updates flag
which can be used if those entries are required in the output.

There's also squashing of repeated log entries. These enhancements were meant to
address the "I don't want to see it problem".

The idea to not record it was meant to address the load issue (both retrieval
and recording). As part of the ongoing performance tuning and scaling efforts,
some hard numbers are being gathered to measure the impact of keeping
update-status in the database.

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: PROPOSAL: stop recording 'executing update-status hook'

2017-05-22 Thread Ian Booth


On 22/05/17 18:23, roger peppe wrote:
> I think it's slightly unfortunate that update-status exists at all -
> it doesn't really need to,
> AFAICS, as a charm can always do the polling itself if needed; for example:
> 
> while :; do
>  sleep 30
>  juju-run $UNIT 'status-set current-status "this is what is 
> happening"'
> done &
> 
> Or (better) use juju-run to set the status when the workload
> executable starts and exits, avoiding the need for polling at all.
>

It's not sufficient to just set the status when the workload starts and exits.
One example is a database which periodically goes offline for a short time for
maintenance. The workload executable itself should not have to know how to
communicate this to Juju. By the agent running update-status hook periodically,
it allows the charm itself to establish whether the database status should be
marked as "maintetance" for example. Using a hook provides a standard way all
charms can rely on to communicate workload status in a consistent way.
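
For example, an update-status hook can be as small as this sketch
(db_in_maintenance is a hypothetical check the charm would provide):

#!/bin/sh
# hooks/update-status: run periodically by the unit agent.
# db_in_maintenance is a hypothetical charm-provided check.
if db_in_maintenance; then
    status-set maintenance "scheduled maintenance in progress"
else
    status-set active "ready"
fi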


-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Weekly Development Summary

2017-05-19 Thread Ian Booth
Hi folks

That time of the week again - almost beer o'clock for those of us in Aus/NZ
timezones - and also time to recap on the happenings in the land of Juju
development over the past 7 days.

We're working hard to get a Juju 2.2 out the door. The week saw a release of 2.2
beta4 which included usability improvements to actions, openstack and oracle
providers. Focus this week has been on squashing a number of stakeholder bugs
and CI test failures. We aim to release an RC in a week or so all going well.

A couple of key development highlights apart from the usual fare of bug fixes
include:

- close to finishing improvements to how Juju storage operates - expect a snap
early next week to try out the feature which is targeted for Juju 2.3. You will
gain the ability to destroy a unit but leave its storage behind; this storage
can then be attached to a different unit or re-used when deploying a new
application instance.

- all of the CI and QA tools and scripts and test frameworks have been moved
across from Launchpad to live under the Juju repo on github. This is part of the
ongoing process to revamp and improve our test infrastructure to make everything
more robust and maintainable, and make writing CI tests as easy as possible.

- model migration improvements so that things play nicely together in a JAAS
world in addition to individual controllers

Quick links:
  Work Pending: https://github.com/juju/juju/pulls
  Recent commits: https://github.com/juju/juju/commits/develop

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: How to add openstack cloud to juju 2.1.2-xenial

2017-04-10 Thread Ian Booth

On 11/04/17 07:11, Daniel Bidwell wrote:
> I need to add openstack as a cloud to juju 2.1.2-xenial.  I don't seem
> to find the right howto.  What authentication method do I use?  And
> where do I get the authentication string?  User name and password for
> dashboard user?
> 

The authentication method to use is typically userpass. This will be one of the
choices if running juju add-cloud interactively. The authentication string can
typically be found by looking at your novarc file - it is the AUTH_URL value,
usually something like "https://keystone.mydomain.com:443/v2.0/"

Once the cloud itself has been added, you then need to add credential
information which can be done using juju add-credential. It will pick up that
userpass authentication has been previously specified and will prompt for things
like tenant name, domain name etc - these values depend on how the Openstack
instance itself has been set up, and whether keystone v3 authentication is being
used etc. Juju can attempt to guess the right credential values by running juju
autoload-credentials, assuming you have a ~/.novarc file or have the Openstack
client env vars set up. The novarc file usually contains the required values for
the various credential attributes.
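
For reference, a non-interactive version of the same flow might look like this
sketch (cloud name and endpoint are illustrative):

cat > mystack.yaml <<EOF
clouds:
  mystack:
    type: openstack
    auth-types: [userpass]
    regions:
      RegionOne:
        endpoint: https://keystone.mydomain.com:443/v2.0/
EOF
juju add-cloud mystack mystack.yaml
juju add-credential mystack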




-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: juju/retry take 2 - looping

2016-10-20 Thread Ian Booth
I really like where the enhancements are headed. I feel they offer the syntax
that some folks wanted, with the safety and validation of the initial
implementation. Best of both worlds.

On 20/10/16 13:09, Tim Penhey wrote:
> Hi folks,
> 
> https://github.com/juju/retry/pull/5/files
> 
> As often is the case, the pure solution is not always the best. What seemed
> initially like the best approach didn't end up that way.
> 
> Both Katherine and Roger had other retry proposals that got me thinking about
> changes to the juju/retry package. The stale-mate from the tech board made me
> want to try another approach that I thought about while walking the dog today.
> 
> I wanted the security and fall-back of validation of the various looping
> attributes, while making the call site much more obvious.
> The pull request has the result of this attempt.
> 
> It is by no means perfect, but an improvement I think. I was able to trivially
> reimplement retry.Call with the retry.Loop concept with no test changes.
> 
> The tests are probably the best way to look at the usage.
> 
> Tim
> 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Issue deploying a juju controller on openstack private cloud

2016-10-14 Thread Ian Booth
Unfortunately at the moment there's no good documentation on all of the
possible configuration options. It's something on the radar to improve.

If you did have a running 2.0 system (2.0 was released today), you could type
$ juju model-defaults

to see the available options.

On 14/10/16 00:26, sergio gonzalez wrote:
> Hello Ian
> 
> Thanks for your support. Where can I find all the configuration
> options to be passed during bootstrap?
> 
> Regards
> 
> Sergio
> 
> 2016-10-13 15:58 GMT+02:00 Ian Booth <ian.bo...@canonical.com>:
>> Hi Sergio
>>
>> Courtesy of Rick Harding, here's the information you need.
>>
>> The openstack provider has a network configuration attribute which needs to 
>> be
>> set to specify the network label or UUID to bring machines up on when 
>> multiple
>> networks exist.
>>
>> You pass it as an argument to bootstrap. eg
>>
>> $ juju bootstrap openstack-controller openstack-mitaka
>> --config image-metadata-url=http://10.2.1.109/simplestream/images/
>> --config network=<network-label-or-uuid>
>>
>> On 13/10/16 06:29, sergio gonzalez wrote:
>>> Hello
>>>
>>> I am trying to deploy a juju controller, but I get the following error:
>>>
>>> juju --debug bootstrap openstack-controller openstack-mitaka --config
>>> image-metadata-url=http://10.2.1.109/simplestream/images/
>>>
>>> 2016-10-12 20:19:00 INFO juju.cmd supercommand.go:63 running juju
>>> [2.0-beta15 gc go1.6.2]
>>>
>>> 2016-10-12 20:19:00 INFO cmd cmd.go:141 Adding contents of
>>> "/home/ubuntu/.local/share/juju/ssh/juju_id_rsa.pub" to
>>> authorized-keys
>>>
>>> 2016-10-12 20:19:00 DEBUG juju.cmd.juju.commands bootstrap.go:499
>>> preparing controller with config: map[name:controller
>>> uuid:71e55928-2c38-407b-897f-94e83c60890b
>>> image-metadata-url:http://10.2.1.109/simplestream/images/
>>> authorized-keys:ssh-rsa
>>> B3NzaC1yc2EDAQABAAABAQDBoDbcBms7z/ChSG5hQyqZQYhkH6V5uA7HcINuFJH2AC9ygej6TdJ6eCdsPU77x+CgdRVLINE1PhtWsXdYFEZ11e7OV2Y4Jlt/SkMqGJK4enHNXcofIBUntbuVh90hww/yiApLxxi4/cMgHTigu4YZbkZz+QVBqCn5zZMgqPbSR55uHGsT9cbF1S+XRj/OqMpuwOkbgZ/vR880wz6lq1rUwdBOIAIblhuwXHLTT7A5y6Vck69xuqkeyjI67SUdHhxXeCDbjUkOkCqKHY9dU3LNHIH0xYsWGTB7z+FpCn8f7URfMviLQ2QX30Uda/h0KQ91/raGjYE5SHU3E/P/VWtj
>>> juju-client-key
>>>
>>>  type:openstack]
>>>
>>> 2016-10-12 20:19:00 INFO juju.provider.openstack provider.go:75
>>> opening model "controller"
>>>
>>> 2016-10-12 20:19:00 INFO cmd cmd.go:129 Creating Juju controller
>>> "openstack-controller" on openstack-mitaka/RegionOne
>>>
>>> 2016-10-12 20:19:00 DEBUG juju.environs imagemetadata.go:112 obtained
>>> image datasource "image-metadata-url"
>>>
>>> 2016-10-12 20:19:00 DEBUG juju.environs imagemetadata.go:112 obtained
>>> image datasource "default cloud images"
>>>
>>> 2016-10-12 20:19:00 DEBUG juju.environs imagemetadata.go:112 obtained
>>> image datasource "default ubuntu cloud images"
>>>
>>> 2016-10-12 20:19:01 INFO juju.cmd.juju.commands bootstrap.go:641
>>> combined bootstrap constraints:
>>>
>>> 2016-10-12 20:19:01 INFO cmd cmd.go:129 Bootstrapping model "controller"
>>>
>>> 2016-10-12 20:19:01 DEBUG juju.environs.bootstrap bootstrap.go:214
>>> model "controller" supports service/machine networks: false
>>>
>>> 2016-10-12 20:19:01 DEBUG juju.environs.bootstrap bootstrap.go:216
>>> network management by juju enabled: true
>>>
>>> 2016-10-12 20:19:01 INFO juju.environs.bootstrap tools.go:95 looking
>>> for bootstrap tools: version=2.0-beta15
>>>
>>> 2016-10-12 20:19:01 INFO juju.environs.tools tools.go:106 finding
>>> tools in stream "devel"
>>>
>>> 2016-10-12 20:19:01 INFO juju.environs.tools tools.go:108 reading
>>> tools with major.minor version 2.0
>>>
>>> 2016-10-12 20:19:01 INFO juju.environs.tools tools.go:116 filtering
>>> tools by version: 2.0-beta15
>>>
>>> 2016-10-12 20:19:01 DEBUG juju.environs.tools urls.go:109 trying
>>> datasource "keystone catalog"
>>>
>>> 2016-10-12 20:19:02 DEBUG juju.environs.simplestreams
>>> simplestreams.go:680 using default candidate for content id
>>> "com.ubuntu.juju:devel:tools" are {20161007 mirrors:1.0
>>> content-download streams/v1/cpc-mirrors.sjson []}
>>>
>>> 2016-10-12 20:19:0

Re: Github Reviews vs Reviewboard

2016-10-13 Thread Ian Booth
-1000 :-)

On 14/10/16 08:44, Menno Smits wrote:
> We've been trialling Github Reviews for some time now and it's time to
> decide whether we stick with it or go back to Reviewboard.
> 
> We're going to have a vote. If you have an opinion on the issue please
> reply to this email with a +1, 0 or -1, optionally followed by any further
> thoughts.
> 
>- +1 means you prefer Github Reviews
>- -1 means you prefer Reviewboard
>- 0 means you don't mind.
> 
> If you don't mind which review system we use there's no need to reply
> unless you want to voice some opinions.
> 
The voting period starts *now* and ends my *EOD next Friday (October 21)*.
> 
> As a refresher, here are the concerns raised for each option.
> 
> *Github Reviews*
> 
>- Comments disrupt the flow of the code and can't be minimised,
>hindering readability.
>- Comments can't be marked as done making it hard to see what's still to
>be taken care of.
>- There's no way to distinguish between a problem and a comment.
>- There's no summary of issues raised. You need to scroll through the
>often busy discussion page.
>- There's no indication of which PRs have been reviewed from the pull
>request index page nor is it possible to see which PRs have been approved
>or otherwise.
>- It's hard to see when a review has been updated.
> 
> *Reviewboard*
> 
>- Another piece of infrastructure for us to maintain
>- Higher barrier to entry for newcomers and outside contributors
>- Occasionally misses Github pull requests (likely a problem with our
>integration so is fixable)
>- Poor handling of deleted and renamed files
>- Falls over with very large diffs
>- 1990's looks :)
>- May make future integration of tools which work with Github into our
>process more difficult (e.g. static analysis or automated review tools)
> 
> There has been talk of evaluating other review tools such as Gerrit and
> that may still happen. For now, let's decide between the two options we
> have recent experience with.
> 
> - Menno
> 
> 
> 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Issue deploying a juju controller on openstack private cloud

2016-10-13 Thread Ian Booth
Hi Sergio

Courtesy of Rick Harding, here's the information you need.

The openstack provider has a network configuration attribute which needs to be
set to specify the network label or UUID to bring machines up on when multiple
networks exist.

You pass it as an argument to bootstrap. eg

$ juju bootstrap openstack-controller openstack-mitaka
--config image-metadata-url=http://10.2.1.109/simplestream/images/
--config network=<network-label-or-uuid>

On 13/10/16 06:29, sergio gonzalez wrote:
> Hello
> 
> I am trying to deploy a juju controller, but I get the following error:
> 
> juju --debug bootstrap openstack-controller openstack-mitaka --config
> image-metadata-url=http://10.2.1.109/simplestream/images/
> 
> 2016-10-12 20:19:00 INFO juju.cmd supercommand.go:63 running juju
> [2.0-beta15 gc go1.6.2]
> 
> 2016-10-12 20:19:00 INFO cmd cmd.go:141 Adding contents of
> "/home/ubuntu/.local/share/juju/ssh/juju_id_rsa.pub" to
> authorized-keys
> 
> 2016-10-12 20:19:00 DEBUG juju.cmd.juju.commands bootstrap.go:499
> preparing controller with config: map[name:controller
> uuid:71e55928-2c38-407b-897f-94e83c60890b
> image-metadata-url:http://10.2.1.109/simplestream/images/
> authorized-keys:ssh-rsa
> B3NzaC1yc2EDAQABAAABAQDBoDbcBms7z/ChSG5hQyqZQYhkH6V5uA7HcINuFJH2AC9ygej6TdJ6eCdsPU77x+CgdRVLINE1PhtWsXdYFEZ11e7OV2Y4Jlt/SkMqGJK4enHNXcofIBUntbuVh90hww/yiApLxxi4/cMgHTigu4YZbkZz+QVBqCn5zZMgqPbSR55uHGsT9cbF1S+XRj/OqMpuwOkbgZ/vR880wz6lq1rUwdBOIAIblhuwXHLTT7A5y6Vck69xuqkeyjI67SUdHhxXeCDbjUkOkCqKHY9dU3LNHIH0xYsWGTB7z+FpCn8f7URfMviLQ2QX30Uda/h0KQ91/raGjYE5SHU3E/P/VWtj
> juju-client-key
> 
>  type:openstack]
> 
> 2016-10-12 20:19:00 INFO juju.provider.openstack provider.go:75
> opening model "controller"
> 
> 2016-10-12 20:19:00 INFO cmd cmd.go:129 Creating Juju controller
> "openstack-controller" on openstack-mitaka/RegionOne
> 
> 2016-10-12 20:19:00 DEBUG juju.environs imagemetadata.go:112 obtained
> image datasource "image-metadata-url"
> 
> 2016-10-12 20:19:00 DEBUG juju.environs imagemetadata.go:112 obtained
> image datasource "default cloud images"
> 
> 2016-10-12 20:19:00 DEBUG juju.environs imagemetadata.go:112 obtained
> image datasource "default ubuntu cloud images"
> 
> 2016-10-12 20:19:01 INFO juju.cmd.juju.commands bootstrap.go:641
> combined bootstrap constraints:
> 
> 2016-10-12 20:19:01 INFO cmd cmd.go:129 Bootstrapping model "controller"
> 
> 2016-10-12 20:19:01 DEBUG juju.environs.bootstrap bootstrap.go:214
> model "controller" supports service/machine networks: false
> 
> 2016-10-12 20:19:01 DEBUG juju.environs.bootstrap bootstrap.go:216
> network management by juju enabled: true
> 
> 2016-10-12 20:19:01 INFO juju.environs.bootstrap tools.go:95 looking
> for bootstrap tools: version=2.0-beta15
> 
> 2016-10-12 20:19:01 INFO juju.environs.tools tools.go:106 finding
> tools in stream "devel"
> 
> 2016-10-12 20:19:01 INFO juju.environs.tools tools.go:108 reading
> tools with major.minor version 2.0
> 
> 2016-10-12 20:19:01 INFO juju.environs.tools tools.go:116 filtering
> tools by version: 2.0-beta15
> 
> 2016-10-12 20:19:01 DEBUG juju.environs.tools urls.go:109 trying
> datasource "keystone catalog"
> 
> 2016-10-12 20:19:02 DEBUG juju.environs.simplestreams
> simplestreams.go:680 using default candidate for content id
> "com.ubuntu.juju:devel:tools" are {20161007 mirrors:1.0
> content-download streams/v1/cpc-mirrors.sjson []}
> 
> 2016-10-12 20:19:03 DEBUG juju.environs imagemetadata.go:112 obtained
> image datasource "image-metadata-url"
> 
> 2016-10-12 20:19:03 DEBUG juju.environs imagemetadata.go:112 obtained
> image datasource "default cloud images"
> 
> 2016-10-12 20:19:03 DEBUG juju.environs imagemetadata.go:112 obtained
> image datasource "default ubuntu cloud images"
> 
> 2016-10-12 20:19:03 DEBUG juju.environs.bootstrap bootstrap.go:489
> constraints for image metadata lookup &{{{RegionOne
> http://10.2.1.111:35357/v3} [centos7 precise trusty win10 win2012
> win2012hv win2012hvr2 win2012r2 win2016 win2016nano win7 win8 win81
> xenial yakkety] [amd64 arm64 ppc64el s390x] released}}
> 
> 2016-10-12 20:19:03 DEBUG juju.environs.bootstrap bootstrap.go:501
> found 1 image metadata in image-metadata-url
> 
> 2016-10-12 20:19:04 DEBUG juju.environs.simplestreams
> simplestreams.go:458 index has no matching records
> 
> 2016-10-12 20:19:04 DEBUG juju.environs.bootstrap bootstrap.go:501
> found 0 image metadata in default cloud images
> 
> 2016-10-12 20:19:05 DEBUG juju.environs.simplestreams
> simplestreams.go:454 skipping index
> "http://cloud-images.ubuntu.com/releases/streams/v1/index.sjson;
> because of missing information: index file has no data for cloud
> {RegionOne http://10.2.1.111:35357/v3} not found
> 
> 2016-10-12 20:19:05 DEBUG juju.environs.bootstrap bootstrap.go:497
> ignoring image metadata in default ubuntu cloud images: index file has
> no data for cloud {RegionOne http://10.2.1.111:35357/v3} not found
> 
> 2016-10-12 20:19:05 DEBUG juju.environs.bootstrap bootstrap.go:505
> found 1 image 

upcoming change in Juju 2.0 to bootstrap arguments

2016-10-12 Thread Ian Booth
See https://bugs.launchpad.net/juju/+bug/1632919

The order of the cloud/region and controller name arguments will be swapped.

Old:

$ juju bootstrap mycontroller aws/us-east-1

New:

$ juju bootstrap aws/us-east-1 mycontroller
or now
$ juju bootstrap aws/us-east-1

Notice how controller name is optional. It will default to cloud-region.
eg

$ juju bootstrap aws
Creating Juju controller "aws-us-east-1" on aws/us-east-1
...

The only fallout I expect is for folks like OIL who use scripts: they'll have
to tweak them to swap the arguments. The bootstrap API itself is
unaffected so Python client and other API users will see no difference. It's
just a CLI change.


-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Reviews on Github

2016-09-15 Thread Ian Booth

On 16/09/16 03:50, Nate Finch wrote:
> Reviewboard goes down a couple times a month, usually from lack of disk
> space or some other BS.  According to a source knowledgeable with these
> matters, the charm was rushed out, and the agent for that machine is down
> anyway, so we're kinda just waiting for the other shoe to drop.
> 
> As for the process things that Ian mentioned, most of those can be
> addressed with a sprinkling of convention.  Marking things as issues could
> just be adding :x: to the first line (github even pops up suggestions and
> auto-completes), thusly:
> 
:x: This will cause a race condition
> 
> And if you want to indicate you're dropping a suggestion, you can use :-1:
>  which gives you a thumbs down:
> 
:-1: I ran the race detector and it's fine.
> 
> It won't give you the cumulative "what's left to fix" at the top of the
> page, like reviewboard... but for me, I never directly read that, anyway,
> just used it to see if there were zero or non-zero comments left.
>

If we want to do a trial, and we acknowledge that there are functional gaps, and
we are prepared to work around those using convention, then we should document
what those conventions are so that everyone takes a consistent approach.


> As for the inline comments in the code - there's a checkbox to hide them
> all.  It's not quite as convenient as the gutter indicators per-comment,
> but it's sufficient, I think.
> 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Reviews on Github

2016-09-15 Thread Ian Booth


On 16/09/16 08:54, Anastasia Macmood wrote:
> On 16/09/16 08:02, Ian Booth wrote:
>> Another data point - in the past, when we have had PRs which touch a lot of
>> files (eg change the import path for a dependency), review board paginates 
>> the
>> diff so it's much easier to manage, whereas I've seen github actually 
>> truncate
>> what it displays because the diff is "too large". Hopefully this will no 
>> longer
>> be an issue, or else we won't be able to review such changes in the future.
> This is perfect to reduce the size of our proposals to manageable :)
>>

The point is that that's not always possible. The example given was where we
need to update import paths due to a dependency change. That has to be done all
in one go. There are other occasions as well where sometimes a mechanical change
needs to touch a lot of files in the one PR. We just need to be sure that any RB
replacement caters for those scenarios.


-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Reviews on Github

2016-09-15 Thread Ian Booth
Another data point - in the past, when we have had PRs which touch a lot of
files (eg change the import path for a dependency), review board paginates the
diff so it's much easier to manage, whereas I've seen github actually truncate
what it displays because the diff is "too large". Hopefully this will no longer
be an issue, or else we won't be able to review such changes in the future.

On 16/09/16 07:53, Menno Smits wrote:
> Although I share some of Ian's concerns, I think the reduced moving parts,
> improved reliability, reduced maintenance, easier experience for outside
> contributors and better handling of file moves are pretty big wins. The
> rendering of diffs on Github is a whole lot nicer as well.
> 
> I'm +1 for trialling the new review system on Github for a couple of weeks
> as per Andrew's suggestion.
> 
> On 16 September 2016 at 05:50, Nate Finch <nate.fi...@canonical.com> wrote:
> 
>> Reviewboard goes down a couple times a month, usually from lack of disk
>> space or some other BS.  According to a source knowledgeable with these
>> matters, the charm was rushed out, and the agent for that machine is down
>> anyway, so we're kinda just waiting for the other shoe to drop.
>>
>> As for the process things that Ian mentioned, most of those can be
>> addressed with a sprinkling of convention.  Marking things as issues could
>> just be adding :x: to the first line (github even pops up suggestions and
>> auto-completes), thusly:
>>
>> :x: This will cause a race condition
>>
>> And if you want to indicate you're dropping a suggestion, you can use :-1:
>>  which gives you a thumbs down:
>>
>> :-1: I ran the race detector and it's fine.
>>
>> It won't give you the cumulative "what's left to fix" at the top of the
>> page, like reviewboard... but for me, I never directly read that, anyway,
>> just used it to see if there were zero or non-zero comments left.
>>
>> As for the inline comments in the code - there's a checkbox to hide them
>> all.  It's not quite as convenient as the gutter indicators per-comment,
>> but it's sufficient, I think.
>>
>> On Wed, Sep 14, 2016 at 6:43 PM Ian Booth <ian.bo...@canonical.com> wrote:
>>
>>>
>>>
>>> On 15/09/16 08:22, Rick Harding wrote:
>>>> I think that the issue is that someone has to maintain the RB and the
>>>> cost/time spent on that does not seem commensurate with the bonus
>>> features
>>>> in my experience.
>>>>
>>>
>>> The maintenance is not that great. We have SSO using github credentials so
>>> there's really no day-to-day work IIANM. As a team, we do many, many
>>> reviews
>>> per day, and the features I outlined are significant and things I (and I
>>> assume
>>> others) rely on. Don't under estimate the value in knowing why a comment
>>> was
>>> rejected and being able to properly track that. Or having review comments
>>> collapsed as a gutter indicator so you can browse the code and still know
>>> that
>>> there's a comment there. With github, you can hide the comments but
>>> there's no
>>> gutter indicator. All these things add up to a lot.
>>>
>>>
>>>> On Wed, Sep 14, 2016 at 6:13 PM Ian Booth <ian.bo...@canonical.com>
>>> wrote:
>>>>
>>>>> One thing review board does better is use gutter indicators so as not
>>> to
>>>>> interrupt the flow of reading the code with huge comment blocks. It
>>> also
>>>>> seems
>>>>> much better at allowing previous commits with comments to be viewed in
>>>>> their
>>>>> entirety. And it allows the reviewer to differentiate between issues
>>> and
>>>>> comments (ie fix this vs take note of this), plus it allows the notion
>>> of
>>>>> marking stuff as fixed vs dropped, with a reason for dropping if
>>> needed.
>>>>> So the
>>>>> github improvements are nice but there's still a large and significant
>>> gap
>>>>> that
>>>>> is yet to be filled. I for one would miss all the features reviewboard
>>>>> offers.
>>>>> Unless there's a way of doing the same thing in github that I'm not
>>> aware
>>>>> of.
>>>>>
>>>>> On 15/09/16 07:22, Tim Penhey wrote:
>>>>>> I'm +1 if we can remove the extra tools and we don't get email per
>>>>> comment.
>>>>>>
>>

Re: Reviews on Github

2016-09-14 Thread Ian Booth
One thing review board does better is use gutter indicators so as not to
interrupt the flow of reading the code with huge comment blocks. It also seems
much better at allowing previous commits with comments to be viewed in their
entirety. And it allows the reviewer to differentiate between issues and
comments (ie fix this vs take note of this), plus it allows the notion of
marking stuff as fixed vs dropped, with a reason for dropping if needed. So the
github improvements are nice but there's still a large and significant gap that
is yet to be filled. I for one would miss all the features reviewboard offers.
Unless there's a way of doing the same thing in github that I'm not aware of.

On 15/09/16 07:22, Tim Penhey wrote:
> I'm +1 if we can remove the extra tools and we don't get email per comment.
> 
> On 15/09/16 08:03, Nate Finch wrote:
>> In case you missed it, Github rolled out a new review process.  It
>> basically works just like reviewboard does, where you start a review,
>> batch up comments, then post the review as a whole, so you don't just
>> write a bunch of disconnected comments (and get one email per review,
>> not per comment).  The only features reviewboard has is the edge case
>> stuff that we rarely use:  like using rbt to post a review from a random
>> diff that is not connected directly to a github PR. I think that is easy
>> enough to give up in order to get the benefit of not needing an entirely
>> separate system to handle reviews.
>>
>> I made a little test review on one PR here, and the UX was almost
>> exactly like working in reviewboard: https://github.com/juju/juju/pull/6234
>>
>> There may be important edge cases I'm missing, but I think it's worth
>> looking into.
>>
>> -Nate
>>
>>
> 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


A couple of API changes coming in Juju beta18 this week

2016-09-08 Thread Ian Booth
Just a heads up, 3 APIs are moving to a different facade. There are no other
semantic changes beyond the move. The only externally visible difference for
end users is that the juju model-defaults command operates only on a controller
and no longer supports specifying a model using -m.

The APIs are to do with setting inherited default model values, so if you don't
care about those, don't bother reading on. These APIs are quite new so hopefully
any downstream impact will be zero or negligible.

The APIs are

ModelDefaults() (config.ModelDefaultAttributes, error)
SetModelDefaults(cloud, region string, config map[string]interface{})
UnsetModelDefaults(cloud, region string, keys ...string) error

These were on the ModelConfig facade but are now on the ModelManager facade. The
latter is a facade which is accessed via a controller endpoint rather than a
model endpoint.
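
For client maintainers, the change amounts to swapping the facade targeted by
the request. A minimal Go sketch, assuming an illustrative FacadeCaller
abstraction rather than Juju's real client plumbing:

package client

// FacadeCaller is an illustrative stand-in for the RPC plumbing a Juju
// client uses to invoke a named facade method; it is not Juju's real API.
type FacadeCaller interface {
	Call(facade, method string, args, result interface{}) error
}

// ModelDefaults fetches the inherited default model values. Before beta18
// this call targeted the "ModelConfig" facade (a model endpoint); it now
// targets "ModelManager" (a controller endpoint).
func ModelDefaults(c FacadeCaller) (map[string]interface{}, error) {
	var result map[string]interface{}
	err := c.Call("ModelManager", "ModelDefaults", nil, &result)
	return result, err
}

// SetModelDefaults sets inherited default values for the given cloud/region.
func SetModelDefaults(c FacadeCaller, cloud, region string, cfg map[string]interface{}) error {
	args := struct {
		Cloud  string
		Region string
		Config map[string]interface{}
	}{cloud, region, cfg}
	return c.Call("ModelManager", "SetModelDefaults", args, nil)
}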





-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: delayed juju beta16 until next week

2016-08-18 Thread Ian Booth
Just to provide a little more clarity on the Azure issue.

The recent Azure SDK update changed the Azure behaviour as exposed to Juju. We
were previously not waiting for machines to be marked as fully provisioned; the
SDK now does this for us. MS says this is what you must do. The effect on Juju
is that deployments take twice as long since everything is now serialised.

Andrew has an idea that we may be able to work around it but there are no
guarantees at this point. But we'll try and find a suitable workaround.


On 18/08/16 23:25, Rick Harding wrote:
> We need to delay the release of beta16 until next week as we've been busy
> breaking things and currently don't have a working Azure in our trunk.
> 
> We've updated the Azure code we use to talk to their APIs and in the
> process uncovered changes in our code that need to happen to help bring
> things back to fully functional. We've also uncovered a race in our use of
> the Azure APIs that is currently in progress. Due to this flux we've not
> had a passing build on Azure since beta15.
> 
> The team's been hard at work chasing down the updates and we are confident
> we'll have everything set for next week. We're excited because the new
> Azure tooling will allow us to improve the Azure user experience for Juju
> 2.0.
> 
> If you have a specific blocking bug that was addressed (marked
> fix-committed) this week and a daily build that does not work on Azure is
> ok for your needs please let us know and we'll force a daily build update
> for this week.
> 
> Thanks for your patience.
> 
> Rick
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Latest news about Juju master branch - upload-tools obsoleted

2016-08-15 Thread Ian Booth


On 16/08/16 12:58, Tim Penhey wrote:
> 
> 
> On 16/08/16 10:50, Ian Booth wrote:
>>
>> On 16/08/16 03:09, Nate Finch wrote:
>>> Ian, can you describe how Juju decides if it's running for a developer or
>>> an end user?  I'm worried this could trip people up who are both end users
>>> and happen to have a juju development environment.
>>>
>>
>> It's not so much Juju deciding - the use cases given were from the point of 
>> view
>> of a developer or end user.
>>
>> Juju will decide that it can automatically fallback to try to find and use a
>> local jujud (so long as the version of the jujud found matches that of the 
>> Juju
>> client being used to bootstrap or upgrade) if:
>>
>> - the Juju client version is newer than the agents running
>> - the client or agents have a build number > 0
>>
>> (the build number is 0 for released Juju agents but non zero when jujud is 
>> used
>> or built locally from source).
> 
> But this isn't entirely true is it? The build number is a horrible hack
> involving a version override file.
> 
> When I build jujud locally from source there is no version override and it is
> just the version as defined in the code I'm building.
> 

My wording was sadly suboptimal.
The agent reports a version containing a non-zero build number if uploaded or
built from source. So I was trying to refer to the version that the client had
reported to it.

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Latest news about Juju master branch - upload-tools obsoleted

2016-08-15 Thread Ian Booth

On 16/08/16 03:09, Nate Finch wrote:
> Ian, can you describe how Juju decides if it's running for a developer or
> an end user?  I'm worried this could trip people up who are both end users
> and happen to have a juju development environment.
>

It's not so much Juju deciding - the use cases given were from the point of view
of a developer or end user.

Juju will decide that it can automatically fallback to try to find and use a
local jujud (so long as the version of the jujud found matches that of the Juju
client being used to bootstrap or upgrade) if:

- the Juju client version is newer than the agents running
- the client or agents have a build number > 0

(the build number is 0 for released Juju agents but non zero when jujud is used
or built locally from source).
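
A rough sketch of that decision in Go (assumptions: a simplified version
type, and that the two bullet conditions above are alternatives; the real
logic lives in Juju's bootstrap/upgrade code and handles more cases):

package version

// Number is a simplified stand-in for Juju's binary version. Build is 0 for
// released agents and non-zero when jujud was uploaded or built from source.
type Number struct {
	Major, Minor, Patch, Build int
}

// compare returns -1, 0 or 1 as a is older than, equal to, or newer than b.
func compare(a, b Number) int {
	av := [4]int{a.Major, a.Minor, a.Patch, a.Build}
	bv := [4]int{b.Major, b.Minor, b.Patch, b.Build}
	for i := range av {
		if av[i] != bv[i] {
			if av[i] > bv[i] {
				return 1
			}
			return -1
		}
	}
	return 0
}

// canFallBackToLocalJujud reports whether juju may go looking for a locally
// built jujud: the client is newer than the running agents, or the client
// or agents carry a non-zero build number.
func canFallBackToLocalJujud(client, agents Number) bool {
	return compare(client, agents) > 0 || client.Build > 0 || agents.Build > 0
}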

The above behaviour covers the use cases previously described:

- users always deploys / upgrades released versions
- users deploy a released version and want to upgrade to a version built from
source for testing
- users deploy from source and want to hack some more and upgrade for testing
- users have a deployed from source system and then a newer released agent comes
out and they want to upgrade to that *

*generally we don't support upgrades between non-released versions, so if
there's db schema changes or whatever, you're on your own

In all the above cases, juju bootstrap or juju upgrade-juju will work without
special arguments.


-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Latest news about Juju master branch - upload-tools obsoleted

2016-08-15 Thread Ian Booth
So if you pull master you'll no longer need to use upload-tools.
Juju will Do the Right Thing* when you type:

$ juju bootstrap mycontroller aws|lxd|whatever
or
$ juju upgrade-juju

*so long as your $GOPATH/bin is in your path (as a developer).

1. As a user, you bootstrap a controller using a released client (incl betas)

Juju will look for and find a packaged agent binary via simplestreams and use that.

2. As a user, you want to upgrade a Juju system using a newer release (incl 
betas)

Juju will look for and find a packaged agent binary via simplestreams and use that.

3. As a developer, you want to deploy with a Juju built locally from source

You'll first build your work, go install github.com/juju/juju/..., then

$ juju bootstrap mycontroller lxd

4. As a developer, you want to upgrade a running system using some local changes
you've been hacking on. This works for either a system running a released
version, or a system running a development version, ie either case 1, 2 or 3 
above

You'll first build your work, go install github.com/juju/juju/..., then

$ juju upgrade-juju

It should be apparent that there's now no difference in Juju commands when
running a production system or a development one.

As a developer, you also have the "advanced" option of not building the Juju
source before bootstrapping or upgrading, and asking Juju to do it for you. This
is similar to what happens currently and can be error prone, and there's no need
anyway. But if needed:

$ juju bootstrap mycontroller lxd --build-agent
$ juju upgrade-juju --build-agent

But as I said, there's really no need for this, so long as, as a developer, you
have your $GOPATH/bin directory early in your path so that the locally built
juju gets used in preference to /usr/bin/juju

These changes are also intended to support running Juju for a single
architecture using a snap.

Please let me know if there's any questions. --upload-tools is still supported
but will be removed soon. You can use --show-logs to see what Juju is doing.
I must admit, not having to type --upload-tools all the time is, to quote Borat,
"Nice".


TODO

We still need to add a --agent-binary option to upgrade-juju so you can point it
to the new jujud you want it to use. This is to allow developers to upgrade a
system running from a snap, ie go install and then use the resulting binary
after copying it somewhere the snap can see it. There's a bit of usability to
work out there. It also allows you to send someone a jujud binary and they can
just easily use that, rather than messing around with tarballs and
simplestreams like they need to do today.





-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Some Juju CLI usability thoughts before we close 2.0

2016-08-11 Thread Ian Booth


On 11/08/16 17:46, Ian Booth wrote:
> 
> On 11/08/16 17:03, John Meinel wrote:
>> On Thu, Aug 11, 2016 at 9:30 AM, Ian Booth <ian.bo...@canonical.com> wrote:
>>
>>> A few things have been irking me with some aspects of Juju's CLI. Here's a
>>> few
>>> thoughts from a user perspective (well, me as user, YMMV).
>>>
>>> The following pain points mainly revolve around commands that operate on a
>>> controller rather than a model.
>>>
>>> eg
>>>
>>> $ juju login [-c controllername] fred
>>> $ juju logout [-c controllername]
>>>
>>
>> I would agree with 'juju login' as it really seems you have to supply the
>> controller, especially since we explicitly disallow logging into the same
>> controller twice. The only case is 'juju logout && juju login'. Or the 'I
>> have to refresh my macaroon' but in that case couldn't we just do "juju
>> login" with no args at all?
>>
>>
>>
>>>
>>> I really think the -c arg is not that natural here.
>>>
>>> $ juju login controllername fred
>>> $ juju logout controllername
>>>
>>> seem a lot more natural and also explicit, because
>>> I know without args, the "current" controller will be used...
>>> but it's not in your face what that is without running list-controllers
>>> first,
>>> and so it's too easy to logout of the wrong controller accidentally. Having
>>> positional args solves that.
>>>
>>
>> I'm fine with an optional positional arg and "juju logout" removes you from
>> the current controller. As it isn't destructive (you can always login
>> again), as long as the command lets you know *what* you just logged out of,
>> you can undo your mistake. Vs "destroy-controller" which is significantly
>> more permanent when it finishes.
>>
>>
>>
>>>
>>> The same would then apply to other controller commands, like eg add-model
>>>
>>> $ juju add-model mycontroller mymodel
>>>
>>> One thing that might be an issue for people is if they only have one
>>> controller,
>>> then
>>>
>>> $ juju logout
>>> or
>>> $ juju add-model
>>>
>>> would just work and requiring a controller name is more typing.
>>>
>>
>> I disagree on 'juju add-model', as I think we have a very good concept of
>> "current context" and adding another model to your current controller is a
>> natural event.
>>
> 
> Fair point.
> 
>>
>>>
>>> But 2 points there:
>>> 1. as we move forward, people reasonably have more than one controller on
>>> the go
>>> at any time, and being explicit about what controller you are wanting to
>>> use is
>>> a good thing
>>> 2. in the one controller case, we could simply make the controller name
>>> optional
>>> so juju logout just works
>>>
>>> We already use a positional arg for destroy-controller - it just seems
>>> natural
>>> to do it everywhere for all controller commands.
>>>
>>
>> destroy-controller was always a special case because it is an unrecoverable
>> operation. Almost everything else you can use current context and if it was
>> a mistake you can easily recover.
>>
>>
>>>
>>> Anyways, I'd like to see what others think, mine is just the perspective
>>> of one
>>> user. I'd be happy to do a snap and put it out there to get feedback.
>>>
>>> --
>>>
>>> Another issue - I would really, really like a "juju whoami" command. We
>>> used to
>>> use juju switch-user without args for that, but now it's gone.
>>>
>>> When you are staring at a command prompt and you know you have several
>>> controllers and different logins active, I really want to just go:
>>>
>>> $ juju whoami
>>> Currently active as fred@controller2
>>
>>
>> I'd say you'd want what your current user, controller and model is, as that
>> is the current 'context' for commands.
>>
> 
> Agreed, adding model would be necessary.
> 

And as Rick pointed out, this mirrors charm whoami nicely too. So we get that
level of consistency across our Juju commands.

>>
>>>
>>
>>
>>> Just to get a quick reminder of what controller I am operating on and who
>>> I am
>>> logged in as on the controller.  I know we have a way of d

Re: Some Juju CLI usability thoughts before we close 2.0

2016-08-11 Thread Ian Booth

On 11/08/16 17:03, John Meinel wrote:
> On Thu, Aug 11, 2016 at 9:30 AM, Ian Booth <ian.bo...@canonical.com> wrote:
>
> > A few things have been irking me with some aspects of Juju's CLI. Here's a
> > few
> > thoughts from a user perspective (well, me as user, YMMV).
> >
> > The following pain points mainly revolve around commands that operate on a
> > controller rather than a model.
> >
> > eg
> >
> > $ juju login [-c controllername] fred
> > $ juju logout [-c controllername]
> >
>
> I would agree with 'juju login' as it really seems you have to supply the
> controller, especially since we explicitly disallow logging into the same
> controller twice. The only case is 'juju logout && juju login'. Or the 'I
> have to refresh my macaroon' but in that case couldn't we just do "juju
> login" with no args at all?
>
>
>
> >
> > I really think the -c arg is not that natural here.
> >
> > $ juju login controllername fred
> > $ juju logout controllername
> >
> > seem a lot more natural and also explicit, because
> > I know without args, the "current" controller will be used...
> > but it's not in your face what that is without running list-controllers
> > first,
> > and so it's too easy to logout of the wrong controller accidentally. Having
> > positional args solves that.
> >
>
> I'm fine with an optional positional arg and "juju logout" removes you from
> the current controller. As it isn't destructive (you can always login
> again), as long as the command lets you know *what* you just logged out of,
> you can undo your mistake. Vs "destroy-controller" which is significantly
> more permanent when it finishes.
>
>
>
> >
> > The same would then apply to other controller commands, like eg add-model
> >
> > $ juju add-model mycontroller mymodel
> >
> > One thing that might be an issue for people is if they only have one
> > controller,
> > then
> >
> > $ juju logout
> > or
> > $ juju add-model
> >
> > would just work and requiring a controller name is more typing.
> >
>
> I disagree on 'juju add-model', as I think we have a very good concept of
> "current context" and adding another model to your current controller is a
> natural event.
>

Fair point.

>
> >
> > But 2 points there:
> > 1. as we move forward, people reasonably have more than one controller on
> > the go
> > at any time, and being explicit about what controller you are wanting to
> > use is
> > a good thing
> > 2. in the one controller case, we could simply make the controller name
> > optional
> > so juju logout just works
> >
> > We already use a positional arg for destroy-controller - it just seems
> > natural
> > to do it everywhere for all controller commands.
> >
>
> destroy-controller was always a special case because it is an unrecoverable
> operation. Almost everything else you can use current context and if it was
> a mistake you can easily recover.
>
>
> >
> > Anyways, I'd like to see what others think, mine is just the perspective
> > of one
> > user. I'd be happy to do a snap and put it out there to get feedback.
> >
> > --
> >
> > Another issue - I would really, really like a "juju whoami" command. We
> > used to
> > use juju switch-user without args for that, but now it's gone.
> >
> > When you are staring at a command prompt and you know you have several
> > controllers and different logins active, I really want to just go:
> >
> > $ juju whoami
> > Currently active as fred@controller2
>
>
> I'd say you'd want what your current user, controller and model is, as that
> is the current 'context' for commands.
>

Agreed, adding model would be necessary.

>
> >
>
>
> > Just to get a quick reminder of what controller I am operating on and who
> > I am
> > logged in as on the controller.  I know we have a way of doing that via
> > list
> > controllers, but if there's a few, or even if not, you still need to scan
> > your
> > eyes down a table and look for the one with the * to see the current one
> > and then
> > scan across to see the user etc. It's all a lot harder than just a
> > whoami
> > command IMO.
> >
> > --
> >
> > We will need a juju shares command to show who has access to a controller,
> > now
> > that we have controller permissions login, addmodel, superuser.
> >
> > For models, we suppo

Some Juju CLI usability thoughts before we close 2.0

2016-08-10 Thread Ian Booth
A few things have been irking me with some aspects of Juju's CLI. Here's a few
thoughts from a user perspective (well, me as user, YMMV).

The following pain points mainly revolve around commands that operate on a
controller rather than a model.

eg

$ juju login [-c controllername] fred
$ juju logout [-c controllername]

I really think the -c arg is not that natural here.

$ juju login controllername fred
$ juju logout controllername

seem a lot more natural and also explicit, because
I know without args, the "current" controller will be used...
but it's not in your face what that is without running list-controllers first,
and so it's too easy to logout of the wrong controller accidentally. Having
positional args solves that.

The same would then apply to other controller commands, like eg add-model

$ juju add-model mycontroller mymodel

One thing that might be an issue for people is if they only have one controller,
then

$ juju logout
or
$ juju add-model

would just work and requiring a controller name is more typing.

But 2 points there:
1. as we move forward, people reasonably have more than one controller on the go
at any time, and being explicit about what controller you are wanting to use is
a good thing
2. in the one controller case, we could simply make the controller name optional
so juju logout just works

We already use a positional arg for destroy-controller - it just seems natural
to do it everywhere for all controller commands.

Anyways, I'd like to see what others think, mine is just the perspective of one
user. I'd be happy to do a snap and put it out there to get feedback.

--

Another issue - I would really, really like a "juju whoami" command. We used to
use juju switch-user without args for that, but now it's gone.

When you are staring at a command prompt and you know you have several
controllers and different logins active, I really want to just go:

$ juju whoami
Currently active as fred@controller2

Just to get a quick reminder of what controller I am operating on and who I am
logged in as on the controller.  I know we have a way of doing that via list
controllers, but if there's a few, or even if not, you still need to scan your
eyes down a table and look for the one with the * to see the current one and then
scan across to see the user etc. It's all a lot harder than just a whoami
command IMO.

-- 

We will need a juju shares command to show who has access to a controller, now
that we have controller permissions login, addmodel, superuser.

For models, we support:

$ juju shares -m model
$ juju shares (for the default model)

What do we want for controller shares?

$ juju shares-controller  ?

which would support positional arg

$ juju shares-controller mycontroller   ?

--

On the subject of shares, the shares command shows all users with access to a
model (or soon a controller as per above). That's great for admins to see who
they are sharing their stuff with. What I'd like as a user is a command to tell
me what level of access I have to various controllers and models. I'd like this
in list-controllers and list-models.

$ juju list-controllers

CONTROLLER       MODEL    USER         CLOUD/REGION   ACCESS
fredcontroller*  foo      fred@local                  addmodel
ian              default  admin@local  lxd/localhost  superuser

$ juju list-models

MODEL  OWNER       STATUS     ACCESS  LAST CONNECTION
foo*   fred@local  available  write   5 minutes ago

The above would make it much easier to see if I could add a model or deploy an
application etc. And I don't get to see who else has access like with juju
shares, just my own access levels. Thoughts?











-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju and snappy implementation spike - feedback please

2016-08-09 Thread Ian Booth
I personally like the idea that the snap could use a juju-home interface to
allow access to the standard ~/.local/share/juju directory; thus allowing a snap
and regular Juju to be used interchangeably (at least initially). This will
allow the use case "hey, try my juju snap and you can use your existing
settings". But isn't it verboten for snaps to access dot directories in user
home in any way, regardless of what any interface says? We could provide an
import tool to copy from ~/.local/share/juju to ~/snap/blah...
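
For illustration, a manual version of such an import might look like this (a
sketch only, assuming a strictly confined snap whose HOME is remapped to
~/snap/<snap-name>/current; the <snap-name> path segment is a placeholder):

$ mkdir -p ~/snap/<snap-name>/current/.local/share
$ cp -r ~/.local/share/juju ~/snap/<snap-name>/current/.local/share/juju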

But in the other case, using a personal snap and sharing settings with the
official Juju snap - do we know what the official snappy story is around this
scenario? I can't imagine this is the first time it's come up?


On 09/08/16 17:27, John Meinel wrote:
> On Aug 9, 2016 1:06 AM, "Nicholas Skaggs" 
> wrote:
>>
>>
>>
>> On Mon, Aug 8, 2016 at 11:49 AM, John Meinel 
> wrote:
>>>
>>> If we are installing in '--devmode' don't we have access to the
> unfiltered $HOME directory if we look for it? And we could ask for an
> interface which is to JUJU_HOME that would give us access to just
> $HOME/.local/share/juju
>>>
>>>
>>> We could then copy that information into the normal 'home' directory of
> the snap. That might give a better experience than having to import your
> credentials anytime you want to just evaluate a dev build of juju.
>>
>> I agree this gets more difficult with the idea of sharing builds -- as
> you say, you have to re-add your credentials for each new developer. In
> regards to your thoughts on --devmode, it does give you greater access, but
> some things are still constrained. The HOME interface doesn't allow access
> to dot files or folders by default. So it's not useful in this instance. If
> juju were to change where it stores its configuration files (aka, not in a
> dotfolder) this technically becomes not a problem as the current home
> interface would allow this.
> 
> Sure. That's why I mention "We're in dev mode now, and can ask for a
> JUJU_HOME interface vs the existing HOME one." We can also just ask for a
> "give me the root filesystem" interface that doesn't get connected by
> default.
> 
> I think not being able to publish your version of a snap and have it work
> with the "standard" version of a snap is going to be a general issue for
> anyone using snaps for development. So maybe it's a general snap property
> that can give you access to a "named" common directory.
> 
> John
> =:->
> 
>>>
>>>
>>> AIUI, the 'common' folder for a snap is only for different versions of
> the exact same snap, which means that if I publish 'juju-jameinel' it won't
> be able to share anything with 'juju-wallyworld' nor the official 'juju',
> so there isn't any reason to use it.
>>>
>>> I don't know exactly how snap content interfaces work, but it might be
> interesting to be able to share the JUJU_HOME between snaps (like they
> intend to be able to share a "pictures" or "music" directory).
>>>
>>> If we *don't* share them, then we won't easily be able to try a new Juju
> on an existing controller. (If I just want you to see how the new 'juju
> status' is formatted, you'd rather run it on a controller that has a bunch
> of stuff running.)
>>
>> It's worth mentioning / filing a bug about our needs with the snapcraft
> folks to see what options might exist. I've started conversations a few
> weeks ago and solved or got good bugs in on other juju issues. I think they
> understand the application config limitations / issues, so we can push for
> a resolution.
> 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Juju and snappy implementation spike - feedback please

2016-08-08 Thread Ian Booth
Hi folks

The below refers to work in a branch, not committed to master.

TL;DR: I've made some changes to Juju so that a custom build can be easily
snapped and shared. The snap is still required to be run in devmode until new
interfaces are done.

TL;DR 2: The way upload-tools works has been changed and this will affect our QA
scripts (but I've left the old upload-tools in place for backwards 
compatibility).

This is an experiment - I have a branch which I plan to propose for merging into
master. The main area of feedback needed is:
- the replacement of upload-tools
- how to do agent upgrades in a snappy world (see end of email).

https://github.com/wallyworld/juju/tree/snappy-support

To try it out (on amd64)

$ snap install juju-wallyworld --edge --devmode
$ /snap/bin/juju-wallyworld.juju bootstrap mycontroller lxd
or
$ /snap/bin/juju-wallyworld.juju bootstrap mycontroller aws
or ...

I'm just using a super simple snapcraft.yaml file (thanks for the godeps plugin
by the way, awesome). The interesting bits are the changes in my Juju branch.

Limitation: multi-arch. Using a non-released Juju from the snap does not support
bootstrapping a controller on an arch different to that on which the snap was
compiled.  This is the same as is the case now anyway with upload-tools.

Note: I have made a change so that the first time juju runs, update-clouds is
called. This ensures that when Juju is run from a snap, the latest information
is available for bootstrap.

$ juju bootstrap ...
Since Juju 2 is being run for the first time, downloading latest cloud 
information.
Fetching latest public cloud list...
Your list of public clouds is up to date, see `juju clouds`.
Creating Juju controller
...

The aims of this work
-
1. Make it easy to share a complete custom Juju build (client and agent) with
others (demo/try new features etc).

2. Allow Juju to be snapped so that an agent is included in the snap -
simplestreams is supported but not *required*.
(only a single arch right now)

3. Change the semantics and syntax of upload-tools to IMO "do the right thing".

4. Improved developer experience

Changes to upload-tools
---
- "upload-tools" is replaced by "build-agent"
- messages referring to "tools" now refer to "agent binary"
- "build-agent" is only *required* if you need to actually build the jujud agent
binary from source; the default behaviour is to use a jujud co-located with the
juju binary so long as the versions match *exactly*. This is normally what you
have as a developer anyway.

The practical implications are shown below.

Main Use Cases
--
1. As a developer, I want to share a custom Juju build with others to get 
feedback.

Developer:
hack, hack, hack on Juju
$ snapcraft
$ snapcraft push <snap-name>.snap --release edge

End user:
$ snap install <snap-name> --edge --devmode
$ /snap/bin/<snap-name>.juju autoload-credentials (or add-credential, if needed)
$ /snap/bin/<snap-name>.juju add-cloud (if needed)
$ /snap/bin/<snap-name>.juju bootstrap <controller> <cloud>

If the intent is just to try stuff on LXD, then the add-credential and add-cloud
steps above can be skipped:

$ snap install <snap-name> --edge --devmode
$ /snap/bin/<snap-name>.juju bootstrap mycontroller lxd

2. As a developer, I want to hack on Juju and try out my changes.

hack, hack, hack
$ go install github.com/juju/juju/...
$ juju bootstrap mycontroller lxd

Note: no build-agent (upload-tools) is needed.

3. Packaging released version of Juju
This needs some work and consultation. It may not be feasible. How to handle
agent binaries for different os/arch etc.
Maybe we just want to officially package a juju client snap that behaves just
like bootstrap today - no jujud agent binary included in snap, the juju client
creates the controller and pulls agent binaries from simplestreams.

About upload tools
--
So, the need to specify --upload-tools is now almost eliminated. And the name
has been changed to --build-agent because that's what it does. (and because the
"tools" terminology is something we need to move away from).

When Juju bootstraps, it does the following:
- look in simplestreams for a compatible agent binary
- look in the same directory as the juju client for an agent binary with the
exact same version
- build an agent binary from source

It stops looking when it finds a suitable binary.
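
Sketched in Go, with the three finder functions as hypothetical stand-ins for
Juju's actual lookup code:

package bootstrap

import "errors"

// AgentBinary is an illustrative stand-in for a located jujud binary.
type AgentBinary struct {
	Path    string
	Version string
}

// Hypothetical finders, tried in order; each returns an error when its
// method turns up nothing suitable.
var (
	findInSimplestreams  func(version string) (AgentBinary, error)
	findColocatedJujud   func(version string) (AgentBinary, error)
	buildAgentFromSource func() (AgentBinary, error)
)

// locateAgent mirrors the documented search order and stops at the first
// success: simplestreams, then a jujud next to the juju client with the
// exact same version, then building from source.
func locateAgent(clientVersion string) (AgentBinary, error) {
	if b, err := findInSimplestreams(clientVersion); err == nil {
		return b, nil
	}
	if b, err := findColocatedJujud(clientVersion); err == nil {
		return b, nil
	}
	if b, err := buildAgentFromSource(); err == nil {
		return b, nil
	}
	return AgentBinary{}, errors.New("no suitable agent binary found")
}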

As a developer, you would normally hack on Juju and then run go install. And
then run the resulting juju client. So everything would be in place to Just Work
without additional bootstrap args. But if for some reason you needed the agent
actually go built, you can still do so with --build-agent.

Developers: upgrading the agent binary in a snappy world

So, as a developer, you're testing your snap and want to make a change and see
what happens. Now, one way would be to:
- hack hack hack
- make a new snap
- publish to edge
- install new snap
- jujusnap.juju upgrade-juju

which would pick up the latest 

Re: Windows and --race tests are now gating

2016-07-10 Thread Ian Booth
Turning on gating for Windows tests before all tests were passing is premature
and is now blocking us from landing critical fixes for beta12 that we need to
release this week for inclusion into xenial. With the race tests, we got all of
those passing before turning on gating. We need to do the same for the Windows
tests. We need to deactivate gating on Windows at this stage. Of course we need
to fix the tests, but turning on gating before that is done is counterproductive
given what we need to get done this week.

On 08/07/16 03:26, Aaron Bentley wrote:
> Hi Juju developers,
> 
> As requested by juju-core, we have added --race and Windows unit tests
> to gated landings.  These tests are run concurrently, not sequentially,
> but all must complete before code can be landed.
> 
> As a practical matter, this means that landings are now impossible until
> the Windows and --race tests can pass.
> 
> The output from this initial version is a bit crude-- it will tell you
> which tasks failed (e.g. "windows"), and you then need to look at the
> corresponding logs under the Jenkins artifacts.  We aim to improve this
> soon.
> 
> Aaron
> 
> 
> 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Beta9

2016-06-17 Thread Ian Booth
As well as the user visible things like the great new status output and rename
of service to application etc, beta 9 contains a lot of below the waterline
changes geared towards our future feature work. We should start to see the
benefit of that work in the next beta and upcoming release candidates.

There's also more to come on the usability front. We'll soon have a nice
interactive bootstrap experience which will guide users through the steps of
getting a controller up and running, and we're continuing the efforts to improve
error messages, CLI help, and other user facing text.

On 17/06/16 21:36, Mark Shuttleworth wrote:
> Hi all
> 
> Just to say, initial impressions of beta9 are great, the status output
> cleanup is super, thank you!
> 
> Mark
> 
> 
> 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Automatic commit squashing

2016-06-16 Thread Ian Booth


On 16/06/16 19:04, David Cheney wrote:
> Counter suggestion: the bot refuses to accept PR's that contain more
> than one commit, then it's up to the submitter to prepare it in any
> way that they feel appropriate.
>

Please no. I do not want to be forced to alter my local history.

I was hopeful that having the landing bot / github squash commits would satisfy
those people who did not want to use git log --first-parent to present a
sanitised view of commits, but allow me to retain the history in my branches
locally so I could look back on what I did and when and how (if needed).

If the default github behaviour is not sufficient, perhaps we can add some
tooling in the merge bot to do the squashing prior to merging.


> On Thu, Jun 16, 2016 at 6:44 PM, roger peppe  
> wrote:
>> Squashed commits are nice, but there's something worth watching
>> out for: currently the merge commit is committed with the text
>> that's in the github PR, but when a squashed commit is made, this
>> text is ignored and only the text in the actual proposed commit ends up
>> in the history. This surprised me (I often edit the PR description
>> as the review continues) so worth being aware of, I think.
>>
>>   cheers,
>> rog.
>>
>> On 16 June 2016 at 02:12, Menno Smits  wrote:
>>> Hi everyone,
>>>
>>> Following on from the recent thread about commit squashing and commit
>>> message quality, the idea of automatically squashing commit at merge time
>>> has been raised. The idea is that the merge bot would automatically squash
>>> commits for a pull request into a single commit, using the PR description as
>>> the commit message.
>>>
>>> With this in place, developers can commit locally using any approach they
>>> prefer. The smaller commits they make as they work won't be part of the
>>> history the team interacts with in master.
>>>
>>> When using autosquashing the quality of pull request descriptions should get
>>> even more scrutiny during reviews. The quality of PR descriptions is already
>>> important as they are used for merge commits but with autosquashing in place
>>> they will be the *only* commit message.
>>>
>>> Autosquashing can be achieved technically by either having the merge bot do
>>> the squashing itself, or by taking advantage of Github's feature to do this
>>> (currently in preview mode):
>>>
>>> https://developer.github.com/changes/2016-04-01-squash-api-preview/
>>>
>>> We need to ensure that the squashed commits are attributed to the correct
>>> author (i.e. not jujubot). I'm not sure what we do with pull requests which
>>> contain work from multiple authors. There doesn't seem to be an established
>>> approach for this.
>>>
>>> Thoughts?
>>>
>>> - Menno
>>>
>>>
>>>
>>>
>>> --
>>> Juju-dev mailing list
>>> Juju-dev@lists.ubuntu.com
>>> Modify settings or unsubscribe at:
>>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>>
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at: 
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
> 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju 2.0-beta9 ETA

2016-06-13 Thread Ian Booth


On 13/06/16 22:58, Rick Harding wrote:
> On Sat, Jun 11, 2016 at 6:32 PM Ian Booth <ian.bo...@canonical.com> wrote:
> 
>> We are also storing any config specified in clouds.yaml separately. These
>> items,
>> such as apt-mirror, are shared between models and are used by default if
>> not
>> specified in a hosted model. But you can override any such items as well
>> simply
>> by setting them on the model. For now, the semantics of this change are
>> transparent - get-model-config will show the accumulation of shared and
>> model
>> specific settings. But we are looking to add a command to show/set shared
>> config. Thus you will be able to say update a http-proxy setting across all
>> hosted models within a controller with one command:
>>
>> juju set-shared-config http-proxy=foo
>>
>> NB command name to be decided.
>>
> 
> Ian, can we setup some time to chat on this. I'm curious if, rather than a
> command to explicitly "set everywhere" we follow the model that the config
> is inherited unless overridden for a specific model. Then by setting it on

What you say above is how it will work. You bootstrap a controller and any
config specified in clouds.yaml for that cloud becomes the default inherited
config for all hosted models add to the controller. But you can then choose to
set a config value on your hosted model, and that will override anything that
was being used as the default.

> the controller all models would get it. If you want it set on a specific
> model, you'd set it on the model. In that way there'd not be a third/new
> command for setting config.
> 

"Setting it on the controller" - that's what we are proposing. Once you have
bootstrapped the controller and the shared default config for hosted models has
been set up (by virtue of the settings in clouds.yaml), you then need a way to
alter that shared config. Is that what you mean? What command would you like for
that? We have

$ juju set-model-config foo=bar

That sets foo on the current model.  Or

$ juju -m mymodel set-model-config foo=bar

operates on model mymodel.

The above are model commands. So we need a way to set foo=bar on the controller
itself (ie update the shared controller wide config). What are you proposing?
Did you intend that setting foo on the controller model would satisfy the
requirement? That seems to be wrong for 2 reasons:

1. It's a model not a controller
2. The controller model can be used to host applications (eg nagios), and as
such the controller model settings may well need to be set in and of
themselves, and to conflate those with default controller-side config seems
wrong.

Maybe I'm thinking wrongly, but I make a very clear distinction in my mind
between the controller and its models. There should be separate commands for
managing controller artifacts, including  ACLs, vs model artifacts.

Speaking of ACLs, the same distinction applies. You want to manage access to the
controller - who can create models, who can share models, who can delete models
not their own, who can register users etc - vs model level operations - who can
deploy applications etc. And so again, the controller model permissions are
different semantically to the controller permissions. You can manage who can
create applications in the controller model, which is different to an operation
on the controller itself like registering a user. You may grant fred access to
the controller model, but not the controller itself.

Or maybe I'm misunderstanding what you mean?






-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju 2.0-beta9 ETA

2016-06-11 Thread Ian Booth


On 12/06/16 02:30, Dean Henrichsmeyer wrote:
> On Fri, Jun 10, 2016 at 1:20 PM, Cheryl Jennings <
> cheryl.jenni...@canonical.com> wrote:
> 
> 
>> Some of the great things coming in beta9 include:
>> - Separation of controller config vs. model config
>>
> 
> Will this one have user-facing changes or is it internal?
> 

The separation of controller config is internal. Controller config includes:
- ca cert
- api port
- mongo port

These items are not used by Juju models at all but currently show up when you do
a juju get-model-config. In beta 10, this will not be the case. So from that
aspect, it's user facing but it means get-model-config will be a lot more user
friendly since you won't have a wall of text for a cert you don't care about.
There will be a separate get-controller-config command to see those items. They
are typically immutable.

We are also storing any config specified in clouds.yaml separately. These items,
such as apt-mirror, are shared between models and are used by default if not
specified in a hosted model. But you can override any such items as well simply
by setting them on the model. For now, the semantics of this change are
transparent - get-model-config will show the accumulation of shared and model
specific settings. But we are looking to add a command to show/set shared
config. Thus you will be able to say update a http-proxy setting across all
hosted models within a controller with one command:

juju set-shared-config http-proxy=foo

NB command name to be decided.

The other change for beta 10 will be to no longer store in model config
transient settings like bootstrap timeout which are not relevant once a
controller is running. This will also remove clutter from model settings.




-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Model config

2016-06-08 Thread Ian Booth


On 08/06/16 23:59, roger peppe wrote:
> On 8 June 2016 at 10:41, Andrew Wilkins  wrote:
>> Hi folks,
>>
>> We're in the midst of making some changes to model configuration in Juju
>> 2.0, separating out things that are not model specific from those that are.
>> For many things this is very clear-cut, and for other things not so much.
>>
>> For example, api-port and state-port are controller-specific, so we'll be
>> moving them from model config to a new controller config collection. The end
>> goal is that you'll no longer see those when you type "juju
>> get-model-config" (there will be a separate command to get controller
>> attributes such as these), though we're not quite there yet.
> 
> Interesting - seems like a good change.
> 
> Will this change the internal and API representations too, so there
> are two groups
> of mutually-exclusive attributes? Does this also mean that the

Internally there will be three groups of mutually exclusive attributes:
- controller
- cloud
- model

Initially, we'll maintain internal API compatibility by combining all these to
produce the result of state.ModelConfig()

We'll then be able to consider things like config inheritance / overrides etc.
eg if cloud config (specified in the clouds.yaml file) defines an apt-mirror,
should we allow a model to also have that value, which will take precedence over
the cloud value.
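
A minimal sketch of that combination step, assuming simple map-valued
attribute groups and that more specific groups win (model over cloud over
controller; the precedence ordering here is an assumption consistent with the
inheritance described above, not Juju's final behaviour):

package config

// mergeModelConfig layers the three mutually exclusive attribute groups so
// that more specific groups override more general ones.
func mergeModelConfig(controller, cloud, model map[string]interface{}) map[string]interface{} {
	merged := make(map[string]interface{})
	for _, layer := range []map[string]interface{}{controller, cloud, model} {
		for k, v := range layer {
			merged[k] = v // later, more specific layers win
		}
	}
	return merged
}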

> really-not-very-nice
> ConfigSkeleton API method will go too?
> 

I hope so. But we're rushing to get everything done for beta9 and are focusing
first on the data model since it will be harder to upgrade if that's not right
first up.

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: juju2: format of clouds.yaml for juju add-cloud

2016-05-03 Thread Ian Booth


On 03/05/16 23:09, Andreas Hasenack wrote:
> On Wed, Apr 20, 2016 at 7:07 PM, Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
> 
>> On Thu, Apr 21, 2016 at 2:44 AM Andreas Hasenack 
>> wrote:
>>
>>> Hi,
>>>
>>> I was trying to add another "cloud" so that I could have multiple MAAS
>>> servers available to bootstrap on, without having to type the MAAS IP
>>> every time in the bootstrap command line, and pass --credential.
>>>
>>> Some reading led me to juju add-cloud, but the documentation only has
>>> examples for openstack clouds, like:
>>>
>>> clouds:
>>>   <cloud-name>:
>>>     type: <cloud-type>
>>>     regions:
>>>       <region-name>:
>>>         endpoint: <endpoint-url>
>>>     auth-types: <[access-key, oauth, userpass]>
>>>
>>>
>>> That does not translate immediately to a MAAS configuration. I asked for
>>> help on IRC and mgz provided me with this syntax:
>>>
>>> clouds:
>>>   some-name:
>>>     type: maas
>>>     auth-types: [oauth1]
>>>     endpoint: 'http://<maas-ip>/MAAS/'
>>>
>>>
>>> Are there other options that could be used here, specific to the "maas"
>>> type? What about other cloud types, what changes in this template?
>>>
>>
>> Everything that you can use is used here:
>> http://streams.canonical.com/juju/public-clouds.syaml. So the things in
>> there of note are "storage-endpoint" and "regions".
>>
>>
> 
> What's "domain-name"?
> andreas@nsn7:~$ juju add-credential cistack
>   credential name: cistack
>   auth-type: userpass
>   username: andreas
>   password:
>   tenant-name: andreas
>   domain-name: ?
> credentials added for cloud cistack
> 
> It's not used in http://streams.canonical.com/juju/public-clouds.syaml, nor
> is it documented in
> https://jujucharms.com/docs/devel/clouds#specifying-additional-clouds. I

The syaml file referred to above is for cloud definitions. The command being run
is for adding a credential. The credential data model is different to clouds.
The type of cloud though (eg openstack vs aws vs google) determines what
credential attributes are valid. "domain-name" is used for keystone v3
authentication. It
is optional and not needed for keystone v2. Whether to enter it or not depends
entirely on your openstack setup.

> can take a guess (DNS domain name), but I don't know where and how it's
> used. juju1 didn't have that, and nor does the novarc file given to me by
> horizon.
> 

Yes, only Juju v2 supports keystone 3.

As a reminder, the release notes sent out with each beta explain it:

https://jujucharms.com/docs/devel/temp-release-notes#keystone-3-support-in-openstack

### Keystone 3 support in Openstack.

Juju now supports Openstack with Keystone Identity provider V3. Keystone
3 brings a new attribute to our credentials, "domain-name"
(OS_DOMAIN_NAME) which is optional. If "domain-name" is present (and
user/password too) juju will use V3 authentication by default. In other
cases where only user and password is present, it will query Openstack
as to what identity providers are supported, and their endpoints. V3
will be tried and, if it works, set as the identity provider or else it
will settle for V2, the previous standard.
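
For reference, a sketch of what the corresponding credentials.yaml entry might
look like for the add-credential transcript above (values in angle brackets
are placeholders; domain-name can simply be omitted for Keystone v2):

credentials:
  cistack:
    andreas:
      auth-type: userpass
      username: andreas
      password: <password>
      tenant-name: andreas
      domain-name: <domain-name>  # optional; presence triggers Keystone v3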

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Advance notice - removal of local repo URLs committed to master

2016-04-18 Thread Ian Booth
Hi folks

We communicated back in early March that Juju 2.0 would no longer support local
charms deployed using a local charm repository and local charm URLs like
local:trusty/mysql. The final piece has landed in master, which is support for
local bundles to declare their contained charms to as disk paths.

You can now do something like this for bundles:

series: xenial
services:
wordpress:
charm: ./wordpress
num_units: 1
series: trusty
mysql:
charm: ./mysql
num_units: 1
relations:
- ["wordpress:db", "mysql:server"]


Note the series attributes. These are required if the charm does not yet define
any default series in metadata or you want to use a series different to the
default. Either the bundle default series will be used ("xenial" for the mysql
service above) or the service specific one will be ("trusty" for the wordpress
service above).

With the above changes, the JUJU_REPOSITORY env var is no longer supported, nor
is the --repository deploy argument. Just specify everything using a local path.
And construct your local bundles also using local paths for the charms (or store
paths also).
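
For example (a sketch; the charm directory and bundle file names are
placeholders, and --series is only needed when the charm metadata doesn't
pin one):

$ juju deploy ./mysql --series trusty
$ juju deploy ./bundle.yaml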

This work is now in master if you want to try it out early, otherwise it will be
available soon with the 2.0 release candidate following beta4.




-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Unable to kill-controller

2016-04-04 Thread Ian Booth


On 05/04/16 11:12, Andrew Wilkins wrote:
> On Mon, Apr 4, 2016 at 8:32 PM Rick Harding 
> wrote:
> 
>> On Sun, Apr 3, 2016 at 6:56 PM Andrew Wilkins <
>> andrew.wilk...@canonical.com> wrote:
>>
>>> In a non-beta release we would make sure that the config changes aren't
>>> backwards incompatible.
>>>
>>
>> I think this is the key thing. I think that kill-controller is an
>> exception to this rule. I think we should always at least give the user the
>> ability to remove their stuff and start over with the new alpha/beta/rc
>> release. I'd like to ask us to explore making kill-controller an exception
>> to this policy and that if tests prove we can't bootstrap on one beta and
>> kill with trunk that it's a blocking bug for us.
>>
> 
> Generally agreed, but in this case I made the choice of improving the
> quality of the code base overall at the cost of breaking kill-controller in
> between betas. I think it's fair to have a temporary annoyance for
> developers and early adopters (of a beta only!) to improve the quality in
> the long term. Major, breaking versions don't come around very often, so
> we're trying to wipe the slate as clean as possible. The alternative is we
> continue building up cruft forever so we could support that one edge case
> that existed for 5 minutes.
>

To backup what Andrew said, we had the choice of an annoyance between betas for
early adopters/testers, vs a much larger effort and cost to develop extra code
and tests to support a very temporary edge case. We'd rather put our at-capacity
development effort towards finishing features for the release. Having said that,
we should have included in the release notes an item to inform people that any
beta2 environments could only be killed with beta2 clients. We'll do better
communicating those beta limitations next time.


-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Charm series in bundles (Was Re: Juju 2.0 and local charm deployment)

2016-03-29 Thread Ian Booth
Older URL format is what is needed until the change lands (targeted for beta4).
The URL based format for bundle charms is all that is supported by the original
local bundles work. The upcoming feature drop fixes that, as well as removing
the support for local charm URLs - all local charms, whether inside bundles or
deployed using the CLI, will be required to be specified using a file path.

On 29/03/16 15:57, Rick Harding wrote:
> So this means the older format should work? Antonio, have you poked through
> that thread at the original working setup for local charms?
> 
> On Mon, Mar 28, 2016 at 9:45 PM Antonio Rosales <
> antonio.rosa...@canonical.com> wrote:
> 
>>
>>
>> On Monday, March 28, 2016, Ian Booth <ian.bo...@canonical.com> wrote:
>>
>>> Hey Antonio
>>>
>>> I must apologise - the changes didn't make beta3 due to all the work
>>> needed to
>>> migrate the CI scripts to test the new hosted model functionality; we ran
>>> out of
>>> time to be able to QA the local bundle changes.
>>>
>>> I expect this work will be done for beta4.
>>
>>
>> Completely understood. I'll retest with Beta 4. Thanks for the update.
>>
>> -Antonio
>>
>>
>>>
>>>
>>>
>> On 29/03/16 11:04, Antonio Rosales wrote:
>>>> + Juju list for others awareness
>>>>
>>>>
>>>> On Thu, Mar 10, 2016 at 1:53 PM, Ian Booth <ian.bo...@canonical.com>
>>> wrote:
>>>>> Thanks Rick. Trivial change to make. This work should be in beta3 due
>>> next week.
>>>>> The work includes dropping support for local repositories in favour of
>>> path
>>>>> based local charm and bundle deployment.
>>>>
>>>> Ian,
>>>> First thanks for working on this feature. Second, I tried this for a
>>>> local ppc64el deploy which is behind a firewall, and thus local charms
>>>> are a good way forward. I may have got the syntax incorrect and thus
>>>> wanted to confirm here. What I did was is at:
>>>> http://paste.ubuntu.com/15547725/
>>>> Specifically, I set the the charm path to something like:
>>>> charm: /home/ubuntu/charms/trusty/apache-hadoop-compute-slave
>>>> However, I got the following error:
>>>> ERROR cannot deploy bundle: cannot resolve URL
>>>> "/home/ubuntu/charms/trusty/apache-hadoop-compute-slave": charm or
>>>> bundle URL has invalid form:
>>>> "/home/ubuntu/charms/trusty/apache-hadoop-compute-slave"
>>>>
>>>> This is on the latest beta3:
>>>> 2.0-beta3-xenial-ppc64el
>>>>
>>>> Any suggestions?
>>>>
>>>> -thanks,
>>>> Antonio
>>>>
>>>>
>>>>>
>>>>> On 10/03/16 23:37, Rick Harding wrote:
>>>>>> Thanks Ian, after thinking about it I think what we want to do is
>>> really
>>>>>> #2. The reasoning I think is:
>>>>>>
>>>>>> 1) we want to make things consistent. The CLI experience is: present a
>>> charm
>>>>>> and override series with --series=
>>>>>> 2) more consistent, if you do it with local charms you can always do
>>> it
>>>>>> 3) we want to encourage folks to drop series from the charmstore urls
>>> and
>>>>>> worry less about series over time. Just deploy X and let the charm
>>> author
>>>>>> pick the default best series. I think we should encourage this in the
>>> error
>>>>>> message for #2. "Please remove the series section of the charm url"
>>> or the
>>>>>> like when we error on the conflict, pushing users to use the series
>>>>>> override.
>>>>>>
>>>>>> Uros, Francesco, this brings up a point that I think for multi-series
>>>>>> charms we want the deploy cli snippets to start to drop the series
>>> part of
>>>>>> the url as often as we can. If the url doesn't have the series
>>> specified,
>>>>>> e.g. jujucharms.com/mysql then the cli command should not either.
>>> Right now
>>>>>> I know we add the series/revision info and such. Over time we want to
>>> try
>>>>>> to get to as simple a command as possible.
>>>>>>
>>>>>> On Thu, Mar 10, 2016 at 7:23 AM Ian Booth <ian.bo...@canonical.com>
>>> wrote

Re: Charm series in bundles (Was Re: Juju 2.0 and local charm deployment)

2016-03-28 Thread Ian Booth
Hey Antonio

I must apologise - the changes didn't make beta3 due to all the work needed to
migrate the CI scripts to test the new hosted model functionality; we ran out of
time to be able to QA the local bundle changes.

I expect this work will be done for beta4.

On 29/03/16 11:04, Antonio Rosales wrote:
> + Juju list for others awareness
> 
> 
> On Thu, Mar 10, 2016 at 1:53 PM, Ian Booth <ian.bo...@canonical.com> wrote:
>> Thanks Rick. Trivial change to make. This work should be in beta3 due next 
>> week.
>> The work includes dropping support for local repositories in favour of path
>> based local charm and bundle deployment.
> 
> Ian,
> First thanks for working on this feature. Second, I tried this for a
> local ppc64el deploy which is behind a firewall, and thus local charms
> are good way forward. I may have got the syntax incorrect and thus
> wanted to confirm here. What I did was is at:
> http://paste.ubuntu.com/15547725/
> Specifically, I set the the charm path to something like:
> charm: /home/ubuntu/charms/trusty/apache-hadoop-compute-slave
> However, I got the following error:
> ERROR cannot deploy bundle: cannot resolve URL
> "/home/ubuntu/charms/trusty/apache-hadoop-compute-slave": charm or
> bundle URL has invalid form:
> "/home/ubuntu/charms/trusty/apache-hadoop-compute-slave"
> 
> This is on the latest beta3:
> 2.0-beta3-xenial-ppc64el
> 
> Any suggestions?
> 
> -thanks,
> Antonio
> 
> 
>>
>> On 10/03/16 23:37, Rick Harding wrote:
>>> Thanks Ian, after thinking about it I think what we want to do is really
>>> #2. The reasoning I think is:
>>>
>>> 1) we want to make things consistent. The CLI experience is present a charm
>>> and override series with --series=
>>> 2) more consistent, if you do it with local charms you can always do it
>>> 3) we want to encourage folks to drop series from the charmstore urls and
>>> worry less about series over time. Just deploy X and let the charm author
>>> pick the default best series. I think we should encourage this in the error
>>> message for #2. "Please remove the series section of the charm url" or the
>>> like when we error on the conflict, pushing users to use the series
>>> override.
>>>
>>> Uros, Francesco, this brings up a point that I think for multi-series
>>> charms we want the deploy cli snippets to start to drop the series part of
>>> the url as often as we can. If the url doesn't have the series specified,
>>> e.g. jujucharms.com/mysql then the cli command should not either. Right now
>>> I know we add the series/revision info and such. Over time we want to try
>>> to get to as simple a command as possible.
>>>
>>> On Thu, Mar 10, 2016 at 7:23 AM Ian Booth <ian.bo...@canonical.com> wrote:
>>>
>>>> I've implemented option 1:
>>>>
>>>>  error if Series attribute is used at all with a store charm URL
>>>>
>>>> Trivial to change if needed.
>>>>
>>>> On 10/03/16 12:58, Ian Booth wrote:
>>>>> Yeah, agreed having 2 ways to specify store series can be suboptimal.
>>>>> So we have 2 choices:
>>>>>
>>>>> 1. error if Series attribute is used at all with a store charm URL
>>>>> 2. error if the Series attribute is used and conflicts
>>>>>
>>>>> Case 1
>>>>> --
>>>>>
>>>>> Errors:
>>>>>
>>>>> Series: trusty
>>>>> Charm: cs:mysql
>>>>>
>>>>> Series: trusty
>>>>> Charm: cs:trusty/mysql
>>>>>
>>>>> Ok:
>>>>>
>>>>> Series: trusty
>>>>> Charm: ./mysql
>>>>>
>>>>>
>>>>> Case 2
>>>>> --
>>>>>
>>>>> Ok:
>>>>>
>>>>> Series: trusty
>>>>> Charm: cs:mysql
>>>>>
>>>>> Series: trusty
>>>>> Charm: cs:trusty/mysql
>>>>>
>>>>> Series: trusty
>>>>> Charm: ./mysql
>>>>>
>>>>> Errors:
>>>>>
>>>>> Series: xenial
>>>>> Charm: cs:trusty/mysql
>>>>>
>>>>>
>>>>> On 10/03/16 12:51, Rick Harding wrote:
>>>>>> Bah maybe you're right. I want to sleep on it. It's kind of ugh either
>>>> way.
>>>>>>
>>>>>> On Wed, Mar

Re: Go 1.6 is now in trusty-proposed

2016-03-24 Thread Ian Booth

On 24/03/16 22:01, Nate Finch wrote:
> Does this mean we can assume 1.6 for everything from now on, or is there
> some other step we're waiting on?  I have some code that only needs to
> exist while we support 1.2, and I'd be happy to just delete it.
>

Not yet. The builders and test infrastructure all need to be updated, and the
package needs a week to transition out of proposed.

We're also waiting on this to commit an Azure provider fix.


-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Go 1.6 is now in trusty-proposed

2016-03-24 Thread Ian Booth
OMFG that is the best news. We can finally get the Juju LXD provider working
properly on trusty :-D
And first class support for all architectures etc :-D
And no more chasing gccgo issues :-D

Thanks Michael and whoever else helped make this possible.
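
For anyone keen to try it on trusty right away, something like the following
should work (a sketch only; it assumes the golang-1.6 package described in the
mail quoted below, pulled from the proposed pocket):

 $ echo "deb http://archive.ubuntu.com/ubuntu trusty-proposed main universe" \
 | sudo tee /etc/apt/sources.list.d/trusty-proposed.list
 $ sudo apt-get update && sudo apt-get install -t trusty-proposed golang-1.6
 $ export PATH=/usr/lib/go-1.6/bin:$PATH  # no /usr/bin/go is installed
 $ go version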

On 24/03/16 16:03, Michael Hudson-Doyle wrote:
> Hi,
> 
> As of a few minutes ago, there is now a golang-1.6 package in
> trusty-proposed:
> https://launchpad.net/ubuntu/trusty/+source/golang-1.6 (thanks for the
> review and copy, Steve).
> 
> One difference between this and the package I prepared earlier is that
> it does not install /usr/bin/go but rather /usr/lib/go-1.6/bin/go so
> Makefiles and such will need to be adjusted to invoke that directly or
> put /usr/lib/go-1.6/bin on $PATH or whatever. (This also means it can
> be installed alongside the golang packages that are already in
> trusty).
> 
> Cheers,
> mwh
> (Hoping that we can now really properly ignore gccgo-4.9 ppc64el bugs!)
> 
> On 17 February 2016 at 07:58, Michael Hudson-Doyle wrote:
>> I have approval for the idea but also decided to wait for 1.6 and upload
>> that instead. I'm also on leave currently so hopefully this can all happen
>> in early March.
>>
>> Cheers,
>> mwh
>>
>>> On 17/02/2016 1:17 am, "John Meinel" wrote:
>>>
>>> To start with, thanks for working on this. However, doesn't this also
>>> require changing the CI builds to use your ppa?
>>>
>>> What is the current state of this? I was just looking around and noticed
>>> golang1.5-go isn't in anything specific for Trusty that I can see. I realize
>>> if its going into an SRU it requires a fair amount of negotiation with other
>>> teams, so I'm not  surprised to see it take a while. I just wanted to check
>>> how it was going.
>>>
>>> Thanks,
>>>
>>> John
>>> =:->
>>>
>>> On Mon, Jan 18, 2016 at 7:32 AM, Michael Hudson-Doyle wrote:

 Hi all,

 As part of the plan for getting Go 1.5 into trusty (see here
 https://wiki.ubuntu.com/MichaelHudsonDoyle/Go15InTrusty) I've built
 packages (called golang1.5-go rather than golang-go) for trusty in my
 ppa:

 https://launchpad.net/~mwhudson/+archive/ubuntu/go15-trusty/+packages

 (assuming 3:1.5.3-0ubuntu4 actually builds... I seem to be having a
 "make stupid packaging mistakes" day)

 I'll write up a SRU bug to start the process of getting this into
 trusty tomorrow but before it does end up in trusty it would seem like
 a good idea to run the CI tests using juju-core packages built with
 this version of the go compiler. Is that something that's feasible to
 arrange

 The only packaging requirement should be to change the build-depends
 to be on golang1.5-go rather than golang-go or gccgo.

 Cheers,
 mwh

 --
 Juju-dev mailing list
 Juju-dev@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>>
>>>
>>
> 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: juju 2.0 beta3 push this week

2016-03-20 Thread Ian Booth
Another feature which will be in the next beta is support for Keystone v3 in
OpenStack.

On 18/03/16 04:51, Rick Harding wrote:
> tl;dr
> Juju 2.0 beta3 will not be out this week.
> 
> The team is fighting a backlog of getting work landed. Rather than get the
> partial release out this week with the handful of current features and
> adding to the backlog while getting that beta release out, the decision was
> made to focus on getting the current work that’s ready landed. This will
> help us get our features in before the freeze exception deadline of the
> 23rd.
> 
> We have several new things currently in trunk (such as enhanced support for
> MAAS spaces, machine provisioning status monitoring, Juju GUI embedded CLI
> commands into Juju Core), but we have important things to get landed. These
> include:
> 
> - Updating controller model to be called “admin” and a “default” initial
> working model on bootstrap that’s safely removable
> - Minimum Juju version support for charms
> - juju read-only mode
> - additional resources work with version numbers and bundles support
> - additional work in the clouds and credentials management work
> - juju add-user and juju register to sign in the new user
> 
> The teams will work together and focus on landing these and we’ll get a
> beta with the full set of updates for everyone to try out next week. If you
> have any questions or concerns, please let me know.
> 
> Thanks
> 
> Rick
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Usability issues with status-history

2016-03-20 Thread Ian Booth


On 20/03/16 22:44, John Meinel wrote:
>>
>> ...
>>
>> For the second bug, where a downloading % spams the history, it seems like
>> the easy answer is "don't do that".  For resources, we specifically avoided
>> putting download progress in status history for that very reason.  In the
>> future, it seems like it could be useful to have some mechanism to show
>> transient messages like downloading % etc, but status history does not seem
>> like the appropriate place to do that, especially given how it is currently
>> configured... and it seems way too late to be adding a new feature for 2.0
>>
>> Just my 2 cents.
>>
>> -Nate
>>
> 
> The one aspect here is that it has been a consistent problem, especially
> with the local provider, of people wanting to know why things haven't
> started yet. Being able to give them concrete progress is a huge boon here.

+1. But do we need to persist these transient progress messages? We can still
report progress to the user each time they run juju status, but why save such
data when it is of little value once the download of an image has finished.

> I really think we want to be putting more status for machine pending
> progress. Now, as for 100 progress messages, it turns out the rate limiting
> on status updates means we can drop some of them. (we always get 100 events
> from LXD, but if we only update Juju with one every 1s, then we generally
> get a lot fewer if your download speed is fast.)
> 
> But regardless, having genuine feedback as to what is going on outweighs a
> minor thing about having too much information in the backlog.
> 

Why not have both? Report progress but not persist such data.

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Usability issues with status-history

2016-03-19 Thread Ian Booth

Machines, services and units all now support recording status history. Two
issues have come up:

1. https://bugs.launchpad.net/juju-core/+bug/1530840

For units, especially in steady state, status history is spammed with
update-status hook invocations which can obscure the hooks we really care about

2. https://bugs.launchpad.net/juju-core/+bug/1557918

We now have the concept of recording a machine provisioning status. This is
great because it gives observability to what is happening as a node is being
allocated in the cloud. With LXD, this feature has been used to give visibility
to progress of the image downloads (finally, yay). But what happens is that the
machine status history gets filled with lots of "Downloading x%" type messages.

We have a pruner which caps the history to 100 entries per entity. But we need a
way to deal with the spam, and what is displayed when the user asks for juju
status-history.

Options to solve bug 1

A.
Filter out duplicate status entries when presenting to the user. eg say
"update-status (x43)". This still allows the circular buffer for that entity to
fill with "spam" though. We could make the circular buffer size much larger. But
there's still the issue of UX where a user asks for the X most recent entries.
What do we give them? The X most recent de-duped entries?

B.
If, when we go to record history, the previous entry is the same as what we are
about to record, just update the timestamp. For update-status, my view is we
don't really care how many times the hook was run, but rather when it last ran.

Options to solve bug 2

A.
Allow a flag when setting status to say "this status value is transient" and so
it is recorded in status but not logged in history.
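
For charm-set statuses, that might look something like this in a hook (the
--transient flag here is hypothetical, shown only to illustrate the idea):

 # record in current status, but don't log to history
 status-set --transient maintenance "downloading image: 42%"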

B.
Do not record machine provisioning status in history. It could be argued this
info is more or less transient and once the machine comes up, we don't care so
much about it anymore. It was introduced to give observability to machine
allocation.

Any other options?
Opinions on preferred solutions?

I really want to get this fixed before Juju 2.0






-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: juju 2.0 beta3 push this week

2016-03-18 Thread Ian Booth


On 18/03/16 05:12, Adam Stokes wrote:
> Hi!
> 
> Could I get this bug added to the list too?
> 
> https://bugs.launchpad.net/juju-core/+bug/1554721
>

That bug is on the list for sure. We're aiming for beta3 but it may well slip.
It will be fixed before 2.0. The priority is the feature backlog. One of the
other features we're aiming for that's not included in Rick's list is mongo3
support on Xenial.


-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Usability issues with status-history

2016-03-18 Thread Ian Booth


On 17/03/16 19:51, William Reade wrote:
> I see this as a combination of two problems:
> 
> 1) We're spamming the end user with "whatever's in the status-history
> collection" rather than presenting a digest tuned for their needs.
> 2) Important messages get thrown away way too early, because we don't know
> which messages are important.
> 
> I think the pocket/transient/expiry solutions boil down to "let's make the
> charmer decide what's important", and I don't think that will help. The
> charmer is only sending those messages *because she believes they're
> important*; even if we had "perfect" trimming heuristics for the end user,
> we do the *charmer* a disservice by leaving them no record of what their
> charm actually did.
> 
> And, more generally: *every* message we throw away makes it hard to
> correctly analyse any older message. This applies within a given entity's
> domain, but also across entities: if you're trying to understand the
> interactions between 2 units, but one of those units is generating many
> more messages, you'll have 200 messages to inspect; but the 100 for the
> faster unit will only cover (say) the last 30 for the slower one, leaving
> 70 slow-unit messages that can't be correlated with the other unit's
> actions. At best, those messages are redundant; at worst, they're actively
> misleading.
> 
> So: I do not believe that any approach that can be summed up as "let's
> throw away *more* messages" is going to help either. We need to fix (2) so
> that we have raw status data that extends reasonably far back in time; and
> then we need to fix (1) so that we usefully precis that data for the user
> (...and! leave a path that makes the raw data observable, for the cases
> where our heuristics are unhelpful).
> 

I mostly agree but still believe there's a case for transient messages. The case
where Juju is downloading an image and emits progress updates which go into
status history is to me clearly a case where we needn't persist every single one
(or any). In that case, it's not a charmer deciding but Juju. And with status
updates like X% complete, as soon as a new message arrives, the old one is
superseded anyway. The user is surely just interested to know the current status
and once it completes they don't care anymore. And the Juju agent can still
decide to, say, make every 10% download progress message non-transient so it
goes to history for future reference.

> Cheers
> William
> 
> PS re: UX of asking for N entries... I can see end-user stories for
> timespans, and for "the last N *significant* changes". What's the scenario
> where a user wants to see exactly 50 message atoms?
> 

No one would say they want to see exactly 50 - it's an estimate. It's like when
you git log and you ask for the last 20 commits. If that's not enough to see
what you want, you just run again with an increased number.

I do think allowing for a timespan to be specified may be useful.

John's suggestion for adding a lifetime does sound more complicated than we
want right now.

Would this work as an initial improvement for 2.0:

1. Increase the limit of stored messages per entity to, say, 500 (from 100)
2. Allow messages emitted from Juju to be marked as transient
eg for download progress
3. Do smarter filtering of what is displayed with status-history
eg if we see the same tuple of messages over and over, consolidate

TIME                    TYPE    STATUS      MESSAGE
26 Dec 2015 13:51:59Z   agent   executing   running config-changed hook
26 Dec 2015 13:51:59Z   agent   idle
26 Dec 2015 13:56:57Z   agent   executing   running update-status hook
26 Dec 2015 13:56:59Z   agent   idle
26 Dec 2015 14:01:57Z   agent   executing   running update-status hook
26 Dec 2015 14:01:59Z   agent   idle
26 Dec 2015 14:01:57Z   agent   executing   running update-status hook
26 Dec 2015 14:01:59Z   agent   idle

becomes

TIME                    TYPE    STATUS      MESSAGE
26 Dec 2015 13:51:59Z   agent   executing   running config-changed hook
26 Dec 2015 13:51:59Z   agent   idle
>> Repeated 3 times, last occurrence:
26 Dec 2015 14:01:57Z   agent   executing   running update-status hook
26 Dec 2015 14:01:59Z   agent   idle





> On Thu, Mar 17, 2016 at 6:30 AM, John Meinel <j...@arbash-meinel.com> wrote:
> 
>>
>>
>> On Thu, Mar 17, 2016 at 8:41 AM, Ian Booth <ian.bo...@canonical.com>
>> wrote:
>>
>>>
>>> Machines, services and units all now support recording status history. Two
>>> issues have come up:
>>>
>>> 1. https://bugs.launchpad.net/juju-core/+bug/1530840
>>>
>>> For units, especially in steady state, status history is spammed with
>>> update-status hook invocations which can obscure the hooks we really care
>>> about
>>>

Re: Do we still need juju upgrade-charm --switch ... ?

2016-03-11 Thread Ian Booth
> 
> We use switch a lot, and customers use this as well. The primary use case
> is "I have a bug in production charm that is not available upstream yet". I
> expect future 2.0 uses to look like this:
> 
> charm pull 
> 
> juju upgrade-charm --switch ./ 
> 
> Another example, esp because of how the charmstore is structured now
> 
> juju deploy trusty/wordpress
> # hackity hack
> juju deploy --switch cs:~marcoceppi/trusty/wordpress wordpress
> 
> 
>>
>> What would folks lose if --switch were to be dropped for 2.0? Any
>> objections to
>> doing this?
> 
> 
> I object. Switch should be updated to support ./local/directory/charm
> instead of local:
> 

Thanks Marco, we'll have to ensure 2.0 is updated to allow --switch to work
with upgrade-charm --path, which it currently does not.
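
With that in place, Marco's workflow above would become something like this (a
sketch of the intended UX; the exact flag handling is still to be confirmed):

 $ charm pull wordpress ./wordpress
 $ # hackity hack
 $ juju upgrade-charm wordpress --path ./wordpress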

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Do we still need juju upgrade-charm --switch ... ?

2016-03-10 Thread Ian Booth
So we have a feature of upgrade-charm which allows you to crossgrade to a
different charm than the one originally deployed.

From the upgrade-charm help docs:

The new charm's URL and revision are inferred as they would be when running a
deploy command.
Please note that --switch is dangerous, because juju only has limited
information with which to determine compatibility; the operation will succeed,
regardless of potential havoc.

What is the use case for this functionality? I got the impression it was used
mainly with local repos. But given local repos are going away in 2.0, do we
still need it? And given the potential for users to get things wrong, do we
even want to keep it regardless? Note also that --switch is not allowed with
--path, which is how local charms are upgraded.

What would folks lose if --switch were to be dropped for 2.0? Any objections to
doing this?




-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju GUI 2.1.0 released – Now with Juju 2.0 support

2016-03-10 Thread Ian Booth
This is awesome news. I just wanted to acknowledge the tonne of extra work the
GUI folks put in to support all of the API changes introduced by Juju 2.0.
Can't wait to try out the new GUI with the next Juju beta (beta2), due out this
week (next day or so).

On 11/03/16 07:06, Jeff Pihach wrote:
> Hi All,
> 
> We are excited to announce a new major release of the Juju GUI with support
> for Juju 2.0 (currently in beta). Juju 2.0 brings with it a ton of
> improvements, but one we’d like to highlight is the ability to create new
> models without needing to bootstrap them one by one. I run over all of the
> features in this new version of the GUI in this video:
> 
> 
> 
> https://www.youtube.com/watch?v=RsA2vNbKU5o
> 
> 
> 
> Along with Juju 2.0 support comes these fine additions:
> 
>   * A new user profile page which shows your models, bundles and charms
> after logging into the Charmstore.
> 
>   * Added support for syntax highlighting in the charm details pages in the
> charmbrowser when the charm author provides a GitHub Flavored Markdown
> README file. You can find more information on this here:
> https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet#code
> 
>   * Added the ability to drag uncommitted units between machines in the
> machine view.
> 
>   * Unit statuses are now also shown in the machine view.
> 
>   * Fixed – when subordinates are deployed extra empty machines are no
> longer created.
> 
>   * Fixed – websockets are now closed properly when switching models.
> 
>   * Fixed – On logging out all cookies are now deleted.
> 
> 
> 
> 
> To upgrade an existing deployment:
> 
>   juju upgrade-charm juju-gui
> 
> To deploy this release in your model:
> 
>   juju deploy juju-gui
> 
> 
> 
> We hope you will enjoy this release and welcome any feedback you may have.
> Please let us know here or in our github repository
> https://github.com/juju/juju-gui/issues and we’ll be sure to get back to
> you.
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Charm series in bundles (Was Re: Juju 2.0 and local charm deployment)

2016-03-10 Thread Ian Booth
Thanks Rick. Trivial change to make. This work should be in beta3 due next week.
The work includes dropping support for local repositories in favour of path
based local charm and bundle deployment.
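
In other words (sketch):

 $ juju deploy ./mysql         # local charm from a checked-out directory
 $ juju deploy ./bundle.yaml   # local bundle from a file path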

On 10/03/16 23:37, Rick Harding wrote:
> Thanks Ian, after thinking about it I think what we want to do is really
> #2. The reasoning I think is:
> 
> 1) we want to make things consistent. The CLI experience is present a charm
> and override series with --series=
> 2) more consistent, if you do it with local charms you can always do it
> 3) we want to encourage folks to drop series from the charmstore urls and
> worry less about series over time. Just deploy X and let the charm author
> pick the default best series. I think we should encourage this in the error
> message for #2. "Please remove the series section of the charm url" or the
> like when we error on the conflict, pushing users to use the series
> override.
> 
> Uros, Francesco, this brings up a point that I think for multi-series
> charms we want the deploy cli snippets to start to drop the series part of
> the url as often as we can. If the url doesn't have the series specified,
> e.g. jujucharms.com/mysql then the cli command should not either. Right now
> I know we add the series/revision info and such. Over time we want to try
> to get to as simple a command as possible.
> 
> On Thu, Mar 10, 2016 at 7:23 AM Ian Booth <ian.bo...@canonical.com> wrote:
> 
>> I've implemented option 1:
>>
>>  error if Series attribute is used at all with a store charm URL
>>
>> Trivial to change if needed.
>>
>> On 10/03/16 12:58, Ian Booth wrote:
>>> Yeah, agreed having 2 ways to specify store series can be suboptimal.
>>> So we have 2 choices:
>>>
>>> 1. error if Series attribute is used at all with a store charm URL
>>> 2. error if the Series attribute is used and conflicts
>>>
>>> Case 1
>>> --
>>>
>>> Errors:
>>>
>>> Series: trusty
>>> Charm: cs:mysql
>>>
>>> Series: trusty
>>> Charm: cs:trusty/mysql
>>>
>>> Ok:
>>>
>>> Series: trusty
>>> Charm: ./mysql
>>>
>>>
>>> Case 2
>>> --
>>>
>>> Ok:
>>>
>>> Series: trusty
>>> Charm: cs:mysql
>>>
>>> Series: trusty
>>> Charm: cs:trusty/mysql
>>>
>>> Series: trusty
>>> Charm: ./mysql
>>>
>>> Errors:
>>>
>>> Series: xenial
>>> Charm: cs:trusty/mysql
>>>
>>>
>>> On 10/03/16 12:51, Rick Harding wrote:
>>>> Bah maybe you're right. I want to sleep on it. It's kind of ugh either
>> way.
>>>>
>>>> On Wed, Mar 9, 2016, 9:50 PM Rick Harding <rick.hard...@canonical.com>
>>>> wrote:
>>>>
>>>>> I think there's already rules for charmstore charms. it uses the
>> default
>>>>> if not specified. I totally agree that for local charms we have to have
>>>>> this. For remote charms though this is providing the user two ways to
>> do
>>>>> the same thing
>>>>>
>>>>> On Wed, Mar 9, 2016, 9:46 PM Ian Booth <ian.bo...@canonical.com>
>> wrote:
>>>>>
>>>>>> If the charm store charm defines a series in the URL, then we will
>>>>>> consider it
>>>>>> an error to specify a different series using the attribute. But charm
>>>>>> store URLs
>>>>>> are not required to have a series, so we can use the attribute in that
>>>>>> case. It
>>>>>> also allows users to easily switch between store and local charms
>> during
>>>>>> development just by replacing "./" with "cs:"
>>>>>>
>>>>>>  nova-compute:
>>>>>>series: xenial
>>>>>>charm: ./nova-compute
>>>>>>
>>>>>>  nova-compute:
>>>>>>series: xenial
>>>>>>charm: cs:nova-compute
>>>>>>
>>>>>>
>>>>>> On 10/03/16 12:21, Rick Harding wrote:
>>>>>>> I'm not sure we want to make this attribute apply to charmstore
>> charms.
>>>>>>> We've an established practice of the charmstore url being the series
>>>>>>> information. It gives the user a chance to have conflicting
>> information
>>>>>> if

Re: Charm series in bundles (Was Re: Juju 2.0 and local charm deployment)

2016-03-10 Thread Ian Booth
I've implemented option 1:

 error if Series attribute is used at all with a store charm URL

Trivial to change if needed.

On 10/03/16 12:58, Ian Booth wrote:
> Yeah, agreed having 2 ways to specify store series can be suboptimal.
> So we have 2 choices:
> 
> 1. error if Series attribute is used at all with a store charm URL
> 2. error if the Series attribute is used and conflicts
> 
> Case 1
> --
> 
> Errors:
> 
> Series: trusty
> Charm: cs:mysql
> 
> Series: trusty
> Charm: cs:trusty/mysql
> 
> Ok:
> 
> Series: trusty
> Charm: ./mysql
> 
> 
> Case 2
> --
> 
> Ok:
> 
> Series: trusty
> Charm: cs:mysql
> 
> Series: trusty
> Charm: cs:trusty/mysql
> 
> Series: trusty
> Charm: ./mysql
> 
> Errors:
> 
> Series: xenial
> Charm: cs:trusty/mysql
> 
> 
> On 10/03/16 12:51, Rick Harding wrote:
>> Bah maybe you're right. I want to sleep on it. It's kind of ugh either way.
>>
>> On Wed, Mar 9, 2016, 9:50 PM Rick Harding <rick.hard...@canonical.com>
>> wrote:
>>
>>> I think there's already rules for charmstore charms. it uses the default
>>> if not specified. I totally agree that for local charms we have to have
>>> this. For remote charms though this is providing the user two ways to do
>>> the same thing
>>>
>>> On Wed, Mar 9, 2016, 9:46 PM Ian Booth <ian.bo...@canonical.com> wrote:
>>>
>>>> If the charm store charm defines a series in the URL, then we will
>>>> consider it
>>>> an error to specify a different series using the attribute. But charm
>>>> store URLs
>>>> are not required to have a series, so we can use the attribute in that
>>>> case. It
>>>> also allows users to easily switch between store and local charms during
>>>> development just by replacing "./" with "cs:"
>>>>
>>>>  nova-compute:
>>>>series: xenial
>>>>charm: ./nova-compute
>>>>
>>>>  nova-compute:
>>>>series: xenial
>>>>charm: cs:nova-compute
>>>>
>>>>
>>>> On 10/03/16 12:21, Rick Harding wrote:
>>>>> I'm not sure we want to make this attribute apply to charmstore charms.
>>>>> We've an established practice of the charmstore url being the series
>>>>> information. It gives the user a chance to have conflicting information
>>>> if
>>>>> the charmstore url is cs:trusty/nova-compute and the series attribute is
>>>>> set to xenial. I think we should toss an error to a bundle that has
>>>> series:
>>>>> specified for a charmstore based charm value (or non-local value
>>>> whichever
>>>>> way you want to think about it)
>>>>>
>>>>> On Wed, Mar 9, 2016 at 6:29 PM Ian Booth <ian.bo...@canonical.com>
>>>> wrote:
>>>>>
>>>>>> One additional enhancement we need for bundles concerns specifying
>>>> series
>>>>>> for
>>>>>> multi-series charms, in particular local charms now that the local repo
>>>>>> will be
>>>>>> going away.
>>>>>>
>>>>>> Consider:
>>>>>>
>>>>>> A new multi-series charm may have a URL which does not specify the
>>>> series.
>>>>>> In
>>>>>> that case, the series used will be the default specified in the charm
>>>>>> metadata
>>>>>> or the latest LTS. But we want to allow people to choose their own
>>>> series
>>>>>> also.
>>>>>>
>>>>>> So we need a new (optional) Series attribute in the bundle metadata.
>>>>>>
>>>>>> bundle.yaml
>>>>>>   series: trusty
>>>>>>   services:
>>>>>> nova-compute:
>>>>>>   series: xenial <-- new
>>>>>>   charm: ./nova-compute
>>>>>>   num_units: 2
>>>>>>
>>>>>> or with a charm store charm
>>>>>>
>>>>>> bundle.yaml
>>>>>>   series: trusty
>>>>>>   services:
>>>>>> nova-compute:
>>>>>>   series: xenial<-- new
>>>>>>   charm: cs:nova-compute
>>>>>>   num_units: 2
>>>>

Re: Charm series in bundles (Was Re: Juju 2.0 and local charm deployment)

2016-03-09 Thread Ian Booth
Yeah, agreed having 2 ways to specify store series can be suboptimal.
So we have 2 choices:

1. error if Series attribute is used at all with a store charm URL
2. error if the Series attribute is used and conflicts

Case 1
--

Errors:

Series: trusty
Charm: cs:mysql

Series: trusty
Charm: cs:trusty/mysql

Ok:

Series: trusty
Charm: ./mysql


Case 2
--

Ok:

Series: trusty
Charm: cs:mysql

Series: trusty
Charm: cs:trusty/mysql

Series: trusty
Charm: ./mysql

Errors:

Series: xenial
Charm: cs:trusty/mysql


On 10/03/16 12:51, Rick Harding wrote:
> Bah maybe you're right. I want to sleep on it. It's kind of ugh either way.
> 
> On Wed, Mar 9, 2016, 9:50 PM Rick Harding <rick.hard...@canonical.com>
> wrote:
> 
>> I think there's already rules for charmstore charms. it uses the default
>> if not specified. I totally agree that for local charms we have to have
>> this. For remote charms though this is providing the user two ways to do
>> the same thing
>>
>> On Wed, Mar 9, 2016, 9:46 PM Ian Booth <ian.bo...@canonical.com> wrote:
>>
>>> If the charm store charm defines a series in the URL, then we will
>>> consider it
>>> an error to specify a different series using the attribute. But charm
>>> store URLs
>>> are not required to have a series, so we can use the attribute in that
>>> case. It
>>> also allows users to easily switch between store and local charms during
>>> development just by replacing "./" with "cs:"
>>>
>>>  nova-compute:
>>>series: xenial
>>>charm: ./nova-compute
>>>
>>>  nova-compute:
>>>series: xenial
>>>charm: cs:nova-compute
>>>
>>>
>>> On 10/03/16 12:21, Rick Harding wrote:
>>>> I'm not sure we want to make this attribute apply to charmstore charms.
>>>> We've an established practice of the charmstore url being the series
>>>> information. It gives the user a chance to have conflicting information
>>> if
>>>> the charmstore url is cs:trusty/nova-compute and the series attribute is
>>>> set to xenial. I think we should toss an error to a bundle that has
>>> series:
>>>> specified for a charmstore based charm value (or non-local value
>>> whichever
>>>> way you want to think about it)
>>>>
>>>> On Wed, Mar 9, 2016 at 6:29 PM Ian Booth <ian.bo...@canonical.com>
>>> wrote:
>>>>
>>>>> One additional enhancement we need for bundles concerns specifying
>>> series
>>>>> for
>>>>> multi-series charms, in particular local charms now that the local repo
>>>>> will be
>>>>> going away.
>>>>>
>>>>> Consider:
>>>>>
>>>>> A new multi-series charm may have a URL which does not specify the
>>> series.
>>>>> In
>>>>> that case, the series used will be the default specified in the charm
>>>>> metadata
>>>>> or the latest LTS. But we want to allow people to choose their own
>>> series
>>>>> also.
>>>>>
>>>>> So we need a new (optional) Series attribute in the bundle metadata.
>>>>>
>>>>> bundle.yaml
>>>>>   series: trusty
>>>>>   services:
>>>>> nova-compute:
>>>>>   series: xenial <-- new
>>>>>   charm: ./nova-compute
>>>>>   num_units: 2
>>>>>
>>>>> or with a charm store charm
>>>>>
>>>>> bundle.yaml
>>>>>   series: trusty
>>>>>   services:
>>>>> nova-compute:
>>>>>   series: xenial<-- new
>>>>>   charm: cs:nova-compute
>>>>>   num_units: 2
>>>>>
>>>>>
>>>>> Note: the global series in the bundle still applies if series is not
>>>>> otherwise
>>>>> known.
>>>>> The new series attribute is per charm.
>>>>>
>>>>> So in the case above, cs:nova-compute may ordinarily be deployed on
>>> trusty
>>>>> (the
>>>>> default series in that charm's metadata). But the bundle requires the
>>>>> xenial
>>>>> version. With the charm store URL, we can currently use
>>>>> cs:xenial/nova-compute
>>>>> but that's not the case for local charms deployed out of a directory.
>

Re: Charm series in bundles (Was Re: Juju 2.0 and local charm deployment)

2016-03-09 Thread Ian Booth
If the charm store charm defines a series in the URL, then we will consider it
an error to specify a different series using the attribute. But charm store URLs
are not required to have a series, so we can use the attribute in that case. It
also allows users to easily switch between store and local charms during
development just by replacing "./" with "cs:"

 nova-compute:
   series: xenial
   charm: ./nova-compute

 nova-compute:
   series: xenial
   charm: cs:nova-compute


On 10/03/16 12:21, Rick Harding wrote:
> I'm not sure we want to make this attribute apply to charmstore charms.
> We've an established practice of the charmstore url being the series
> information. It gives the user a chance to have conflicting information if
> the charmstore url is cs:trusty/nova-compute and the series attribute is
> set to xenial. I think we should toss an error to a bundle that has series:
> specified for a charmstore based charm value (or non-local value whichever
> way you want to think about it)
> 
> On Wed, Mar 9, 2016 at 6:29 PM Ian Booth <ian.bo...@canonical.com> wrote:
> 
>> One additional enhancement we need for bundles concerns specifying series
>> for
>> multi-series charms, in particular local charms now that the local repo
>> will be
>> going away.
>>
>> Consider:
>>
>> A new multi-series charm may have a URL which does not specify the series.
>> In
>> that case, the series used will be the default specified in the charm
>> metadata
>> or the latest LTS. But we want to allow people to choose their own series
>> also.
>>
>> So we need a new (optional) Series attribute in the bundle metadata.
>>
>> bundle.yaml
>>   series: trusty
>>   services:
>> nova-compute:
>>   series: xenial <-- new
>>   charm: ./nova-compute
>>   num_units: 2
>>
>> or with a charm store charm
>>
>> bundle.yaml
>>   series: trusty
>>   services:
>> nova-compute:
>>   series: xenial<-- new
>>   charm: cs:nova-compute
>>   num_units: 2
>>
>>
>> Note: the global series in the bundle still applies if series is not
>> otherwise
>> known.
>> The new series attribute is per charm.
>>
>> So in the case above, cs:nova-compute may ordinarily be deployed on trusty
>> (the
>> default series in that charm's metadata). But the bundle requires the
>> xenial
>> version. With the charm store URL, we can currently use
>> cs:xenial/nova-compute
>> but that's not the case for local charms deployed out of a directory. We
>> need a
>> way to allow the series to be specified in that latter case.
>>
>> We'll look to make the changes in core initially and can followup later
>> with the
>> GUI etc. The attribute is optional and only really affects bundles with
>> local
>> charms.
>>
>>
>>
>> On 09/03/16 09:53, Ian Booth wrote:
>>> So to clarify what we'll do. We'll support the same syntax in bundle
>> files as we
>>> do for deploy.
>>>
>>> Deploys charm store charms:
>>>
>>> $ juju deploy cs:wordpress
>>> $ juju deploy wordpress
>>>
>>> Deploys a local charm from a directory:
>>>
>>> $ juju deploy ./charms/wordpress
>>> $ juju deploy ./wordpress
>>>
>>> So below deploys a local nova-compute charm in a directory co-located
>> with the
>>> bundle.yaml file.
>>>
>>>  series: trusty
>>>  services:
>>>nova-compute:
>>>  charm: ./nova-compute
>>>  num_units: 2
>>>
>>> This one deploys a charm store charm:
>>>
>>>  series: trusty
>>>  services:
>>>nova-compute:
>>>charm: nova-compute
>>>num_units: 2
>>>
>>>
>>>
>>> On 09/03/16 03:59, Rick Harding wrote:
>>>> Long term we want to have a pattern when the bundle is a directory with
>>>> local charms in a directory next to the bundles.yaml file. We could not
>> do
>>>> this cleanly before the multi-series charms that are just getting out
>> the
>>>> door. I think that bundles with local charms will be suboptimal until we
>>>> can get those bits to line up.
>>>>
>>>> I don't think we want to be doing the file based urls, but to build a
>>>> pattern that's reusable and makes sense across systems. Creating a
>> standard
>>>> pattern I think is the best path forward.
>>>>

Charm series in bundles (Was Re: Juju 2.0 and local charm deployment)

2016-03-09 Thread Ian Booth
One additional enhancement we need for bundles concerns specifying series for
multi-series charms, in particular local charms now that the local repo will be
going away.

Consider:

A new multi-series charm may have a URL which does not specify the series. In
that case, the series used will be the default specified in the charm metadata
or the latest LTS. But we want to allow people to choose their own series also.

So we need a new (optional) Series attribute in the bundle metadata.

bundle.yaml
  series: trusty
  services:
nova-compute:
  series: xenial <-- new
  charm: ./nova-compute
  num_units: 2

or with a charm store charm

bundle.yaml
  series: trusty
  services:
nova-compute:
  series: xenial<-- new
  charm: cs:nova-compute
  num_units: 2


Note: the global series in the bundle still applies if series is not otherwise
known.
The new series attribute is per charm.

So in the case above, cs:nova-compute may ordinarily be deployed on trusty (the
default series in that charm's metadata). But the bundle requires the xenial
version. With the charm store URL, we can currently use cs:xenial/nova-compute
but that's not the case for local charms deployed out of a directory. We need a
way to allow the series to be specified in that latter case.

We'll look to make the changes in core initially and can followup later with the
GUI etc. The attribute is optional and only really affects bundles with local
charms.



On 09/03/16 09:53, Ian Booth wrote:
> So to clarify what we'll do. We'll support the same syntax in bundle files as 
> we
> do for deploy.
> 
> Deploys charm store charms:
> 
> $ juju deploy cs:wordpress
> $ juju deploy wordpress
> 
> Deploys a local charm from a directory:
> 
> $ juju deploy ./charms/wordpress
> $ juju deploy ./wordpress
> 
> So below deploys a local nova-compute charm in a directory co-located with the
> bundle.yaml file.
> 
>  series: trusty
>  services:
>nova-compute:
>  charm: ./nova-compute
>  num_units: 2
> 
> This one deploys a charm store charm:
> 
>  series: trusty
>  services:
>nova-compute:
>charm: nova-compute
>num_units: 2
> 
> 
> 
> On 09/03/16 03:59, Rick Harding wrote:
>> Long term we want to have a pattern when the bundle is a directory with
>> local charms in a directory next to the bundles.yaml file. We could not do
>> this cleanly before the multi-series charms that are just getting out the
>> door. I think that bundles with local charms will be suboptimal until we
>> can get those bits to line up.
>>
>> I don't think we want to be doing the file based urls, but to build a
>> pattern that's reusable and makes sense across systems. Creating a standard
>> pattern I think is the best path forward.
>>
>> On Tue, Mar 8, 2016 at 12:26 PM Martin Packman <martin.pack...@canonical.com>
>> wrote:
>>
>>> On 05/03/2016, Ian Booth <ian.bo...@canonical.com> wrote:
>>>>>
>>>>> How will bundles work which reference local charms? Will this work as
>>>>> expected where nova-compute is a directory at the same level as a bundle
>>>>> file?
>>>>>
>>>>> ```
>>>>> series: trusty
>>>>> services:
>>>>>   nova-compute:
>>>>> charm: ./nova-compute
>>>>> num_units: 2
>>>>> ```
>>>>>
>>>>
>>>> The above will work but not until a tweak is made to bundle deployment to
>>>> interpret a path on disk rather than a url. It's a small change. This
>>> would
>>>> be done as part of the work to remove the local repo support.
>>>
>>> Can we keep interpreting the the reference in the bundle as a url, but
>>> start supporting file urls? That seems neater than treating the cs:
>>> prefix as magic not-a-filename.
>>>
>>> The catch is that there's no sane way of referencing locations outside
>>> a base url.
>>>
>>> charm: file:nova-compute
>>>
>>> Works as a reference to a dir inside the base location, but:
>>>
>>> charm: file:../nova-compute
>>>
>>> Will not work as a reference to a sibling directory. And absolute file
>>> paths are pretty useless across machines.
>>>
>>> Martin
>>>
>>> --
>>> Juju-dev mailing list
>>> Juju-dev@lists.ubuntu.com
>>> Modify settings or unsubscribe at:
>>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>>
>>
>>
>>

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju 2.0 and local charm deployment

2016-03-08 Thread Ian Booth
So to clarify what we'll do. We'll support the same syntax in bundle files as we
do for deploy.

Deploys charm store charms:

$ juju deploy cs:wordpress
$ juju deploy wordpress

Deploys a local charm from a directory:

$ juju deploy ./charms/wordpress
$ juju deploy ./wordpress

So below deploys a local nova-compute charm in a directory co-located with the
bundle.yaml file.

 series: trusty
 services:
   nova-compute:
 charm: ./nova-compute
 num_units: 2

This one deploys a charm store charm:

 series: trusty
 services:
   nova-compute:
   charm: nova-compute
   num_units: 2



On 09/03/16 03:59, Rick Harding wrote:
> Long term we want to have a pattern when the bundle is a directory with
> local charms in a directory next to the bundles.yaml file. We could not do
> this cleanly before the multi-series charms that are just getting out the
> door. I think that bundles with local charms will be suboptimal until we
> can get those bits to line up.
> 
> I don't think we want to be doing the file based urls, but to build a
> pattern that's reusable and makes sense across systems. Creating a standard
> pattern I think is the best path forward.
> 
> On Tue, Mar 8, 2016 at 12:26 PM Martin Packman <martin.pack...@canonical.com>
> wrote:
> 
>> On 05/03/2016, Ian Booth <ian.bo...@canonical.com> wrote:
>>>>
>>>> How will bundles work which reference local charms? Will this work as
>>>> expected where nova-compute is a directory at the same level as a bundle
>>>> file?
>>>>
>>>> ```
>>>> series: trusty
>>>> services:
>>>>   nova-compute:
>>>> charm: ./nova-compute
>>>> num_units: 2
>>>> ```
>>>>
>>>
>>> The above will work but not until a tweak is made to bundle deployment to
>>> interpret a path on disk rather than a url. It's a small change. This
>> would
>>> be done as part of the work to remove the local repo support.
>>
>> Can we keep interpreting the the reference in the bundle as a url, but
>> start supporting file urls? That seems neater than treating the cs:
>> prefix as magic not-a-filename.
>>
>> The catch is that there's no sane way of referencing locations outside
>> a base url.
>>
>> charm: file:nova-compute
>>
>> Works as a reference to a dir inside the base location, but:
>>
>> charm: file:../nova-compute
>>
>> Will not work as a reference to a sibling directory. And absolute file
>> paths are pretty useless across machines.
>>
>> Martin
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
> 
> 
> 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju 2.0 and local charm deployment

2016-03-05 Thread Ian Booth
> 
> Does this mean it won't be possible to deploy old single-series
> charms with Juju without modifying metadata.yaml to add the supported
> series?
> 

You can use the --series argument:

$ juju deploy ./trusty/mysql --series trusty

We could look at pulling the series out of the path if it's an old single-series
charm without series defined in metadata. Would that be an approach we'd be
willing to adopt?
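
That is, something like this (a sketch of the proposed inference, not current
behaviour):

 $ juju deploy ./trusty/mysql
 # series "trusty" would be inferred from the parent directory name when
 # metadata.yaml declares no series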


-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju 2.0 and local charm deployment

2016-03-05 Thread Ian Booth
Hey Marco

> 
> I'm a +1
> 
> How will bundles work which reference local charms? Will this work as
> expected where nova-compute is a directory at the same level as a bundle
> file?
> 
> ```
> series: trusty
> services:
>   nova-compute:
> charm: ./nova-compute
> num_units: 2
> ```
> 

The above will work but not until a tweak is made to bundle deployment to
interpret a path on disk rather than a url. It's a small change. This would be
done as part of the work to remove the local repo support.

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: admin is dead, long live $USER

2016-03-03 Thread Ian Booth
Hey Tim

The new bootstrap UX has not removed any --admin-user flag.
I can see that the server jujud bootstrap command has an --admin-user argument
but it appears this is never set anywhere in the cloud-init scripts, or not
that I can see. I've checked older versions of the relevant files and can't see
where
we've ever used this.

So maybe we have a capability to bootstrap the controller agent with a specified
admin-user but have not hooked it up yet?

On 04/03/16 08:11, Tim Penhey wrote:
> Ah... it used to be there :-) At least it is on my feature branch, but I
> don't think I have merged the most recent master updates that has the
> work to re-work bootstrap for the new cloud credentials stuff.
> 
> Tim
> 
> On 04/03/16 10:09, Rick Harding wrote:
>> If we do that we need to also make it configurable on bootstrap as an
>> option.
>>
>> +1 overall
>>
>>
>> On Thu, Mar 3, 2016, 4:07 PM Tim Penhey wrote:
>>
>> Hi folks,
>>
>> I was thinking that with the upcoming big changes with 2.0, we should
>> tackle a long held issue where we have the initial user called "admin".
>>
>> There was a request some time back that we should use the current user's
>> name. The reason it wasn't implemented at that time was due to logging
>> into the GUI issues. These have been resolved some time back with the
>> multiple user support that was added.
>>
>> All the server side code handles the ability to define the initial user
>> for the controller model, and we do this in all the tests, so the
>> default test user is actually called "test-admin".
>>
>> I *think* that all we need to do is change the default value we use in
>> the bootstrap command for the AdminUserName (--admin-user flag) from
>> "admin" to something we derive from the current user.
>>
>> Probably worth doing now.
>>
>> Thoughts?
>>
>> Tim
>>

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Juju 2.0 and local charm deployment

2016-03-03 Thread Ian Booth
Hi folks

TL;DR: we want to remove support for old-style local charm repositories in
Juju 2.0.

Hopefully everyone is aware that Juju 2.0 and the charm store will support
multi-series charms. To recap, a multi-series charm is one which can declare
that it supports more than one series; you no longer need a separate copy of
the charm for precise vs trusty vs xenial. Note that all the declared series
must be for the same OS, so you'll still need separate charm sources for
Windows vs Ubuntu vs CentOS.
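
For example, a multi-series charm declares its supported series in its
metadata.yaml along these lines (a minimal sketch; other fields omitted):

name: mysql
summary: MySQL database
series:
  - precise
  - trusty
  - xenial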

Here's a link to the release notes
https://jujucharms.com/docs/devel/temp-release-notes#multi-series-charms

Juju 2.0 will also support deploying bundles natively
https://jujucharms.com/docs/devel/temp-release-notes#native-support-for-charm-bundles

So, with multi-series charm support, local charm deployment is now also a lot
easier. Back in Juju 1.x, to deploy local charms you needed to set up a
so-called charm repository with a prescribed directory layout: one directory
per series.

_ mycharms
 |_precise
  |_mysql
 |_trusty
  |_mysql
 |_bundle
  |_openstack

You deployed using a local URL syntax:

$ juju deploy --repository ~/mycharms local:trusty/mysql

$ juju deploy --repository ~/mycharms local:bundle/openstack

The above structure was fine when charms were duplicated for each series. But
one of its limitations is that you can't easily git checkout mycharm and deploy
straight from the VCS source on disk.

Juju 2.0 supports deploying charms and bundles straight from any directory,
including where you've checked out your launchpad/github charm source.

$ juju deploy ~/mygithubstuff/mysql

$ juju deploy ~/mygithubstuff/openstack/bundle.yaml

So the above, combined with the consolidation of charms for many series into
one source tree, means that the old local repo support is no longer needed.

Will anyone complain if we drop local repos in Juju 2.0? Is there a use case
where it's absolutely required to retain this?






-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Logging into the API on Juju 2.0

2016-02-29 Thread Ian Booth
No, you are right.

$ juju list-controllers --format yaml

is better.

On 01/03/16 14:49, John Meinel wrote:
> Is there a reason to tell people to look at "controllers.yaml" rather than
> having the official mechanism be something like "juju list-controllers
> --format=yaml" ? I'd really like to avoid tying 3rd party scripts to our
> on-disk configuration. We can keep CLI compatibility, but on-disk
> structures aren't something we really want to commit to forever.
> 
> John
> =:->
> 
> On Tue, Mar 1, 2016 at 8:22 AM, Ian Booth <ian.bo...@canonical.com> wrote:
> 
>> Just to be clear, the remote API for listing models for a given controller
>> exists. But you do need to look at controllers.yaml to see what controllers
>> you have bootstrapped or have access to in order to make the remote list
>> models API call.
>>
>> On 01/03/16 13:14, Adam Stokes wrote:
>>> Got it squared away, being able to replicate `juju list-controllers`
>> didn't
>>> have a remote api. So I will continue to read from
>>> ~/.local/share/juju/controllers.yaml. My intention was to basically see
>>> what controllers were already bootstrapped and gather the models for
>> those
>>> controllers using the remote juju api. But that doesn't exist so I will
>>> mimic what `juju list-controllers` does and read from the yaml file for
>>> controllers that are local to my admin and users.
>>>
>>> On Mon, Feb 29, 2016 at 9:40 PM, Tim Penhey <tim.pen...@canonical.com>
>>> wrote:
>>>
>>>> It is the controller that you have logged into for the API.
>>>>
>>>> What are you wanting?
>>>>
>>>> You need a different API connection for each controller.
>>>>
>>>> Tim
>>>>
>>>> On 01/03/16 15:05, Adam Stokes wrote:
>>>>> Right, but how do you specify which controller you want to list the
>>>>> models for? The only way I can see is to manually `juju switch
>>>>> <controller>` then re-login to the API and run the AllModels method. Is
>>>>> there a way (as an administrator) to specify which controller you want
>>>>> to list the models for?
>>>>>
>>>>> On Mon, Feb 29, 2016 at 8:46 PM, Ian Booth <ian.bo...@canonical.com
>>>>> <mailto:ian.bo...@canonical.com>> wrote:
>>>>>
>>>>>
>>>>>
>>>>> On 01/03/16 11:25, Adam Stokes wrote:
>>>>> > On Mon, Feb 29, 2016 at 7:24 PM, Tim Penhey
>>>>> <tim.pen...@canonical.com <mailto:tim.pen...@canonical.com>>
>>>>> > wrote:
>>>>> >
>>>>> >> On 01/03/16 03:48, Adam Stokes wrote:
>>>>> >>> Is there a way to list all models for a specific controller?
>>>>> >>
>>>>> >> Yes.
>>>>> >
>>>>> >
>>>>> > Mind pointing me to the api docs that has that capability?
>>>>> >
>>>>>
>>>>>
>>>> https://godoc.org/github.com/juju/juju/api/controller#Client.AllModels
>>>>>
>>>>>
>>>>
>>>>
>>>
>>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Logging into the API on Juju 2.0

2016-02-29 Thread Ian Booth
Just to be clear, the remote API for listing models for a given controller
exists. But you do need to look at controllers.yaml to see what controllers you
have bootstrapped or have access to in order to make the remote list models API
call.
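
In practice the flow looks something like this (a sketch; list-models here
stands in for the remote AllModels call):

$ juju list-controllers --format yaml  # see which controllers you have
$ juju switch mycontroller
$ juju list-models                     # AllModels for the current controller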

On 01/03/16 13:14, Adam Stokes wrote:
> Got it squared away, being able to replicate `juju list-controllers` didn't
> have a remote api. So I will continue to read from
> ~/.local/share/juju/controllers.yaml. My intention was to basically see
> what controllers were already bootstrapped and gather the models for those
> controllers using the remote juju api. But that doesn't exist so I will
> mimic what `juju list-controllers` does and read from the yaml file for
> controllers that are local to my admin and users.
> 
> On Mon, Feb 29, 2016 at 9:40 PM, Tim Penhey <tim.pen...@canonical.com>
> wrote:
> 
>> It is the controller that you have logged into for the API.
>>
>> What are you wanting?
>>
>> You need a different API connection for each controller.
>>
>> Tim
>>
>> On 01/03/16 15:05, Adam Stokes wrote:
>>> Right, but how do you specify which controller you want to list the
>>> models for? The only way I can see is to manually `juju switch
>>> <controller>` then re-login to the API and run the AllModels method. Is
>>> there a way (as an administrator) to specify which controller you want
>>> to list the models for?
>>>
>>> On Mon, Feb 29, 2016 at 8:46 PM, Ian Booth <ian.bo...@canonical.com
>>> <mailto:ian.bo...@canonical.com>> wrote:
>>>
>>>
>>>
>>> On 01/03/16 11:25, Adam Stokes wrote:
>>> > On Mon, Feb 29, 2016 at 7:24 PM, Tim Penhey
>>> <tim.pen...@canonical.com <mailto:tim.pen...@canonical.com>>
>>> > wrote:
>>> >
>>> >> On 01/03/16 03:48, Adam Stokes wrote:
>>> >>> Is there a way to list all models for a specific controller?
>>> >>
>>> >> Yes.
>>> >
>>> >
>>> > Mind pointing me to the api docs that has that capability?
>>> >
>>>
>>>
>> https://godoc.org/github.com/juju/juju/api/controller#Client.AllModels
>>>
>>>
>>
>>
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Logging into the API on Juju 2.0

2016-02-29 Thread Ian Booth


On 01/03/16 11:25, Adam Stokes wrote:
> On Mon, Feb 29, 2016 at 7:24 PM, Tim Penhey wrote:
> 
>> On 01/03/16 03:48, Adam Stokes wrote:
>>> Is there a way to list all models for a specific controller?
>>
>> Yes.
> 
> 
> Mind pointing me to the api docs that has that capability?
> 

https://godoc.org/github.com/juju/juju/api/controller#Client.AllModels

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: LXD support (maybe)

2016-02-25 Thread Ian Booth
> 
>> I personally encourage us to use heterogeneous versions of go as much as
>> we can. Because we should be compatible as much as possible. But it does
>> look like our dependencies are going to force our hand.
>>
> 
> Agreed. I think it's healthy for Juju's devs to be using a range of Go
> versions (within reason). It helps to ensure we're not relying on
> version-specific behaviour.
> 

+1. And I like life on the edge, so 1.6 for me :-D


-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: LXD support (maybe)

2016-02-25 Thread Ian Booth
FWIW, go 1.6 works just fine with Juju on my system

On 26/02/16 08:34, Menno Smits wrote:
> On 26 February 2016 at 04:59, Horacio Duran wrote:
> 
>> be aware though, iirc that ppa replaces your go version with 1.6 (or used
>> to) which can mess your env if you are using go from ubuntu.
>>
> 
> With a bit of apt configuration you can use the lxd stable PPA without
> pulling in its Go 1.6 packages.
> 
> Here's what I did:
> 
> $ cat /etc/apt/preferences.d/lxd-stable-pin
> Package:  *
> Pin: release o=LP-PPA-ubuntu-lxc-lxd-stable
> Pin-Priority: 200
> 
> Package: lxd lxd-tools lxd-client lxcfs lxc-templates lxc cgmanager
> libcgmanager0 libseccomp2
> Pin: release o=LP-PPA-ubuntu-lxc-lxd-stable
> Pin-Priority: 500
> 
> The main problem with this approach is that you have to explicitly specify
> the package names you do want to use, which will be a problem if package
> names change or extra packages are added. Maybe someone with more apt foo
> than me knows a better way.
> 
> - Menno
> 
> 
> 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: juju devel 2.0-beta1 is available for testing

2016-02-20 Thread Ian Booth
To specify a different LXD host:

$ juju bootstrap mycontroller lxd/<remote-host>

For now, just localhost (the default) has been fully tested and is guaranteed to
work with this beta1.

There's no need to edit any clouds.yaml file for the LXD cloud. It's meant to be
really easy to use!


On 21/02/16 09:21, Marco Ceppi wrote:
> Won't the user be able to create different LXD clouds by specifying a
> remote LXD host though?
> 
> On Sun, Feb 21, 2016, 12:19 AM Ian Booth <ian.bo...@canonical.com> wrote:
> 
>> The lxd cloud works on Juju 2.0 beta1 out of the box.
>>
>> $ juju bootstrap mycontroller lxd
>>
>> There is no need to edit any clouds.yaml. It Just Works.
>>
>> It seems the confusion comes from not seeing lxd in the output of juju
>> list-clouds. list-clouds ostensibly shows available public clouds (aws,
>> azure
>> etc) and any private clouds (maas, openstack etc) added by the user. The
>> lxd
>> cloud is just built-in to Juju. But from a usability perspective, it seems
>> we should include lxd in the output of list-clouds.
>>
>> NOTE: the latest lxd 2.0.0 beta3 release recently added to the archives
>> has an
>> api change that is not compatible with Juju. You will need to ensure that
>> you're
>> still using the lxd 2.0.0 beta2 release to test with Juju.
>>
>>
>> On 21/02/16 08:26, Jorge O. Castro wrote:
>>> Awesome, a nice weekend present!
>>>
>>> I updated and LXD is not listed when I `juju list-clouds`. Rick and I
>>> were guessing that maybe because the machine I am testing on is on
>>> trusty that we exclude that cloud on purpose. If I was on a xenial
>>> machine I would assume lxd would be available?
>>>
>>> What's an example clouds.yaml look like for a lxd local provider? I
>>> tried manually adding a lxd cloud via `add-cloud` but I'm unsure of
>>> what the formatting would look like for a local provider.
>>>
>>>> Development releases use the "devel" simple-streams. You must configure
>>>> the `agent-stream` option in your environments.yaml to use the matching
>>>> juju agents.
>>>
>>> I am confused, I no longer have an environments.yaml so is this
>>> leftover from a previous release?
>>>
>>> Thanks!
>>>
>>

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: juju devel 2.0-beta1 is available for testing

2016-02-20 Thread Ian Booth
The lxd cloud works on Juju 2.0 beta1 out of the box.

$ juju bootstrap mycontroller lxd

There is no need to edit any clouds.yaml. It Just Works.

It seems the confusion comes from not seeing lxd in the output of juju
list-clouds. list-clouds ostensibly shows available public clouds (aws, azure
etc) and any private clouds (maas, openstack etc) added by the user. The lxd
cloud is just built-in to Juju. But from a usability perspective, it seems we
should include lxd in the output of list-clouds.

NOTE: the latest lxd 2.0.0 beta3 release recently added to the archives has an
api change that is not compatible with Juju. You will need to ensure that you're
still using the lxd 2.0.0 beta2 release to test with Juju.
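
A quick way to check which lxd you have installed:

$ apt-cache policy lxd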


On 21/02/16 08:26, Jorge O. Castro wrote:
> Awesome, a nice weekend present!
> 
> I updated and LXD is not listed when I `juju list-clouds`. Rick and I
> were guessing that maybe because the machine I am testing on is on
> trusty that we exclude that cloud on purpose. If I was on a xenial
> machine I would assume lxd would be available?
> 
> What's an example clouds.yaml look like for a lxd local provider? I
> tried manually adding a lxd cloud via `add-cloud` but I'm unsure of
> what the formatting would look like for a local provider.
> 
>> Development releases use the "devel" simple-streams. You must configure
>> the `agent-stream` option in your environments.yaml to use the matching
>> juju agents.
> 
> I am confused, I no longer have an environments.yaml so is this
> leftover from a previous release?
> 
> Thanks!
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Please merge master into your feature branches

2016-02-18 Thread Ian Booth
FYI for folks developing feature branches for juju-core.

juju-core master has been updated to include the first round of functionality to
improve the bootstrap experience. The consequence of this is that CI scripts
needed to be updated to match. This means that any feature branch which has not
had master commit 294388 or later merged in will not work with CI and so will
not be blessed for release.
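
A minimal way to do that, assuming your local branch is called
my-feature-branch and you have an upstream remote pointing at juju-core:

$ git checkout my-feature-branch
$ git fetch upstream
$ git merge upstream/master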



-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: EC2 VPC firewall rules

2016-02-18 Thread Ian Booth
Login was bumped to v3 to prevent accidental logins from older Juju clients
which may appear to connect successfully but then fail later depending on what
operations are performed.

It also allows the "this version is incompatible" message. This was done for 1.x
clients logging into Juju 2.0 servers, but the other way around was missed out.
We'll fix that for beta2.

On 18/02/16 20:51, John Meinel wrote:
> Shouldn't we at least be giving a "juju 2.0 cannot operate with a juju 1.X
> API server, please install juju-1.25 if you want to use this system", or
> something along those lines. "Admin(3).Login is not implemented" sounds like
> a poor way for them to discover that.
> 
> John
> =:->
> 
> 
>> On Thu, Feb 18, 2016 at 2:49 PM, John Meinel wrote:
> 
>> Looks like the changes to Login broke compatibility. We are adding a Login
>> v3, but it looks like the new code will refuse to try to Login to v2. I'm a
>> bit surprised, but it means you'll need to bootstrap again if you want to
>> test it out with current trunk.
>>
>> John
>> =:->
>>
>>
>> On Thu, Feb 18, 2016 at 2:47 PM, Tom Barber wrote:
>>
>>> Hey Dimiter,
>>>
>>> Thanks for that. As I am running trunk I wanted to make sure I was fully up
>>> to date before progressing further. I pulled trunk locally and ran juju
>>> upgrade-juju --upload-tools
>>>
>>> That gives me:
>>>
>>> WARNING no addresses found in space "default"
>>> WARNING using all API addresses (cannot pick by space "default"):
>>> [public:52.30.224.20 local-cloud:172.31.2.38]
>>> WARNING discarding API open error: no such request - method
>>> Admin(3).Login is not implemented (not implemented)
>>> ERROR no such request - method Admin(3).Login is not implemented (not
>>> implemented)
>>>
>>>
>>> I assume the ERROR portion is pretty critical. So here's a slightly off
>>> topic question, which I suspect has a very simple yes/no answer. Can I
>>> either a) force a bootstrapped environment upgrade b) manually upgrade an
>>> environment by passing the error but making the bootstrap node up to date
>>> c) export the existing nodes it manages and import them back into a new
>>> bootstrap node without having to recreate them as well?
>>>
>>> Thanks
>>>
>>> Tom
>>>
>>> --
>>>
>>> Director Meteorite.bi - Saiku Analytics Founder
>>> Tel: +44(0)5603641316
>>>
>>> (Thanks to the Saiku community we reached our Kickstart goal, but you can
>>> always help by sponsoring the project.)
>>>
>>> On 18 February 2016 at 10:42, Dimiter Naydenov <
>>> dimiter.nayde...@canonical.com> wrote:
>>>
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 18.02.2016 12:01, Tom Barber wrote:
> Hello folks
>
> I'm not sure if my tinkering has broken something, the fact I'm
> running trunk has broken something or I just don't understand
> something.
>
> Until last week we've been running EC2 classic, but we have now
> switched to EC2-VPC and have launched a few machines.
>
> juju ssh to these machines works fine and I've been configuring
> them to suit our needs.
>
> Then I came to look at external access, `juju expose mysqldb` for
> example, I would then expect to be able to access it from the
> outside world, but can't unless go into my VPC settings and open
> the port in one of the juju security groups, at which point
> external access works fine.
>
> Am I missing something?
>
> Thanks
>
> Tom
>
>
 Hey Tom,

 What you're describing sounds like a bug, as "juju expose <service>"
 should trigger the firewaller worker to open the ports the service has
 declared (with open-ports within the charm) using the security group
 assigned to the host machine for all units of that service.

 Have you changed the "firewall-mode" setting by any chance?
 Can you provide some logs from /var/log/juju/*.log on the bootstrap
 instance (machine 0)?

 Cheers,
 - --
 Dimiter Naydenov 
 Juju Core Sapphire team 
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1

 iQEcBAEBAgAGBQJWxaAXAAoJENzxV2TbLzHwGgEIAIuj0sPzh7S/4jvTQ6aA/dwP
 i7WkSZ586JkNbEFeCBjDavO6oZFOwIAEW+EpGuy1C0O8BJr5Y2YJBMR96pdf3Rj/
 Y6xS4Byt0HrwCWixt7ut6zu7BsT+nv6YFO7fNQvNYLyroufzpqUKaALJp5xwedkJ
 JIx1iyLnAZ4ZC1/0VkoBM/UjbZN7xQIteNvChBCZSSk8RvbqXCKhbXZKuUKMAw5g
 R+D3wIwLEyZHb5SATcSSdE6nidv4A0F2waac1/3lOvFebeOsnapnRKkIDp3Y9v19
 /zDiDLWSJJvMDau8iIzSQ4STK/sLEmA78iRNkfDRWRifv0z1KkY6ppnhaS+jrj4=
 =kPA7
 -END PGP SIGNATURE-


Re: Breaking news - New Juju 2.0 home^H^H^H^H data location

2016-02-08 Thread Ian Booth
Yes

>> Very, very soon, the need for an environments.yaml file will be no more,

Hopefully in time for the 2.0 beta due sometime next week.

On 09/02/16 01:33, Adam Stokes wrote:
> Does this mean the environments.yaml file is going away at some point?
> 
> On Mon, Feb 8, 2016 at 2:16 AM, Ian Booth <ian.bo...@canonical.com> wrote:
> 
>> As advance notice, the next alpha release of Juju 2.0 (due this week) will
>> use a
>> new default home location. Juju will now adhere to the the XDG desktop
>> standard
>> and use this directory (by default):
>>
>> ~/.local/share/juju
>>
>> to store its working files (%APPDATA%/Juju on Windows). This is partly to
>> allow
>> Juju 2.0 to be installed alongside 1.x.
>>
>> Very, very soon, the need for an environments.yaml file will be no more,
>> meaning
>> there will be no need for the user to edit any files in that directory. As
>> a
>> sneak peek of what is coming, you will be able to, out of the box:
>>
>> $ juju bootstrap mycontroller aws/us-west-2
>>
>> Note that there's no need to "$ juju init" or edit any environments.yaml to
>> use
>> the public clouds and regions supported by Juju. Adding support for new
>> regions
>> or cloud information is a simple matter of running "$ juju update-clouds".
>> There's more to come, but you get the idea.
>>
>> Anyway, the point of the above is to say the location of the home/data
>> directory
>> doesn't really matter as there will be no need to poke around inside it.
>>
>> As an interim measure, if you run off master, just:
>>
>> mkdir ~/.local/share/juju
>> cp -r ~/.juju/* ~/.local/share/juju
>>
>> if you want to use existing models with the latest build from source.
>>
>>
>>
>>

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Breaking news - New Juju 2.0 home^H^H^H^H data location

2016-02-07 Thread Ian Booth
As advance notice, the next alpha release of Juju 2.0 (due this week) will use a
new default home location. Juju will now adhere to the XDG desktop standard
and use this directory (by default):

~/.local/share/juju

to store its working files (%APPDATA%/Juju on Windows). This is partly to allow
Juju 2.0 to be installed alongside 1.x.

Very, very soon, the need for an environments.yaml file will be no more, meaning
there will be no need for the user to edit any files in that directory. As a
sneak peek of what is coming, you will be able to, out of the box:

$ juju bootstrap mycontroller aws/us-west-2

Note that there's no need to "$ juju init" or edit any environments.yaml to use
the public clouds and regions supported by Juju. Adding support for new regions
or cloud information is a simple matter of running "$ juju update-clouds".
There's more to come, but you get the idea.

Anyway, the point of the above is to say the location of the home/data directory
doesn't really matter as there will be no need to poke around inside it.

As an interim measure, if you run off master, just:

mkdir ~/.local/share/juju
cp -r ~/.juju/* ~/.local/share/juju

if you want to use existing models with the latest build from source.
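
If you need the data directory somewhere else entirely, setting JUJU_DATA
should override the default location (assuming the usual XDG-style environment
override is wired up in the build you're running):

$ export JUJU_DATA=~/somewhere/else/juju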




-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev

