Re: Problem with editing file on container

2017-12-10 Thread Andrew Wilkins
On Sun, Dec 10, 2017 at 3:00 AM Naas Si Ahmed 
wrote:

> Hi,
>
> I'm trying to modify a file using the command vi or nano to edit a file on
> a juju container, but I'm receiving an error such as: terminal unknown.
>

In recent versions of Juju, "juju run" does not offer the ability to run
interactive commands.

> Is there any way that I can use to edit a file graphically from another
> juju container?
>

You can use "juju ssh" to SSH to the machine: "juju ssh 8 vi
/etc/conf.rules".

> The juju command I used is:
> Juju run "vi /etc/conf.rules" --machine =8
>
> Thank you,
> Ahmed.
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju 2.3.0 is here!

2017-12-08 Thread Andrew Wilkins
On Fri, Dec 8, 2017 at 6:59 AM Nicholas Skaggs <
nicholas.ska...@canonical.com> wrote:

> The Juju team are extremely pleased to announce the release of Juju 2.3.
> Juju is now more versatile, more efficient, and more configurable than ever.
>
> Cross Model Relations deliver a new way of organising your software stack.
> Deploy a database in one model and connect it to an application running in
> another, even one running on a different controller, or even a different
> cloud.
>
> For containers at scale, Juju now integrates Canonical's Fan overlay
> network system. This allows containers to map network traffic to any other
> container on the fan network without distributed databases, consensus
> protocols, or any extra overhead.
>
> Juju's support for bundles has made it possible to quickly deploy
> connected sets of applications for some time now, but no two use cases are
> the same. That's why we have introduced the concept of an 'overlay' bundle
> - now you can easily add your own configuration and tweaks to a bundle at
> deploy time. See below for links to more information on this and other key
> features.
>

Hi folks,

Unfortunately a critical bug [0] has escaped to the field. This bug affects
existing relations in upgraded models. Models created after upgrading, or
with a fresh bootstrap, or where relations are created after upgrading,
will not be affected.

I would not recommend upgrading from 2.x to 2.3.0. We will be working on a
fix for 2.3.1, and I expect this issue will bring that release forward much
sooner. If you have already upgraded and are affected, then you can fix it
by adding a document to the "statuses" collection in Mongo, as described in
the bug.

Cheers,
Andrew

[0] https://bugs.launchpad.net/juju/+bug/1737107


>
>
> ## How can I get it?
>
>
> The best way to get your hands on this release of Juju is to install it
> via snap packages (see https://snapcraft.io/ for more info on snaps).
>
>
>    snap install juju --classic
>
>
> Other packages are available for a variety of platforms. Please see the
> online documentation at https://jujucharms.com/docs/2.3/reference-install.
> Those subscribed to a snap channel should be automatically upgraded. If
> you’re using the ppa/homebrew, you should see an upgrade available.
>
>
> For highlights of this release, please see the documentation at
>
> https://jujucharms.com/docs/2.3/whats-new. Further details are below.
>
>
>
> ## New
>
>
> * Cross Model Relations:
>
>  - see https://jujucharms.com/docs/2.3/models-cmr
>
>
> * Persistent Storage:
>
>  - see https://jujucharms.com/docs/2.3/charms-storage
>
>
> * FAN:
>
>  - see https://jujucharms.com/docs/2.3/charms-fan
>
>
> * Bundle deployments:
>
>  - Changed flags for deploying bundles to existing machines
>
>  - Bundle deploy flag --bundle-config replaced with --overlay
>
>  - Deploying bundles now supports --dry-run
>
>  - Deploying bundles can now target existing machines
>
>
> * Update Application Series:
>
>  - see https://jujucharms.com/docs/2.3/howto-updateseries
>
>
> * Parallelization of the Machine Provisioner:
>
> - Groups of machines will now be provisioned in parallel, reducing
> deployment time, especially on large bundles.
>
>
> * open-port and close-port hook tools now support ICMP
>
> - The open-port and close-port hook tools now support opening firewall
> access for ICMP. The syntax is:
>
> open-port icmp
>
>
> * LXD Storage Provider:
>
>  - see https://jujucharms.com/docs/2.3/charms-storage#lxd-(lxd)
>
>
> ## Fixes
>
>
> * Listing of Juju models is more efficient and can now handle more models
> gracefully
>
> * Leadership coordination is no longer tied to local time, which avoids
> problems with clock skew and reduces overall load on the database
>
> * Models are now destroyed more reliably, with several fixes to avoid
> negative impacts while they are being removed
>
>
> You can check the milestones for a detailed breakdown of the Juju bugs we
> have fixed:
>
>
> https://launchpad.net/juju/+milestone/2.3.0
>
> https://launchpad.net/juju/+milestone/2.3-rc2
>
> https://launchpad.net/juju/+milestone/2.3-rc1
>
> https://launchpad.net/juju/+milestone/2.3-beta3
>
> https://launchpad.net/juju/+milestone/2.3-beta2
>
> https://launchpad.net/juju/+milestone/2.3-beta1
>
>
>
> ## Known issues
>
> These issues are targeted to be addressed in the upcoming 2.3.1 release.
>
>
> * Firewaller issues on vmware vsphere
> https://bugs.launchpad.net/juju/+bug/1732665
>
>
> * LXD broken on vmware
> https://bugs.launchpad.net/juju/+bug/1733882
>
>
> * Can't deploy bundle with map-machines=existing and subordinates
> https://bugs.launchpad.net/juju/+bug/1736592
>
>
> * load spike on controller following remove-application
> https://bugs.launchpad.net/juju/+bug/1733708
>
>
>
> ## Feedback Appreciated!
>
>
> We encourage everyone to let us know how you're using Juju.
>
>
> Join us at regular Juju shows - subscribe to our Youtube channel
> 


Re: Side effects of resizing a juju machine

2017-11-15 Thread Andrew Wilkins
On Thu, Nov 16, 2017 at 11:38 AM Akshat Jiwan Sharma 
wrote:

> Hi everyone,
>
> A couple of times I've noticed that the capacity of a machine provisioned
> by juju is much more than what I require for my workload. I was wondering
> that  if I were to manually resize the machine would it break any of juju
> services?
>

Juju won't care, you will just have some incorrect information in "juju
status". We record hardware characteristics for each machine, but it's used
only for describing machines to the user.

FWIW, I've just tested this:
 - juju bootstrap azure --bootstrap-constraints
instance-type=Standard_DS12_v2
 - juju switch controller && juju deploy ubuntu --to 0
Then resized the machine to Standard_D4s_v3 via the Azure Portal. Juju came
back up fine, as did the unit.

Some charms might take a snapshot of hardware details when they're
installed, but I'm not aware of which, if any, do that. Juju itself
doesn't care.

Cheers,
Andrew


> Thanks,
> Akshat
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Effect of lease tweaks

2017-11-01 Thread Andrew Wilkins
On Wed, Nov 1, 2017 at 10:43 PM John A Meinel 
wrote:

> So I wanted to know if Andrew's changes in 2.3 are going to have a
> noticeable affect at scale on Leadership. So I went and set up a test with
> HA controllers running 10 machines each with 3 containers, and then
> distributing ~500 applications each with 3 units across everything.
> I started at commit 2e50e5cf4c3 which is just before Andrew's Lease patch
> landed.
>
> juju bootstrap aws/eu-west-2 --bootstrap-constraints
> instance-type=m4.xlarge --config vpc-id=
> juju enable-ha -n3
> # Wait for things to stabilize
> juju deploy -B cs:~jameinel/ubuntu-lite -n10 --constraints
> instance-type=m4.xlarge
> # wait
>
> #set up the containers
> for i in `seq 0 9`; do
>   juju deploy -n3 -B cs:~jameinel/ubuntu-leader ul --to
> lxd:${i},lxd:${i},lxd:${i}
> done
>
> # scale up. I did this more in batches of a few at a time, but slowly grew
> all the way up
> for j in `seq 1 49`; do
>   echo $j
>   for i in `seq 0 9`; do
> juju deploy -B -n3 cs:~jameinel/ubuntu-leader ul${i}${j} --to
> ${i}/lxd/0,${i}/lxd/1,${i}/lxd/2 &
>   done
>   time wait
> done
>
> I let it go for a while until "juju status" was happy that everything was
> up and running. Note that this was 1500 units, 500 applications in a single
> model.
> time juju status was around 4-10s.
>
> I was running 'mongotop' and watching 'top' while it was running.
>
> I then upgraded to the latest juju dev (c49dd0d88a).
> Now, the controller immediately started thrashing, with bad lease
> documents in the database, and eventually got to the point that it ran out
> of open file descriptors. Theoretically upgrading 2.2 => 2.3 won't have the
> same problem because the actual upgrade step should run.
>

The upgrade steps do run for upgrades of 2.3-beta2.M -> 2.3-beta2.N,
because the build number is changing. I've tested that.

It appears that the errors are preventing Juju from getting far enough
along to run the upgrade steps. I'll continue looking today.

Thanks very much for the load testing, that's very encouraging.


> However, if I just did "db.leases.remove({})" it recovered.
> I ended up having to restart mongo and jujud to recover from the open file
> handles, but it did eventually recover.
>
> At this point, I waited again for everything to look happy, and watch
> mongotop and top again.
>
> These aren't super careful results, where I would want to run things for
> like an hour each and check the load over that whole time. Really I should
> have set up prometheus monitoring. But as a quick check, these are the top
> values for mongotop before:
>
>   ns                          total    read   write
>   local.oplog.rs              181ms   181ms     0ms
>   juju.txns                   120ms    10ms   110ms
>   juju.leases                  80ms    34ms    46ms
>   juju.txns.log                24ms     4ms    19ms
>
>   ns                          total    read   write
>   local.oplog.rs              208ms   208ms     0ms
>   juju.txns                   140ms    12ms   128ms
>   juju.leases                  98ms    42ms    56ms
>   juju.charms                  43ms    43ms     0ms
>
>   ns                          total    read   write
>   local.oplog.rs              220ms   220ms     0ms
>   juju.txns                   161ms    14ms   146ms
>   juju.leases                 115ms    52ms    63ms
>   presence.presence.beings     69ms    68ms     0ms
>
>   ns                          total    read   write
>   local.oplog.rs              213ms   213ms     0ms
>   juju.txns                   164ms    15ms   149ms
>   juju.leases                  82ms    35ms    47ms
>   presence.presence.beings     79ms    78ms     0ms
>
>   ns                          total    read   write
>   local.oplog.rs              221ms   221ms     0ms
>   juju.txns                   168ms    13ms   154ms
>   juju.leases                  95ms    40ms    55ms
>   juju.statuses                33ms    16ms    17ms
>
> totals:
>   local.oplog.rs  1043
>   juju.txns        868
>   juju.leases      470
>
> and after
>
>   ns                          total    read   write
>   local.oplog.rs               95ms    95ms     0ms
>   juju.txns                    68ms     6ms    61ms
>   juju.leases                  33ms    13ms    19ms
>   juju.txns.log                13ms     3ms    10ms
>
>   ns                          total    read   write
>   local.oplog.rs              200ms   200ms     0ms
>   juju.txns                   160ms    10ms   150ms
>   juju.leases                  78ms    35ms    42ms
>   juju.txns.log                29ms     4ms    24ms
>
>   ns                          total    read   write
>   local.oplog.rs              151ms   151ms     0ms
>   juju.txns                   103ms     6ms    97ms
>   juju.leases                  45ms    20ms    25ms
>   juju.txns.log                21ms     6ms    15ms
>
>   ns                          total    read   write
>   local.oplog.rs              138ms   138ms     0ms
>

Re: Disk ID for a provisioned instance

2017-10-30 Thread Andrew Wilkins
On Tue, Oct 31, 2017 at 1:40 PM Akshat Jiwan Sharma <akshatji...@gmail.com>
wrote:

> Thanks Andrew. I'm using google cloud platform. But also planning to use
> aws in the future.
>

For AWS, we tag the root disk EBS volume with:

   key=Name
   value=${INST_ID}-root

So if you add a machine in Juju and it gets assigned the instance ID
"inst-foo", then the root disk EBS volume will have a Name tag with the
value "inst-foo-root".

I don't know if we can guarantee that this will remain true forever, but it
hasn't changed in a long time.
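
For example, you could look the volume up with the stock AWS CLI along
these lines (a sketch; the instance ID "i-0abc123" is hypothetical):

    aws ec2 describe-volumes \
        --filters "Name=tag:Name,Values=i-0abc123-root" \
        --query 'Volumes[].VolumeId' --output text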

HTH,
Andrew


> On Tue, Oct 31, 2017 at 8:23 AM, Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
>> On Tue, Oct 31, 2017 at 10:37 AM Akshat Jiwan Sharma <
>> akshatji...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I'd like to automate backups for my server provisioned via juju. For
>>> that I'm planning to use terraform. To take a snapshot of the disk,
>>> terraform needs a disk id.
>>>
>>> Is there a way I can get disk ID using juju commands? Juju show machine
>>> only gives the capacity of the disk not its id.
>>>
>>
>> Not at the moment. We'll need to extend the data model and update the
>> providers to support this. We do record information about volumes that Juju
>> provisions, but that excludes the root/OS disk.
>>
>> On a machine that I deployed on google cloud platform the disk id is same
>>> as the instance id. Can I assume that the disk id is same as the instance
>>> id everywhere?
>>>
>>
>> No, that's not a safe assumption in general. Which cloud providers are
>> you using?
>>
>> Thanks,
>>> Akshat
>>> --
>>> Juju mailing list
>>> Juju@lists.ubuntu.com
>>> Modify settings or unsubscribe at:
>>> https://lists.ubuntu.com/mailman/listinfo/juju
>>>
>>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Disk ID for a provisioned instance

2017-10-30 Thread Andrew Wilkins
On Tue, Oct 31, 2017 at 10:37 AM Akshat Jiwan Sharma 
wrote:

> Hi,
>
> I'd like to automate backups for my server provisioned via juju. For that
> I'm planning to use terraform. To take a snapshot of the disk, terraform
> needs a disk id.
>
> Is there a way I can get disk ID using juju commands? Juju show machine
> only gives the capacity of the disk not its id.
>

Not at the moment. We'll need to extend the data model and update the
providers to support this. We do record information about volumes that Juju
provisions, but that excludes the root/OS disk.

> On a machine that I deployed on google cloud platform the disk id is same
> as the instance id. Can I assume that the disk id is same as the instance
> id everywhere?
>

No, that's not a safe assumption in general. Which cloud providers are you
using?

Thanks,
> Akshat
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: open-port: command not found

2017-10-23 Thread Andrew Wilkins
On Mon, Oct 23, 2017 at 5:12 PM Marco Ceppi <marco.ce...@canonical.com>
wrote:

> On Mon, Oct 23, 2017, 03:59 Andrew Wilkins <andrew.wilk...@canonical.com>
> wrote:
>
>> On Mon, Oct 23, 2017 at 4:20 AM Akshat Jiwan Sharma <
>> akshatji...@gmail.com> wrote:
>>
>>> HI,
>>>
>>> I'm trying to manually expose a port on a juju machine. According to this
>>> answer
>>> <https://askubuntu.com/questions/808176/how-to-manually-open-a-port-in-juju>
>>> I should be able to do something like this:-
>>>
>>>  juju run  "open-port 443" --all
>>>
>>> However when I type this in my shell it throws an error
>>>
>>> open-port: command not found
>>>
>>
>> The difference between the command you're running and the one on
>> AskUbuntu is that you're not passing --unit. When you pass --unit, it runs
>> the command in the context of a unit on the machine. You must be running in
>> the context of a unit to use "hook tools", such as open-port.
>>
>
> It seems weird that `juju run` behaves differently when using --unit and
> --all, is there a particular reason for that? I wouldn't expect the above
> command to fail.
>

`juju run` supports running commands in either a machine or unit context.
Only if you run within a unit context do you get access to hook tools.
Hooks require a unit context.

"--all" means "all machines", so the command runs in a machine context, for
each machine. You could have multiple units on a machine, so we can't
automatically choose a unit. Even if we could, what would be the use case
for doing that? Different machines will run different applications, which
will each have their own firewall requirements.
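
Concretely (a sketch; "myapp/0" is a hypothetical unit name):

    juju run --all "hostname"                  # machine context: no hook tools
    juju run --unit myapp/0 "open-port 443"    # unit context: hook tools work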

Cheers,
Andrew
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: open-port: command not found

2017-10-22 Thread Andrew Wilkins
On Mon, Oct 23, 2017 at 11:09 AM Akshat Jiwan Sharma <akshatji...@gmail.com>
wrote:

> Thanks Andrew. Just one more question how does the open-port command
> behave with respect to the  firewalls with cloud providers. Specifically
> I'm asking in context of google cloud platform which by default only allows
> port 80 and 443(IIRC). So after running this command will I have to adjust
> firewall rules there as well?
>

That's exactly what open-port/expose is controlling :)

When you run open-port (or close-port), you're updating Juju's database to
say which ports should be open for the unit. When you run "juju expose", it
updates Juju's database to say that the "open" ports for the units of the
specified application should now be exposed. Juju will then update the
cloud firewall to come in line with what's in the Juju database.
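
In other words (a sketch, assuming an application named "myapp"):

    juju run --unit myapp/0 "open-port 443"   # record the open port in Juju's database
    juju expose myapp                         # Juju then syncs the cloud firewall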

Cheers,
Andrew


> Thanks,
> Akshat
>
> On Mon, Oct 23, 2017 at 7:28 AM, Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
>> On Mon, Oct 23, 2017 at 4:20 AM Akshat Jiwan Sharma <
>> akshatji...@gmail.com> wrote:
>>
>>> HI,
>>>
>>> I'm trying to manually expose a port on a juju machine. According to this
>>> answer
>>> <https://askubuntu.com/questions/808176/how-to-manually-open-a-port-in-juju>
>>> I should be able to do something like this:-
>>>
>>>  juju run  "open-port 443" --all
>>>
>>> However when I type this in my shell it throws an error
>>>
>>> open-port: command not found
>>>
>>
>> The difference between the command you're running and the one on
>> AskUbuntu is that you're not passing --unit. When you pass --unit, it runs
>> the command in the context of a unit on the machine. You must be running in
>> the context of a unit to use "hook tools", such as open-port.
>>
>> I can verify that the application on this particular controller is
>>> already exposed and it thus satisfies the requirement for running this
>>> command.
>>>
>>> >"The port range will only be open while the application is exposed."
>>>
>>> Can you help me understand what I'm doing wrong?
>>>
>>
>> Ports are managed on a per-unit basis, so you need to execute the "run"
>> command against a unit or application, using --unit or --application
>> respectively.
>>
>> Once you've run open-port, you'll need to run "juju expose <application>"
>> for the ports to actually be opened up.
>>
>> Thanks,
>>> Akshat
>>> --
>>> Juju mailing list
>>> Juju@lists.ubuntu.com
>>> Modify settings or unsubscribe at:
>>> https://lists.ubuntu.com/mailman/listinfo/juju
>>>
>>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: open-port: command not found

2017-10-22 Thread Andrew Wilkins
On Mon, Oct 23, 2017 at 4:20 AM Akshat Jiwan Sharma 
wrote:

> HI,
>
> I'm trying to manually expose a port on a juju machine. According to this
> answer
> <https://askubuntu.com/questions/808176/how-to-manually-open-a-port-in-juju>
> I should be able to do something like this:-
>
>  juju run  "open-port 443" --all
>
> However when I type this in my shell it throws an error
>
> open-port: command not found
>

The difference between the command you're running and the one on AskUbuntu
is that you're not passing --unit. When you pass --unit, it runs the
command in the context of a unit on the machine. You must be running in the
context of a unit to use "hook tools", such as open-port.

> I can verify that the application on this particular controller is already
> exposed and it thus satisfies the requirement for running this command.
>
> >"The port range will only be open while the application is exposed."
>
> Can you help me understand what I'm doing wrong?
>

Ports are managed on a per-unit basis, so you need to execute the "run"
command against a unit or application, using --unit or --application
respectively.

Once you've run open-port, you'll need to run "juju expose <application>"
for the ports to actually be opened up.

Thanks,
> Akshat
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: vsphere support

2017-10-05 Thread Andrew Wilkins
On Wed, Oct 4, 2017 at 12:09 PM Serge E. Hallyn  wrote:

> Hi,
>
> I've inherited a few vsphere machines where I was hoping to use juju
> to fire off openstack etc.  I've been trying to use the vsphere
> provider (which I was excited to see existed), but am seeing a few
> inconsistencies, so had a few questions.
>
> The first blunt question - is this going to be a maintained driver,
> or am I in 6 months likely to find the vsphere driver dropped?
>

We are actively working on making the vsphere provider a great
experience in the Juju 2.3 release. There's some more work to be done for
mirroring VMDKs from cloud-images, but otherwise I think everything's
working well now.

I don't have any reason to believe that we would drop the provider, or to
reduce focus on quality. People seem to want it :)

> The second is motivating the first - when I looked at
> https://jujucharms.com/docs/2.1/help-vmware it says hardware version
> 8 (ESX 5.0) should be sufficient.  But in fact juju has a ubuntu.ovf
> hardcoded in it that specifies 'vmx-10', which is ESX 5.5..  I currently
> have 5 ESX 5.1 machines and one 6.0.
>

To my knowledge, Juju has only been tested with ESXi 5.5 and 6.0. In terms
of hardware support, version 8 seems like it should be sufficient. I'm not
sure about the APIs we're using, but we don't use a lot, so it wouldn't
surprise me if ESXi 5.1 just works.

If you're able to tweak the OVF and build Juju from source, that would be a
great help. Otherwise I will check what we require in terms of APIs when I
return from vacation next week - and try it out and report back. If it
works, then we can look to update the OVF for Juju 2.3.

Even with a custom DC using only the 6.0 host, I have some (probably
> proxy) issues bootstrapping, which I'm sure are surmountable - but the
> answer to Q1 will decide whether it's worth pursuing that further, or
> whether I should spend my time elsewhere :)
>

Please file a bug if you're still having issues. Juju 2.3 beta 1 is
imminent, so please use that if you can.

Cheers,
Andrew

thanks,
> -serge
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: juju hangs during bootstrapping

2017-09-30 Thread Andrew Wilkins
On Fri, Sep 29, 2017 at 11:55 PM <dave.c...@dell.com> wrote:

>
>
>
>
> *From:* Andrew Wilkins [mailto:andrew.wilk...@canonical.com]
> *Sent:* Saturday, September 30, 2017 12:23 AM
> *To:* Chen2, Dave <dave_ch...@dell.com>; juju@lists.ubuntu.com
> *Subject:* Re: juju hangs during bootstrapping
>
>
>
> On Fri, Sep 29, 2017 at 10:43 AM <dave.c...@dell.com> wrote:
>
> Hi All,
>
>
>
> I am trying to bootstrap a MAAS cloud based on juju’s official guide (
> https://jujucharms.com/docs/2.2/clouds-maas), everything seems correct
> but after the operating system (Ubuntu 16.04 or Ubuntu 14.04) has been
> installed, juju hangs when attempting to connect to the MAAS node, here is
> what I can see from the terminal,
>
>
>
> $ juju bootstrap maas-cloud
>
> Creating Juju controller "maas-cloud" on maas-cloud
>
> Looking for packaged Juju agent version 2.2.4 for amd64
>
> Launching controller instance(s) on maas-cloud...
>
> - cka68p (arch=amd64 mem=32G cores=12)
>
> Fetching Juju GUI 2.9.2
>
> Waiting for address
>
> Attempting to connect to 10.20.3.254:22 (JUJU hangs here!)
>
>
>
> And it’s pending here forever, so I tried it again with the debug mode,
>
> $ juju bootstrap --show-log --debug --bootstrap-series=trusty maas-cloud
> maas-cloud-controller
>
>
>
> I saw some detail information like below,
>
> Attempting to connect to 10.20.3.254:22
>
> 19:33:11 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: ssh: connect to host 10.20.3.254 port 22:
> Connection refused
>
> 19:33:16 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: ssh: connect to host 10.20.3.254 port 22:
> Connection refused
>
> 19:33:21 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: ssh: connect to host 10.20.3.254 port 22:
> Connection refused
>
> 19:33:56 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: /var/lib/juju/nonce.txt does not exist
>
> 19:34:32 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: /var/lib/juju/nonce.txt does not exist
>
> 19:35:08 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: /var/lib/juju/nonce.txt does not exist
>
> 19:35:43 INFO  juju.cloudconfig userdatacfg_unix.go:410 Fetching agent:
> curl -sSfw 'tools from %{url_effective} downloaded: HTTP %{http_code}; time
> %{time_total}s; size %{size_download} bytes; speed %{speed_download}
> bytes/s ' --retry 10 -o $bin/tools.tar.gz <[
> https://streams.canonical.com/juju/tools/agent/2.2.4/juju-2.2.4-ubuntu-amd64.tgz]>
>
>
>
> Is this the last thing logged? Try running that curl command on the
> machine manually. Perhaps there's an issue getting out to the internet.
>
> *[Dave] Yes, this is the last line I saw, our network topology is MAAS
> server can access internet (dual NIC with one NIC can access outer
> network), but each node that is deployed by MAAS/JUJU only get the IP from
> internal DHCP service, do you mean each deployed node also need access
> outer internet?*
>

The bootstrap machine requires access to the Juju agent repository, which
is on "streams.canonical.com". It's possible to point Juju at a private
mirror to avoid the need to go to the internet.
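
For example (a sketch; the mirror URL is hypothetical and must serve valid
simplestreams agent metadata):

    juju bootstrap maas-cloud --config agent-metadata-url=http://10.20.3.1/tools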

*And… Do you have any idea about the ssh connection refused error?*
>

That's just log spam, we should clean that up. Juju requests a machine from
the MAAS server, and then immediately starts attempting to connect via SSH.
While the machine is still starting, we expect to see connection refused
errors.


>
> I have no idea what’s going wrong, since I can telnet to the node and ssh
> to that node is also possible. I just need to type “yes” and then I can log
> in to the node,
>
> $ ssh ubuntu@10.20.3.254
>
> The authenticity of host ' 10.20.3.254 (10.20.3.254)' can't be established.
>
> ECDSA key fingerprint is
> SHA256:4FVm21s4dx7gc0/yDgz0+QAMGK4qWODoIqeoWtZg9RI.
>
> Are you sure you want to continue connecting (yes/no)?
>
>
>
> From the console of that node, I can find the controller’s public key has
> been injected to the node,
>
> -BEGIN SSH HOST KEY KEYS---
>
> …
>
> -END SSH KEY FINGERPRINTS
>
> …
>
> Cloud-init v. 0.7.9 finished at … Datasource DataSourceMAAS 
> [http://...:5240/MAAS/metadata/].
> Up 153.77 seconds.   (cloud-init hangs here!)
>
>
>
>
>
> I googled it and found someone said it is because “authorized-keys-path”
> is commented out in the “environments.yaml” [1], but the juju version I am
> using is “2.2.4-xenial-amd64”, the MAAS ver

Re: juju hangs during bootstrapping

2017-09-30 Thread Andrew Wilkins
On Sat, Sep 30, 2017 at 5:55 AM  wrote:

> Hi Narinder,
>
>
>
> Here is the log from the deployed node. Actually, I can deploy the
> operating system successfully with MAAS or “juju bootstrap” but it fails at
> some final steps. Our external network broke for some reason yesterday and
> won’t be recovered in the short term, but I guess the network break
> happened after I saw the below error message during bootstrapping.
>
> “Waiting for address
>
> Attempting to connect to 10.20.3.254:22”
>
>
>
> I can see apt-get update works from the log at the beginning, and the
> network broke a couple of hours after I saw those error messages.
>
>
>
> Does the error message below have any connection with the connection
> refused error message? I am not quite sure.
>

No, they're not connected, but the failing curl command is what is
preventing bootstrap from proceeding. That's the server trying to download
the Juju agent.


>
>
> “0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
>
> Fetching Juju agent version 2.2.4 for amd64
>
> Attempt 1 to download tools from
> https://streams.canonical.com/juju/tools/agent/2.2.4/juju-2.2.4-ubuntu-amd64.tgz.
> ..
>
> curl: (6) Could not resolve host: streams.canonical.com
>
> “
>
>
>
> Anyway, I will try it again when our network is back to normal. It would be
> great if you can see any other issues besides the network. Thanks again for
> your help!
>
>
>
> Best Regards,
>
> Dave Chen
>
>
>
> *From:* Narinder Gupta [mailto:narinder.gu...@canonical.com]
> *Sent:* Friday, September 29, 2017 10:52 PM
> *To:* Chen2, Dave 
> *Cc:* juju 
> *Subject:* Re: juju hangs during bootstrapping
>
>
>
> Hi Dave,
>
> May I know which division of Dell you are working in? I have set up
> OpenStack at Dell multiple times with MAAS and have not seen this
> issue so far.
>
>
>
> So please send me the log /var/log/cloud-init-output.log, which will let us
> know what is wrong. Also try sudo apt-get update on the bootstrap node to
> confirm you have external access.
>
>
>
> In MAAS you can always add the ssh keys to land into the installed nodes
> though.
>
>
>
>
> Thanks and Regards,
>
> Narinder Gupta (PMP)     narinder.gu...@canonical.com
> Canonical, Ltd.          narindergupta [irc.freenode.net]
> +1.281.736.5150          narindergupta2007 [skype]
>
>
>
> Ubuntu- Linux for human beings | www.ubuntu.com | www.canonical.com
>
>
>
> On Fri, Sep 29, 2017 at 9:42 AM,  wrote:
>
> Hi All,
>
>
>
> I am trying to bootstrap a MAAS cloud based on juju’s official guide (
> https://jujucharms.com/docs/2.2/clouds-maas), everything seems correct
> but after the operating system (Ubuntu 16.04 or Ubuntu 14.04) has been
> installed, juju hangs when attempting to connect to the MAAS node, here is
> what I can see from the terminal,
>
>
>
> $ juju bootstrap maas-cloud
>
> Creating Juju controller "maas-cloud" on maas-cloud
>
> Looking for packaged Juju agent version 2.2.4 for amd64
>
> Launching controller instance(s) on maas-cloud...
>
> - cka68p (arch=amd64 mem=32G cores=12)
>
> Fetching Juju GUI 2.9.2
>
> Waiting for address
>
> Attempting to connect to 10.20.3.254:22 (JUJU hangs here!)
>
>
>
> And it’s pending here forever, so I tried it again with the debug mode,
>
> $ juju bootstrap --show-log --debug --bootstrap-series=trusty maas-cloud
> maas-cloud-controller
>
>
>
> I saw some detail information like below,
>
> Attempting to connect to 10.20.3.254:22
>
> 19:33:11 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: ssh: connect to host 10.20.3.254 port 22:
> Connection refused
>
> 19:33:16 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: ssh: connect to host 10.20.3.254 port 22:
> Connection refused
>
> 19:33:21 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: ssh: connect to host 10.20.3.254 port 22:
> Connection refused
>
> 19:33:56 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: /var/lib/juju/nonce.txt does not exist
>
> 19:34:32 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: /var/lib/juju/nonce.txt does not exist
>
> 19:35:08 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: /var/lib/juju/nonce.txt does not exist
>
> 19:35:43 INFO  juju.cloudconfig userdatacfg_unix.go:410 Fetching agent:
> curl -sSfw 'tools from %{url_effective} downloaded: HTTP %{http_code}; time
> %{time_total}s; size %{size_download} bytes; speed %{speed_download}
> bytes/s ' --retry 10 -o $bin/tools.tar.gz <[
> https://streams.canonical.com/juju/tools/agent/2.2.4/juju-2.2.4-ubuntu-amd64.tgz]>
>
>
>
> I have no idea what’s going wrong since I can telnet to the node and ssh
> to that node is also possible, I just need type “yes” then I 

Re: juju hangs during bootstrapping

2017-09-29 Thread Andrew Wilkins
On Fri, Sep 29, 2017 at 10:43 AM  wrote:

> Hi All,
>
>
>
> I am trying to bootstrap a MAAS cloud based on juju’s official guide (
> https://jujucharms.com/docs/2.2/clouds-maas), everything seems correct
> but after the operating system (Ubuntu 16.04 or Ubuntu 14.04) has been
> installed, juju hangs when attempting to connect to the MAAS node, here is
> what I can see from the terminal,
>
>
>
> $ juju bootstrap maas-cloud
>
> Creating Juju controller "maas-cloud" on maas-cloud
>
> Looking for packaged Juju agent version 2.2.4 for amd64
>
> Launching controller instance(s) on maas-cloud...
>
> - cka68p (arch=amd64 mem=32G cores=12)
>
> Fetching Juju GUI 2.9.2
>
> Waiting for address
>
> Attempting to connect to 10.20.3.254:22 (JUJU hangs here!)
>
>
>
> And it’s pending here forever, so I tried it again with the debug mode,
>
> $ juju bootstrap --show-log --debug --bootstrap-series=trusty maas-cloud
> maas-cloud-controller
>
>
>
> I saw some detail information like below,
>
> Attempting to connect to 10.20.3.254:22
>
> 19:33:11 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: ssh: connect to host 10.20.3.254 port 22:
> Connection refused
>
> 19:33:16 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: ssh: connect to host 10.20.3.254 port 22:
> Connection refused
>
> 19:33:21 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: ssh: connect to host 10.20.3.254 port 22:
> Connection refused
>
> 19:33:56 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: /var/lib/juju/nonce.txt does not exist
>
> 19:34:32 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: /var/lib/juju/nonce.txt does not exist
>
> 19:35:08 DEBUG juju.provider.common bootstrap.go:497 connection attempt
> for 10.20.3.254 failed: /var/lib/juju/nonce.txt does not exist
>
> 19:35:43 INFO  juju.cloudconfig userdatacfg_unix.go:410 Fetching agent:
> curl -sSfw 'tools from %{url_effective} downloaded: HTTP %{http_code}; time
> %{time_total}s; size %{size_download} bytes; speed %{speed_download}
> bytes/s ' --retry 10 -o $bin/tools.tar.gz <[
> https://streams.canonical.com/juju/tools/agent/2.2.4/juju-2.2.4-ubuntu-amd64.tgz]>
>

Is this the last thing logged? Try running that curl command on the machine
manually. Perhaps there's an issue getting out to the internet.
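
Something along these lines, run on the machine itself (the URL is taken
from the log above):

    curl -sSf --retry 10 -o /tmp/tools.tar.gz \
        https://streams.canonical.com/juju/tools/agent/2.2.4/juju-2.2.4-ubuntu-amd64.tgz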


>
> I have no idea what’s going wrong, since I can telnet to the node and ssh
> to that node is also possible. I just need to type “yes” and then I can log
> in to the node,
>
> $ ssh ubuntu@10.20.3.254
>
> The authenticity of host ' 10.20.3.254 (10.20.3.254)' can't be established.
>
> ECDSA key fingerprint is
> SHA256:4FVm21s4dx7gc0/yDgz0+QAMGK4qWODoIqeoWtZg9RI.
>
> Are you sure you want to continue connecting (yes/no)?
>
>
>
> From the console of that node, I can find the controller’s public key has
> been injected to the node,
>
> -BEGIN SSH HOST KEY KEYS---
>
> …
>
> -END SSH KEY FINGERPRINTS
>
> …
>
> Cloud-init v. 0.7.9 finished at … Datasource DataSourceMAAS 
> [http://...:5240/MAAS/metadata/].
> Up 153.77 seconds.   (cloud-init hangs here!)
>
>
>
>
>
> I googled it and found someone said it is because “authorized-keys-path”
> is commented out in the “environments.yaml” [1], but the juju version I am
> using is “2.2.4-xenial-amd64”, the MAAS version is 2.2.2,
>
> Initially, I installed juju 1.25 and configured environments.yaml, but now
> I have uninstalled juju 1.25, removed all those files in $HOME/.juju/ and
> started over again with juju 2.2.4.
>
> I really cannot figure out why it always hangs at this step. Is there any
> cache persisted anywhere that masked the “authorized-keys-path” even after
> the uninstallation of juju 1.25? Or is there any step I missed with juju
> 2.2.4?
>
>
>
> Where is the user-data of cloud-init persisted on the filesystem? Any more
> detailed logs I can refer to?
>
>
>
>
>
> I feel frustrated after trying for several days without any progress.
> Please help me out; many thanks for any input!
>
>
>
>
>
> [1]
> https://serverfault.com/questions/588967/juju-bootstrap-fails-connection-refused-port-22
>
>
>
>
>
> Best Regards,
>
> Dave Chen
>
>
>
> Best Regards,
>
> Dave Chen
>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: developer commands

2017-08-28 Thread Andrew Wilkins
On Mon, Aug 28, 2017 at 5:17 PM Tim Penhey  wrote:

> FYI, the developer commands were originally designed like the controller
> commands.
>
> You don't say "juju destroy-model -m foo".
>

I think that holds for "dump-model", but I'm not so sure about "dump-db".
It feels natural enough to me to say "juju dump-db -m foo", or "juju
dump-db".

I'm wondering about the future of "dump-model". Since you've been looking
at using the model description to feed "juju status", I wonder if it
wouldn't make sense to update the "juju status" YAML format to be exactly
the model description. Then "dump-model" becomes redundant. We'd have to
make it an option, though, for backwards compatibility.

Cheers,
Andrew


> Tim
>
> On 28/08/17 19:48, Anastasia Macmood wrote:
> > Hi
> >
> > Just a quick note for developers that use developer commands.
> >
> > 'juju dump-model' and 'juju dump-db' are changing on develop tip [1],
> > from 2.3.x.
> > These commands are now fully-fledged model commands as they were
> > originally designed.
> > This means that they will now accept model name as an option instead of
> > as a positional argument.
> >
> > i.e.
> > 'juju dump-model -m modelName' NOT 'juju dump-model modelName'
> > 'juju dump-db -m modelName' NOT 'juju dump-db modelName'
> >
> > Happy juju-ing!
> >
> > Sincerely Yours,
> > Anastasia
> >
> > [1]
> > https://github.com/juju/juju/pull/7797
> >
> >
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Kubernetes with Juju

2017-08-15 Thread Andrew Wilkins
On Wed, Aug 16, 2017 at 3:50 AM Micheal B  wrote:

> How can I use local images when doing juju bootstrap vsphere/myregions
> --bootstrap-constraints "cores=2 mem=4G root-disk=32G", rather than
> downloading the OVA? Same thing for the images used in bundles ..
>

For now, I think the only thing you can do is to create a local mirror of
the relevant parts of http://cloud-images.ubuntu.com/. You can use the
"simplestreams" utility to make this easier, e.g.:

$ sudo apt install simplestreams
$ sudo apt install ubuntu-cloudimage-keyring
$ sstream-mirror \
    --keyring=/usr/share/keyrings/ubuntu-cloudimage-keyring.gpg \
    --max=1 --progress \
    http://cloud-images.ubuntu.com/releases path/to/mirror \
    'arch=amd64' 'release~xenial' 'ftype=ova'

Once you've downloaded the image(s) and metadata, you'll need to serve it
over HTTP, and point Juju at it with the "image-metadata-url" config.
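
For instance (a sketch; the mirror URL is hypothetical):

    juju bootstrap vsphere/myregion \
        --config image-metadata-url=http://mirror.example.com/images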

We really should be caching the images, making this all seamless. I've just
filed a bug, to make things better:
https://bugs.launchpad.net/juju/+bug/1711019.

Cheers,
Andrew


>
>
> Trying to cut down on the downloading times ..
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: HELP! Can I run Juju on localhost?

2017-07-30 Thread Andrew Wilkins
On Mon, Jul 31, 2017 at 11:32 AM 葛光乐  wrote:

> Hi, guys,
>

Hi leoge,

> I had downloaded Juju, Juju-gui, GuiProxy, and read the docs. But I can run
> them on localhost looks like jujucharms.com.
>

Sorry, I don't quite understand. Do you mean you *can't* get it to run?

> I want to figure out how it works, then debug it and study the code.
>
> What can I do to achieve this?
>

Which operating system are you using? If you're on Ubuntu, then you can use
the LXD provider to run things locally. See the instructions here:
https://jujucharms.com/docs/stable/tut-lxd.
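
For example, once LXD is set up (a sketch; "lxd-test" is an arbitrary
controller name):

    juju bootstrap localhost lxd-test
    juju gui    # prints the address of the built-in Juju GUI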

Cheers,
Andrew


> Thank you very much!
>
> Best,
> leoge
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Weekly Development Summary

2017-07-20 Thread Andrew Wilkins
On Fri, Jul 21, 2017 at 10:01 AM Anastasia Macmood <
anastasia.macm...@canonical.com> wrote:

> Hi
>
> A quick update on what keeps us, Juju team, busy...
>
> This week the team has been busy with an important task of improving
> developer experience in addition to improving the product.
>
> Of course, we have continued highly desired work on persistent storage
> with this week's focus on storage import [1] as well non-destructive
> storage removal [2] aspects. This effort also led to identifying
> improvements in model destruction that are now under way.
>
> We have made considerable progress in improving actions' footprint with
> work that prunes action results periodically [3].
>
> As usual, with a week that follows a new release of Juju, we have
> provided support to our existing users upgrading to a newer release.
>
> In addition, to excite and expedite developer experience, the team has
> put in place an improved merge job! Now developers can with greater ease
> track running tests when merging code.
>
> This week, the team has also worked on increasing functional coverage
> for persistent storage, relations as well as model migration.
>
> Last but not least, in a "call to arms", we are working on enabling
> users to specify the primary network on VSphere, overriding the default VM
> network specified in the OVF files shipped with Ubuntu images [4]. The
> nature of VSphere deployments, and the variety of networking
> combinations that are useful in production environments, means that
> we need a hand from the Juju community to verify our current approach. If
> you are interested in the ability to specify the network in your VSphere,
> try the patch [5] linked in the bug and reach out to us with your feedback!
>

I've just managed to track down a vCenter suitable for testing, but it only
has standard switches, and they're in use so can't be changed. I tested
with a distributed vswitch/portgroup with a disabled NIC (just to see that
the code would work). It would be great if someone could test with a
working distributed vswitch/portgroup. I don't think we need to block
releasing the fix on that, though.

Cheers,
Andrew

Quick links:
>   Work pending: https://github.com/juju/juju/pulls
>   Recent commits: https://github.com/juju/juju/commits/develop
>
> Sincerely Yours,
>
> Anastasia
>
> [1] https://github.com/juju/juju/pull/7653
>
> [2] https://github.com/juju/juju/pull/7648 and
> https://github.com/juju/juju/pull/7649
>
> [3] https://github.com/juju/juju/pull/7645
>
> [4] https://bugs.launchpad.net/juju/+bug/1619812
>
> [5] https://github.com/juju/juju/pull/7660
>
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Coming in 2.3: storage improvements

2017-07-13 Thread Andrew Wilkins
Hi folks,

I've just published https://awilkins.id.au/post/juju-2.3-storage/, which
highlights some of the new bits added around storage that's coming to Juju
2.3. I particularly wanted to highlight that a new LXD storage provider has
just landed on develop today. It should be available in the edge snap soon.

The LXD storage provider will enable you to attach LXD storage volumes to
your containers, and use that for a charm's storage requirements. e.g.

$ juju deploy postgresql --storage pgdata=1G,lxd-zfs

This will create a LXD storage pool backed by a ZFS pool, create a 1GiB ZFS
volume and attach that to the container.
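
Once deployed, you should be able to inspect the result (a sketch; storage
instance names will vary):

    juju storage                # list storage instances and their status
    juju show-storage pgdata/0  # details for one storage instance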

I'd appreciate feedback on the new provider, and the attach/detach changes
described in the blog post, preferably before 2.3 comes around. In
particular, UX warts or functionality that you're missing or anything you
find broken-by-design -- stuff that can't easily be fixed after we release.

Thanks!

Cheers,
Andrew
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Organising apiserver facades

2017-07-06 Thread Andrew Wilkins
On Thu, Jul 6, 2017 at 7:09 PM John Meinel <j...@arbash-meinel.com> wrote:

> I'd really like to see us split apart the facades-by-purpose. So we'd
> collect the facades for Agents separately from facades for Users (and
> possibly also facades for Controller).
> I'm not sure if moving things just into 'facades' just moves the problem
> around and leaves us with just a *different* directory that is a bit
> cluttered.  But I'm +1 on things that would help organize the layout.
>

Cool. I was considering controller vs. agent already, separating client off
sounds good to me too. I'll send a PR soon.


> John
> =:->
>
> On Thu, Jul 6, 2017 at 1:55 PM, Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
>> The juju/apiserver package currently has a whole lot of facade packages
>> within it, as well as some other packages related to authentication,
>> logging, and other bits and bobs. I find it difficult to navigate and tell
>> what's what a lot of the time.
>>
>> I'd like to move the apiserver facade packages into a common "facades"
>> sub-directory:
>>   apiserver/facades/application
>>   apiserver/facades/client
>>   apiserver/facades/controller
>>   etc.
>>
>> Any objections? Or alternative suggestions?
>>
>> Cheers,
>> Andrew
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
>>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Organising apiserver facades

2017-07-06 Thread Andrew Wilkins
The juju/apiserver package currently has a whole lot of facade packages
within it, as well as some other packages related to authentication,
logging, and other bits and bobs. I find it difficult to navigate and tell
what's what a lot of the time.

I'd like to move the apiserver facade packages into a common "facades"
sub-directory:
  apiserver/facades/application
  apiserver/facades/client
  apiserver/facades/controller
  etc.

Any objections? Or alternative suggestions?

Cheers,
Andrew
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Is it possible to return management to Juju after manual provision of a machine?

2017-06-29 Thread Andrew Wilkins
On Fri, Jun 30, 2017 at 7:39 AM N. S.  wrote:

> Hi,
>
> Excuse me if this seems a strange scenario.
>
> My scenario is as follows:
>
> I have a charm that has lots of problems in its install script, needs
> massive change (NOT SURE how to fix it)
>
> So,
>
> I added an empty machine by
> Juju add-machine --series Xenial
>
> And then I logged into it by
> Juju SSH 9
> And
> I provisioned it (installed some software on it).
>
>
> Now,
> Is it possible to get it back to Juju among the units and the applications?
> How?
>

You can "juju deploy  --to 9" to deploy the application unit to the
"empty" machine. It'll be up to your charm to know how to deal with the
existing software.

> Other hooks are probably fine; how do I integrate them on the machine?
>

Hooks are expected to be idempotent, because they may run more than once
(Juju guarantees at-least-once execution). So the install hook should be
written in a way that it won't fail if the software is already installed.
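
A minimal sketch of such a hook (the package name "mysoftware" is
hypothetical):

    #!/bin/bash
    # hooks/install - safe to run repeatedly
    set -e
    if ! dpkg -s mysoftware >/dev/null 2>&1; then
        apt-get update
        apt-get install -y mysoftware
    fi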


> Thanks,
> BR
> NAZ
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Running KVM in addition to LXC on local LXD CLOUD

2017-06-25 Thread Andrew Wilkins
On Sat, Jun 24, 2017 at 9:14 PM N. S.  wrote:

> Hi,
>
>
> I am running 10 machines on local LXD cloud, and it's fine.
>
> My host is Ubuntu 16.04, kernel 4.4.0-81.
>
> However, I have the following challenge:
> One of the machines (M0) stipulates a kernel 4.7+
>
>
> As is known, unlike KVM, LXC uses the same kernel as the host system,
> in this case (4.4.0-81), thus breaching the requirement of M0 (4.7+).
>
>
> I have read that starting with Juju 2.0, KVM is no longer supported.
>

Juju still supports kvm, but the old "local" provider which supported
lxc/kvm is gone.

You could run a KVM guest from within a LXD machine with the right
apparmor settings. Probably the most straightforward thing to do, though,
would be to create a KVM VM yourself, install Ubuntu on it, and then
manually provision it using "juju add-machine ssh:<user@host>".
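
For example (a sketch; the address is hypothetical):

    juju add-machine ssh:ubuntu@10.0.0.42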


> How could I meet the requirement of M0?
>
> Thanks for your help
> BR,
> Nazih
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Consuming MongoDB from a Snap

2017-06-22 Thread Andrew Wilkins
On Fri, Jun 23, 2017 at 7:47 AM Menno Smits 
wrote:

> We've had some discussion this week about whether Juju could use MongoDB
> from snap instead of a deb. This would make it easier for Juju to stay up
> to date with the latest MongoDB releases, avoiding the involved process of
> getting each update into Ubuntu's update repository, as well as giving us
> all the other advantages of snaps.
>
> Two concerns were raised in this week's tech board meeting.
>
> *1. Does snapd work on all architectures that Juju supports?*
>
> The answer appears to be "yes with some caveats". For xenial onwards there
> are snapd packages for all the architectures the Juju team cares about.
>

Ah, I thought the question was rather whether or not the mongo snap existed
for all of those architectures. I don't think it does. IIANM, the snap
comes from
https://github.com/niemeyer/snaps/blob/master/mongodb/mongo32/snapcraft.yaml,
which (if you look at the "mongodb" part) appears to only exist for x86_64.
So we would need to do some work on that first.


>https://packages.ubuntu.com/xenial/snapd
>
> For trusty only amd64, armhf and i386 appear to be supported.
>
>https://packages.ubuntu.com/trusty-updates/snapd
>
> This is probably ok. I think it's probably fine to start saying that new
> Juju controllers, on some architectures at least, need to be based on
> xenial or later.
>

Since the controller machine isn't designed for workloads, it seems fine to
me to restrict them to latest LTS.

One issue would be upgrades: we would either have to continue supporting
both snaps and debs for mongodb, or we would have to disallow upgrading
from a system that doesn't support snaps. That would be OK as long as there
are no workloads on the controller, as we could use migration.

> *2. Does snapd work inside LXD containers?*
>
> Although it's rarely done, it's possible to set up a Juju HA cluster where
> some nodes are running inside LXD containers so this is something we'd need
> to consider.
>

It would suck if we couldn't test using the lxd provider, though.

> From xenial onwards, snapd does indeed work inside LXD containers. I
> followed Stephane's instructions using a xenial container and successfully
> installed a number of non-trivial, working snaps including Gustavo's
> mongo32 snap.
>
>   https://stgraber.org/2016/12/07/running-snaps-in-lxd-containers/
>
>
> There is of course more testing to be done but it seems like having Juju's
> MongoDB in a snap is certainly doable.
>
> - Menno
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: juju deploy with a series

2017-06-15 Thread Andrew Wilkins
On Fri, Jun 16, 2017 at 1:36 AM John Meinel  wrote:

> "juju show-machine 10" is likely to tell you why we are failing to
> provision the machine.
>
> My guess is that we actually need the alias to be "juju/centos7/amd64"
> for Juju to recognize that it is the container image we want to be starting.
>

Also, the centos7 image from linuxcontainers.org is not suitable for Juju
to use. See https://bugs.launchpad.net/juju/+bug/1495978/comments/21.

(We really need to publish the image, so people don't have to do this.)
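
For reference, creating the alias John suggests would look something like
this (fingerprint taken from Daniel's listing below); note that the stock
linuxcontainers.org image still needs the changes described in the bug
comment above:

$ lxc image alias create juju/centos7/amd64 41c7bb494bbd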


> John
> =:->
>
>
> On Thu, Jun 15, 2017 at 8:37 PM, Daniel Bidwell 
> wrote:
>
>> I am trying to deploy a charm that I am writing for both ubuntu and
>> centos.  "lxc image alias list" produces:
>>
>> lxc image alias list
>> +---+--+---+
>> |   ALIAS   | FINGERPRINT  |DESCRIPTION|
>> +---+--+---+
>> | centos7   | 41c7bb494bbd | centos7   |
>> +---+--+---+
>> | juju/xenial/amd64 | 1e59027d1d58 | juju/xenial/amd64 |
>> +---+--+---+
>> | ubuntu-xenial | 1e59027d1d58 | ubuntu-xenial |
>> +---+--+---+
>>
>> "juju deploy ~/charms/xenial/aubase1 --series centos7 aubasecentos"
>> looks like it is starting, but a "juju status" produces:
>>
>> juju status
>> Model    Controller  Cloud/Region         Version
>> default  lxd-test    localhost/localhost  2.1.2
>>
>> App           Version  Status   Scale  Charm    Store  Rev  OS      Notes
>> aubase1                active   1      aubase1  local  5    ubuntu
>> aubasecentos           waiting  0/1    aubase1  local  4    centos
>>
>> Unit            Workload  Agent       Machine  Public address  Ports  Message
>> aubase1/4*      active    idle        9        10.130.54.192
>> aubasecentos/4  waiting   allocating  10                              waiting for machine
>>
>> Machine  State    DNS            Inst id        Series   AZ
>> 9        started  10.130.54.192  juju-a0c4c9-9  xenial
>> 10       down                    pending        centos7
>>
>> What do I need to do to deploy a centos lxd container with my charm?
>> --
>> Daniel Bidwell 
>>
>>
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju
>>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: OS X VMS on JAAS

2017-06-03 Thread Andrew Wilkins
On Sat, Jun 3, 2017 at 2:56 PM John Meinel  wrote:

> You can add a manually provisioned machine to any model, as long as there
> is connectivity from the machine to the controller. Now, I would have
> thought initial setup was initiated by the Controller, but it's possible
> that initial setup is actually initiated from the client.
>

Given the command:

$ juju add-machine ssh:

it goes something like this:

1. client connects to  via SSH, and performs basic hardware/OS
discovery
2. client asks controller to add a machine entry, and controller returns a
script to be executed on the target machine, using the discovered details,
in order to fetch and install jujud
3. client executes that script over the SSH connection
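
For example, with a hypothetical reachable address:

$ juju add-machine ssh:ubuntu@203.0.113.5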

> Once initial setup is complete, then it is definitely true that all
> connections are initiated from the agent running on the controlled machine
> to the controller. The controller no longer tries to socket.connect to the
> machine. (In 1.X 'actions' were initiated via ssh from the controller, but
> in 2.X the agents listen to see if there are any actions to run like they
> do for all other changes.)
>
> Now, given that he added a model into "us-east-1", if he ever did just a
> plain "juju add-machine" or "juju deploy" (without --to) it would
> definitely create a new instance in AWS and start configuring it, rather
> than from your VM.
>
> Which is why using something like the "lxd provider" would be a more
> natural use case, but according to James the sticking point is having to
> set up a controller in the first place. So "--to lxd:0" is easier for them
> to think about than setting up a provider and letting it decide how to
> allocate machines.
>
> Note, it probably wouldn't be possible to use JAAS to drive an LXD
> provider, because *that* would have JAAS be trying to make a direct
> connection to your LXD agent in order to provision the next machine.
> However "--to lxd:0" has the local juju agent (running for 'machine 0')
> talking to the local LXD agent in order to create a container.
>
> John
> =:->
>
>
> On Fri, Jun 2, 2017 at 6:28 PM, Jay Wren  wrote:
>
>> I do not understand how this works. Could someone with knowledge of how
>> jujud on a  controller communicates with jujud agents on units describe how
>> that is done?
>>
>> My limited understanding must be wrong given that James has this working.
>>
>> This is what I thought:
>>
>> On most cloud providers: add-machine instructs the cloud provider to
>> start a new instance, and the cloud-config passed to cloud-init includes
>> how to download the jujud agent, run it, and configure it with public-key
>> trust of the juju controller.
>>
>> On manually added machine: same thing only instead of cloud-init and
>> cloud-config an ssh connection is used to perform the same commands.
>>
>> I had thought the juju controller was initiating the ssh-connection to
>> the address given in the add-machine command and that a non-internet
>> routable address would simply not work as the controller cannot open any
>> TCP connection to it. This is where my understanding stops.
>>
>> Please, anyone, describe how this works?
>> --
>> Jay
>>
>>
>> On Fri, Jun 2, 2017 at 9:42 AM, James Beedy  wrote:
>>
>>> I think the primary advantage is less clutter for the end user: the
>>> difference between the end user having to bootstrap and control things
>>> from inside the VM vs from their host. For some reason this small change
>>> made some of my users who were previously not really catching on far more
>>> apt to jump in. I personally like it because these little VMs go further
>>> when they don't have the controller on them as well. @jameinel totally,
>>> possibly I'll add the bridge bits in place of the lxd-proxy in that write
>>> up, or possibly in another.
>>>
>>> ~James
>>>
>>> On Jun 2, 2017, at 12:56 AM, John Meinel  wrote:
>>>
>>> Interesting. I wouldn't have thought to use a manually added machine to
>>> use JAAS to deploy applications to your local virtualbox. Is there a reason
>>> this is easier than just "juju bootstrap lxd" from inside the VM?
>>>
>>> I suppose our default lxd provider puts the new containers on a NAT
>>> bridge, though you can reconfigure 'lxdbr0' to bridge your 'eth0' as well.
>>>
>>> John
>>> =:->
>>>
>>>
>>> On Fri, Jun 2, 2017 at 8:33 AM, James Beedy 
>>> wrote:
>>>

 https://medium.com/@jamesbeedy/using-jaas-to-deploy-lxd-containers-to-virtualbox-vms-on-os-x-a06a8046756a

 --
 Juju-dev mailing list
 juju-...@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/juju-dev


>>>
>>> --
>>> Juju mailing list
>>> Juju@lists.ubuntu.com
>>> Modify settings or unsubscribe at:
>>> https://lists.ubuntu.com/mailman/listinfo/juju
>>>
>>>
>>
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at:
> 


Re: Call for testers: preview of persistent storage support

2017-05-25 Thread Andrew Wilkins
On Fri, May 26, 2017 at 11:26 AM Patrizio Bassi <patrizio.ba...@gmail.com>
wrote:

> Dear Andrew,
>  what about private clouds such as maas?
>

MAAS is a bit special, because the disks are physically attached to the
machines. If and when we support something like Ceph RBD natively inside
Juju, then that could be detached from one machine and attached to another.

The new storage commands should work with private OpenStack clouds.

Cheers,
Andrew


> Patrizio
>
> On Wed, 24 May 2017 at 03:38 Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
>> Hi folks,
>>
>> One of the things we're working on for the 2.3 release (not 2.2!) is
>> persistent storage. What this means is the ability to detach storage from a
>> unit, and reattach it to another unit keeping the storage contents intact.
>> We would like to get some feedback before this all gets set in stone.
>>
>> With the changes, removing an application unit will detach storage rather
>> than destroy it as Juju currently does. The storage will then be available
>> for attaching to another unit using "juju attach-storage  "
>> or to a new application unit using "juju deploy  --attach-storage
>> "; or for removal using "juju remove-storage ".
>>
>> For example, I can deploy postgresql on AWS with EBS storage. If I remove
>> the postgresql application, I can add another and attach the storage to it:
>>
>> $ juju deploy postgresql --storage pgdata=100G,ebs
>> Located charm "cs:postgresql-148".
>> Deploying charm "cs:postgresql-148".
>> (wait for postgresql/0 to become active)
>>
>> $ juju remove-application postgresql
>> removing application postgresql
>> - will detach storage pgdata/0
>> (wait for postgresql/0 and machine 0 to be removed)
>>
>> $ juju deploy postgresql postgresql2 --attach-storage pgdata/0 --to
>> zone=
>> Located charm "cs:postgresql-148".
>> Deploying charm "cs:postgresql-148".
>> (wait for postgresql2/0 to become active)
>>
>> If you like, you can confirm for yourself that the data is persisted by
>> logging into the first machine and running "sudo -u postgres psql", creating
>> some data, and then checking that it is still there from the second machine.
>>
>> (The --to zone=... is required due to a limitation that we will remove by
>> the time 2.3 is released. EBS volumes and EC2 instances must be created in
>> the same AZ, and that's not automatic yet. This is fixed by
>> https://github.com/juju/juju/pull/7378 which, at the time of writing
>> this email, has not yet been merged.)
>>
>> If you have any interest in these changes, please help us make them great
>> by testing out this early release:
>>
>> $ sudo snap install --channel=edge --classic juju-axw
>> $ /snap/bin/juju-axw.juju bootstrap ...
>>
>> The new/updated commands are:
>>  - juju attach-storage  
>>  - juju detach-storage 
>>  - juju remove-storage 
>>  - juju deploy  --attach-storage 
>>
>> (We'll also be adding --attach-storage to the "juju add-unit" command
>> soon.)
>>
>> Thank you!
>>
>> Cheers,
>> Andrew
>> --
>> Juju mailing list
>> j...@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju
>>
> --
>
> Patrizio Bassi
> www.patriziobassi.it
> http://piazzadelpopolo.patriziobassi.it
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Weekly Development Summary

2017-05-25 Thread Andrew Wilkins
Hi all,

It's been a busy week as we continue towards the 2.2-rc1 release. As Tim
said earlier in the week, the release will slip to next week as we continue
ironing out the kinks. One of the main things we've been focusing on this
week is improving scalability and resilience in large deployments, such as
JAAS.

One major issue [0] fixed is that if we overflow the transaction log
collection in Mongo, the controller will now automatically restart to get
all of the watchers back into the appropriate state. A new controller
config attribute, "max-txn-log-size", has been introduced, which allows you
to change the transaction log collection size from the default of 10MB.
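
For example (the value here is hypothetical), at bootstrap time:

$ juju bootstrap lxd --config max-txn-log-size=20M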

Another critical fix is to prevent resource-get from blocking indefinitely
when fetching the resource fails [1]. Juju will now retry fetching
resources for a set number of attempts, and then abort if it continues to
fail, giving the operator a chance to fix the charm resources or charm code
itself.

Work is underway to run one log forwarding worker per model, instead of one
per controller as we have now. This will enable us to optimise the way in
which logs are managed, to improve scalability and performance. Having
per-model log forwarders will also enable future improvements to support
forwarding logs to cloud-specific logging facilities.

A handful of other notable fixes:
 - better validation of custom simplestreams metadata [2]
 - simplified mongo oplog size calculation, to prevent frequent mongo
restarts [3]
 - fixed an issue with add-storage, where the specified pool was not used
in some cases [4]
 - several fixes [5] [6] for model migration related to resources

Last week, Ian mentioned that there would be a snap forthcoming containing
a preview of storage improvements scheduled for the 2.3 release. This was
sent out earlier this week, and we're waiting for feedback. If you have a
moment, please try it out and let us know what you think. Next week we'll
aim to release an update to the snap, with the next bits: adding
--attach-storage to add-unit, and automatically placing VMs in the same AZ
as the attached volumes (at least for AWS).

Thanks for reading!

Cheers,
Andrew

---

Quick links:
  Work Pending: https://github.com/juju/juju/pulls
  Recent commits: https://github.com/juju/juju/commits/develop

[0] https://bugs.launchpad.net/juju/+bug/1692792
[1] https://bugs.launchpad.net/juju/+bug/1627127
[2] https://bugs.launchpad.net/juju/+bug/1690456
[3] https://bugs.launchpad.net/juju/+bug/1677592
[4] https://bugs.launchpad.net/juju/+bug/1692729
[5] https://bugs.launchpad.net/juju/+bug/1692610
[6] https://bugs.launchpad.net/juju/+bug/1692646
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Call for testers: preview of persistent storage support

2017-05-23 Thread Andrew Wilkins
Hi folks,

One of the things we're working on for the 2.3 release (not 2.2!) is
persistent storage. What this means is the ability to detach storage from a
unit, and reattach it to another unit keeping the storage contents intact.
We would like to get some feedback before this all gets set in stone.

With the changes, removing an application unit will detach storage rather
than destroy it as Juju currently does. The storage will then be available
for attaching to another unit using "juju attach-storage  "
or to a new application unit using "juju deploy  --attach-storage
"; or for removal using "juju remove-storage ".

For example, I can deploy postgresql on AWS with EBS storage. If I remove
the postgresql application, I can add another and attach the storage to it:

$ juju deploy postgresql --storage pgdata=100G,ebs
Located charm "cs:postgresql-148".
Deploying charm "cs:postgresql-148".
(wait for postgresql/0 to become active)

$ juju remove-application postgresql
removing application postgresql
- will detach storage pgdata/0
(wait for postgresql/0 and machine 0 to be removed)

$ juju deploy postgresql postgresql2 --attach-storage pgdata/0 --to
zone=
Located charm "cs:postgresql-148".
Deploying charm "cs:postgresql-148".
(wait for postgresql2/0 to become active)

If you like, you can confirm for yourself that the data is persisted by
logging into the first machine and running "sudo -u postgres psql", creating
some data, and then checking that it is still there from the second machine.

(The --to zone=... is required due to a limitation that we will remove by
the time 2.3 is released. EBS volumes and EC2 instances must be created in
the same AZ, and that's not automatic yet. This is fixed by
https://github.com/juju/juju/pull/7378 which, at the time of writing this
email, has not yet been merged.)

If you have any interest in these changes, please help us make them great
by testing out this early release:

$ sudo snap install --channel=edge --classic juju-axw
$ /snap/bin/juju-axw.juju bootstrap ...

The new/updated commands are:
 - juju attach-storage  
 - juju detach-storage 
 - juju remove-storage 
 - juju deploy  --attach-storage 

(We'll also be adding --attach-storage to the "juju add-unit" command soon.)
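
Presumably that will look something like this, though the syntax isn't
final:

$ juju add-unit postgresql --attach-storage pgdata/0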

Thank you!

Cheers,
Andrew
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: How to build juju for centOS

2017-05-18 Thread Andrew Wilkins
On Thu, May 11, 2017 at 10:27 PM fengxia <fx...@lenovo.com> wrote:

> Andrew,
>
> I tried stock Juju on Ubuntu 16.04, but having the same error:
>
> ERROR cannot obtain provisioning script
> ERROR getting instance config: finding tools: no matching tools available
> (not found)
>
> Here are the steps:
>
> 1. juju bootstrap lxd lxd-test
>
> 2. juju add-machine ssh:username@ip --series centos7
>
> I have also tried setting default-series when bootstrapping, same error.
>
> I checked streams.canonical.com; there is a centos agent listed under
> /tools. I also manually tried setting version to 2.0.1, for example, and
> got the same error.
>
Hi Feng,

Sorry for the late response.

You may be hitting https://bugs.launchpad.net/juju/+bug/1495978. There are
several fixes for CentOS related to LXD that were released in Juju 2.1.
Please try updating to a newer version.

FWIW, I've just successfully started a centos7 series machine on AWS using
Juju 2.2-beta4, but earlier versions should work there as well.

Cheers,
Andrew

> Best,
>
> Feng
> On 05/10/2017 03:44 AM, Andrew Wilkins wrote:
>
> On Wed, May 10, 2017 at 3:08 PM fengxia <fx...@lenovo.com> wrote:
>
>> I have followed dev instruction and can build Juju binaries for Ubuntu.
>> The dev machine is also Ubuntu.
>>
>> $ go install -v github.com/juju/juju/…
>>
>> Using the same binaries will not, however, bootstrap with "--config
>> default-series=centos", nor "add-machine --series centos". Both failed at
>> "no tools found".
>>
>> How to build an agent for centos?
>>
> For a start, you should use "centos7", not "centos". "juju add-machine
> --series=centos" *should* give you an immediate error indicating that
> that's not a valid series, and ideally inform you of the closest match(es);
> I'll file a bug to get that fixed.
>
> Do you need to build from source? If you're using a released version of
> Juju, then the agents are available on streams.canonical.com.
>
> For dev builds, we don't have a nice, supported solution. The supported
> solution is to create agent tarballs and generate simplestreams metadata. I
> wrote a plugin a while ago that you can use to build and upload agent
> tarballs to the controller directly, but you shouldn't use it in production
> systems:
>
> $ go get github.com/axw/juju-tools
> $ juju tools build 2.2-beta4.1-centos7-amd64
> building: juju-2.2-beta4.1-centos7-amd64.tgz
> $ juju tools upload -m controller juju-2.2-beta4.1-centos7-amd64.tgz
> uploading "juju-2.2-beta4.1-centos7-amd64.tgz"
> $ juju add-machine --series=centos7
>
> Cheers,
> Andrew
>
>> --
>> Feng xia
>> Engineer
>> Lenovo USA
>>
>> Phone: 5088011794, fx...@lenovo.com
>> Lenovo.com
>> Twitter | Facebook | Instagram | Blogs | Forums
>>
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju
>>
>
> --
> Feng xia
> Engineer
> Lenovo USA
>
> Phone: 5088011794, fx...@lenovo.com
> Lenovo.com
> Twitter | Facebook | Instagram | Blogs | Forums
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: matchmaking a jupyter community, and a reminder to share your work

2017-05-16 Thread Andrew Wilkins
On Tue, May 16, 2017 at 9:46 PM Rick Harding 
wrote:

> Last week for the Juju Show [1] I played with Jupyter Notebook which is a
> great way to put together online instructional content for code. It was fun
> going and I was motivated by a charm from a new author [2] I had seen in
> the new published api [3].
>
> What was interesting was five minutes after the show Merlijn pointed out
> that he had a charm[4] for it that he'd never pushed to the store. Then
> after that Andrew from the Juju team mentioned he was playing with it and
> had some layers and code [5][6][7][8][9] sitting around for Jupyter.
>

(Thanks Rick)

Hi folks,

I'm not actively working on these layers/interfaces at the moment, but if
anyone wants to pick them up and move them to an org then that's fine by
me. I'd then contribute later if/when I have the time.


> Clearly, this is awesome that there's so much interest around a great
> piece of software. The bigger opportunity is that folks can now start
> collaborating and really taking advantage of the shared brain powers of
> everyone out there.
>
> So I get to play matchmaker. Guiseppe, Merlijn, Andrew, and everyone else
> out there interested in Jupyter I suggest you get together. I'm excited to
> see what the combined power can bring to a great Jupyter experience.
>
> If you've got a charm you've been sitting on and not yet pushed to the
> charm store I really suggest you do it now. You never know what community
> is waiting to pool around a chunk of work. They just need that central point
> to kick it all off.
>

On that note: yesterday I started work on an interface and layer for GitLab
Runner. It doesn't do anything except install gitlab-runner yet, but my
intention is to modify layer-gitlab with a relation that will extract the
registration token from the DB and send it across. Then the runner can
automatically register itself. If anyone else is working on this/similar,
please let me know.

Cheers,
Andrew


> Rick
>
> 1: https://www.youtube.com/watch?v=oJukQzROo-Q
> 2: https://jujucharms.com/u/attardi-h/jupyter-notebook/
> 3: https://api.jujucharms.com/charmstore/v5/changes/published?limit=100
> 4: https://jujucharms.com/u/tengu-team/jupyter-notebook/
> 5: https://github.com/axw/interface-jupyterhub-spawner
> 6: https://github.com/axw/interface-jupyterhub-authenticator
> 7: https://github.com/axw/layer-jupyterhub
> 8: https://github.com/axw/jupyterhub-usso-authenticator
> 9: https://github.com/axw/jupyterhub-lxd-spawner
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: How to build juju for centOS

2017-05-10 Thread Andrew Wilkins
On Wed, May 10, 2017 at 3:08 PM fengxia  wrote:

> I have followed dev instruction and can build Juju binaries for Ubuntu.
> The dev machine is also Ubuntu.
>
> $ go install -v github.com/juju/juju/…
>
> Using the same binaries will not, however, bootstrap with "--config
> default-series=centos", nor "add-machine --series centos". Both failed at
> "no tools found".
>
> How to build an agent for centos?
>
For a start, you should use "centos7", not "centos". "juju add-machine
--series=centos" *should* give you an immediate error indicating that
that's not a valid series, and ideally inform you of the closest match(es);
I'll file a bug to get that fixed.

Do you need to build from source? If you're using a released version of
Juju, then the agents are available on streams.canonical.com.

For dev builds, we don't have a nice, supported solution. The supported
solution is to create agent tarballs and generate simplestreams metadata. I
wrote a plugin a while ago that you can use to build and upload agent
tarballs to the controller directly, but you shouldn't use it in production
systems:

$ go get github.com/axw/juju-tools
$ juju tools build 2.2-beta4.1-centos7-amd64
building: juju-2.2-beta4.1-centos7-amd64.tgz
$ juju tools upload -m controller juju-2.2-beta4.1-centos7-amd64.tgz
uploading "juju-2.2-beta4.1-centos7-amd64.tgz"
$ juju add-machine --series=centos7

Cheers,
Andrew

> --
> Feng xia
> Engineer
> Lenovo USA
>
> Phone: 5088011794, fx...@lenovo.com
> Lenovo.com
> Twitter | Facebook | Instagram | Blogs | Forums
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Go 1.8

2017-04-27 Thread Andrew Wilkins
On Fri, Apr 28, 2017 at 5:55 AM Michael Hudson-Doyle <
michael.hud...@canonical.com> wrote:

> On 27 April 2017 at 17:08, Andrew Wilkins <andrew.wilk...@canonical.com>
> wrote:
>
>>
>> The snap is working well for me.
>>
>
> \o/
>
>
>> Heather has found that it doesn't expose "gofmt", which makes the
>> pre-push hook sad. You can add /snap/go/current/bin to your $PATH to get
>> around that for now.
>>
>
> I've added a gofmt alias to the snap(s) now so you can run "snap alias go
> gofmt" after you've refreshed to the new snap. I've also asked for this
> alias to be automatically set up, I'm not sure how long that will take to
> happen though.
>

Thanks Michael!


> Cheers,
> mwh
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Go 1.8

2017-04-26 Thread Andrew Wilkins
Folks,

We have moved to Go 1.8 as a requirement for building juju core. I'm just
about to land a change that brings in a more recent golang.org/x/crypto
version, which means we'll be using the standard library's "context", which
isn't available in Go 1.6 and earlier. Please update your toolchain.

The snap is working well for me. Heather has found that it doesn't expose
"gofmt", which makes the pre-push hook sad. You can
add /snap/go/current/bin to your $PATH to get around that for now.
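
For example, in your shell profile:

$ export PATH="$PATH:/snap/go/current/bin"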

Cheers,
Andrew
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Can't juju deploy p2.xlarge in aws/us-east-1

2017-03-20 Thread Andrew Wilkins
On Mon, Mar 20, 2017 at 10:17 PM Junien Fridrick <
junien.fridr...@canonical.com> wrote:

> And once/if you have no Classic instance running in a region, you should
> file a ticket to AWS Support asking for a default VPC creation. Once
> it's done, you don't need to supply the VPC ID anymore.
>
> Could juju maybe be instructed to use a VPC if and only if a single VPC
> exists in a region, without specifying its ID? For example, --config
> "force-vpc=true" or something.
>

Yes we could do that. I'd like to go a bit further; what I would like to
have happen is:

1. juju will use the VPC specified in config, if any
2. else, Juju will look for a VPC called (say) "juju-default-vpc"
3. else, Juju will create a VPC with the same name, if it can*, and use that
4. else, Juju will use the default VPC, if available
5. else, bootstrap/add-model will fail

The outcome being that Juju will then *always* use VPC, and the user
shouldn't need to do a thing. This would also allow the provider to be
cleaned up (assuming there's a migration path for existing deployments),
and new features to be implemented more easily (e.g. support for EFS).

Everyone on the team is currently fully booked, so I can't give you an
estimate of when that'll come to fruition yet. I would think we could do
everything except the final failure case pretty easily.

* "if it can", because there are pretty severe limits on the number of VPCs
you can create: 5 per region.

Cheers,
Andrew


> On Thu, Mar 16, 2017 at 08:04:22AM -0400, Tim Van Steenburgh wrote:
> > Ah, that was it. After passing --config "vpc-id=vpc-924fc7f6" to `juju
> > bootstrap`, I can now deploy p2 instance types. Thanks Andrew!
> >
> > On Thu, Mar 16, 2017 at 4:21 AM, Samuel Cozannet <
> > samuel.cozan...@canonical.com> wrote:
> >
> > > Aaah whaow. I have a default VPC myself, so that may explain the
> > > problem Tim is having. Early adopters problem!!
> > >
> > >
> > > --
> > > Samuel Cozannet
> > > Cloud, Big Data and IoT Strategy Team
> > > Business Development - Cloud and ISV Ecosystem
> > > Changing the Future of Cloud
> > > Ubuntu <http://ubuntu.com>  / Canonical UK LTD <http://canonical.com>
> /
> > > Juju <https://jujucharms.com>
> > > samuel.cozan...@canonical.com
> > > mob: +33 616 702 389
> > > skype: samnco
> > > Twitter: @SaMnCo_23
> > > [image: View Samuel Cozannet's profile on LinkedIn]
> > > <https://es.linkedin.com/in/scozannet>
> > >
> > > On Thu, Mar 16, 2017 at 9:17 AM, Andrew Wilkins <
> > > andrew.wilk...@canonical.com> wrote:
> > >
> > >> On Thu, Mar 16, 2017 at 3:57 PM Samuel Cozannet <
> > >> samuel.cozan...@canonical.com> wrote:
> > >>
> > >>> I am using the default settings, no change as far as I know to what
> > >>> Juju would do by default.
> > >>>
> > >>
> > >> What Juju will do depends on what is available in your EC2 account.
> Not
> > >> all EC2 accounts were born alike.
> > >>
> > >> If your account has a default VPC, that will be used by Juju. In that
> > >> case, you'll have p2 instance types available. I expect this to be the
> > >> case for most people - all accounts created since 2013-12-04 will have
> > >> a default VPC.
> > >>
> > >> If you've got an older account, then you may or may not have a default
> > >> VPC. If you do not, then Juju will fall back to EC2 Classic. In that
> > >> case, no p2 instance types.
> > >>
> > >> Cheers,
> > >> Andrew
> > >>
> > >>
> > >>> --
> > >>> Samuel Cozannet
> > >>> Cloud, Big Data and IoT Strategy Team
> > >>> Business Development - Cloud and ISV Ecosystem
> > >>> Changing the Future of Cloud
> > >>> Ubuntu <http://ubuntu.com>  / Canonical UK LTD <http://canonical.com>
> /
> > >>> Juju <https://jujucharms.com>
> > >>> samuel.cozan...@canonical.com
> > >>> mob: +33 616 702 389
> > >>> skype: samnco
> > >>> Twitter: @SaMnCo_23
> > >>> [image: View Samuel Cozannet's profile on LinkedIn]
> > >>> <https://es.linkedin.com/in/scozannet>
> > >>>
> > >>> On Thu, Mar 16, 2017 at 8:52 AM, Andrew Wilkins <
> > >>> andrew.wilk...@canonical.com> wrote:
> > >>>
> > >>> On Tue, Mar

Re: Can't juju deploy p2.xlarge in aws/us-east-1

2017-03-16 Thread Andrew Wilkins
On Thu, Mar 16, 2017 at 3:57 PM Samuel Cozannet <
samuel.cozan...@canonical.com> wrote:

> I am using the default settings, no change as far as I know to what Juju
> would do by default.
>

What Juju will do depends on what is available in your EC2 account. Not all
EC2 accounts were born alike.

If your account has a default VPC, that will be used by Juju. In that case,
you'll have p2 instance types available. I expect this to be the case for
most people - all accounts created since 2013-12-04 will have a default VPC.

If you've got an older account, then you may or may not have a default VPC.
If you do not, then Juju will fall back to EC2 Classic. In that case, no p2
instance types.

Cheers,
Andrew


> --
> Samuel Cozannet
> Cloud, Big Data and IoT Strategy Team
> Business Development - Cloud and ISV Ecosystem
> Changing the Future of Cloud
> Ubuntu <http://ubuntu.com>  / Canonical UK LTD <http://canonical.com> /
> Juju <https://jujucharms.com>
> samuel.cozan...@canonical.com
> mob: +33 616 702 389
> skype: samnco
> Twitter: @SaMnCo_23
> [image: View Samuel Cozannet's profile on LinkedIn]
> <https://es.linkedin.com/in/scozannet>
>
> On Thu, Mar 16, 2017 at 8:52 AM, Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
> On Tue, Mar 14, 2017 at 8:48 PM Tim Van Steenburgh <
> tim.van.steenbu...@canonical.com> wrote:
>
> 2.1.1 juju client and controller, controller bootstrapped in aws/us-east-1:
>
> juju deploy ./kubernetes-worker --constraints "instance-type=p2.xlarge" 
> kubernetes-worker-gpu
> Deploying charm "local:xenial/kubernetes-worker-1".
> ERROR cannot add application "kubernetes-worker-gpu": invalid constraint 
> value: instance-type=p2.xlarge
> valid values are: [m1.small cc2.8xlarge cr1.8xlarge g2.2xlarge r3.8xlarge 
> i2.xlarge t1.micro c1.xlarge g2.8xlarge m3.xlarge m3.medium c3.4xlarge 
> hs1.8xlarge r3.2xlarge m1.xlarge c3.xlarge c3.large c3.8xlarge r3.xlarge 
> m2.xlarge m1.large i2.2xlarge i2.8xlarge cg1.4xlarge d2.2xlarge m2.2xlarge 
> m3.2xlarge hi1.4xlarge m2.4xlarge r3.4xlarge r3.large d2.xlarge c1.medium 
> d2.8xlarge m3.large m1.medium c3.2xlarge i2.4xlarge d2.4xlarge]
>
> Are you using VPC? p2 instance types only support VPC.
>
> I /am/ able to deploy a p2.xlarge in aws/us-east-1 using the AWS console. 
> Looking at the code it seems this instance-type should be available: 
> https://github.com/juju/juju/blob/juju-2.1.1/provider/ec2/internal/ec2instancetypes/generated.go#L6165
>
> Not sure if this is a bug or PEBKAC. Grateful for any ideas while I continue 
> to poke at it.
>
>
> Tim
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Can't juju deploy p2.xlarge in aws/us-east-1

2017-03-16 Thread Andrew Wilkins
On Tue, Mar 14, 2017 at 8:48 PM Tim Van Steenburgh <
tim.van.steenbu...@canonical.com> wrote:

> 2.1.1 juju client and controller, controller bootstrapped in aws/us-east-1:
>
> juju deploy ./kubernetes-worker --constraints "instance-type=p2.xlarge" 
> kubernetes-worker-gpu
> Deploying charm "local:xenial/kubernetes-worker-1".
> ERROR cannot add application "kubernetes-worker-gpu": invalid constraint 
> value: instance-type=p2.xlarge
> valid values are: [m1.small cc2.8xlarge cr1.8xlarge g2.2xlarge r3.8xlarge 
> i2.xlarge t1.micro c1.xlarge g2.8xlarge m3.xlarge m3.medium c3.4xlarge 
> hs1.8xlarge r3.2xlarge m1.xlarge c3.xlarge c3.large c3.8xlarge r3.xlarge 
> m2.xlarge m1.large i2.2xlarge i2.8xlarge cg1.4xlarge d2.2xlarge m2.2xlarge 
> m3.2xlarge hi1.4xlarge m2.4xlarge r3.4xlarge r3.large d2.xlarge c1.medium 
> d2.8xlarge m3.large m1.medium c3.2xlarge i2.4xlarge d2.4xlarge]
>
Are you using VPC? p2 instance types only support VPC.

> I /am/ able to deploy a p2.xlarge in aws/us-east-1 using the AWS console. 
> Looking at the code it seems this instance-type should be available: 
> https://github.com/juju/juju/blob/juju-2.1.1/provider/ec2/internal/ec2instancetypes/generated.go#L6165
>
> Not sure if this is a bug or PEBKAC. Grateful for any ideas while I continue 
> to poke at it.
>
>
> Tim
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: What sort of encoding does the websocket API expect for config-yaml?

2017-02-28 Thread Andrew Wilkins
On Wed, Mar 1, 2017 at 6:42 AM Pete Vander Giessen <
pete.vandergies...@canonical.com> wrote:

> Hi All,
>
> I'm currently working on getting python-libjuju to successfully deploy the
> landscape-dense-maas bundle. It fails, as outlined in
> https://bugs.launchpad.net/juju/+bug/1651260.
>
> (python-libjuju is a Python client that talks to Juju's websocket API; I'm
> currently using it inside the matrix testing framework.)
>
> The comments in that bug suggest that the root of the problem is that I'm
> trying to deploy a charm (haproxy) that has an empty string as a default
> value, but python-libjuju is using the legacy "config" param when it calls
> ApplicationDeploy in the api, rather than the new "config-yaml" param. This
> sounds simple to fix -- all I have to do is change an arg from "config" to
> "config-yaml", and everything should work!
>
> One hitch: the "config" param expects a json object, which is what we get
> back when we make our initial call to the planner, but config-yaml expects
> a string with yaml in it (this is per the logic in deployApplication in
> juju/apiserver/application/application.go).
>
> That also sounds simple. As we do elsewhere in python-libjuju when we want
> to pass a yaml blob to the API, I use Python's handy yaml library, do
> yaml.dump(config), where "config" is the json config object I got from the
> planner, and everything should work!
>
> This is where I'm stuck. If I pass in such a string, the websocket API
> simply hangs and stops talking to me. I don't even see any error messages
> in the logs on my controller :-/
>
> Does anyone have any insight as to what I might be doing incorrectly? In
> Python3, yaml.dump will produce a utf-8 string by default. All of that will
> get serialized to json before being submitted over the websocket, though,
> so I don't *think* that it's an encoding issue. (Passing the bundle in as a
> yaml blob to the planner in the first place works.) The config object I get
> back from the planner isn't wrapped in an "options" key, but adding that
> key before dumping the config to a yaml string doesn't fix the problem -- I
> still see the hang.
>
> Apologies for the length of the post. And thanks in advance for anything
> you can do to get me unstuck!
>

I suggest you turn up logging (juju.apiserver=TRACE) on the controller, and
compare the API request payload that python-libjuju is sending to what
"juju deploy " sends.


> ~ PeteVG
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju 2.1.0, and Conjure-up, are here!

2017-02-24 Thread Andrew Wilkins
On Fri, Feb 24, 2017 at 6:51 PM Mark Shuttleworth <m...@ubuntu.com> wrote:

> On 24/02/17 11:30, Andrew Wilkins wrote:
>
> On Fri, Feb 24, 2017 at 6:15 PM Adam Collard <adam.coll...@canonical.com>
> wrote:
>
> On Fri, 24 Feb 2017 at 10:07 Adam Israel <adam.isr...@canonical.com>
> wrote:
>
> Thanks for calling this out, Simon! We should be shouting this from the
> rooftops and celebrating in the streets.
>
>
> Only if you also wave a big WARNING banner!
>
> I can definitely see value in pre-installing a bunch of things in your LXD
> images as a way of speeding up the development/testing cycle, but doing so
> might give you false confidence in your charm. It will become much easier
> to forget to list a package that you need installing,  or to ensure that
> you have the correct access (PPA credentials, or proxy details etc.) and
> having your charm gracefully handle when those are missing.
>
> Juju promises charms encoding operations that can work across multiple
> cloud providers, bare metal and containers; please keep that in mind :)
>
>
> Indeed, and this is the reason why it wasn't called out. We probably
> should document it for power-users/charmers, but in general I wouldn't go
> encouraging its use. Optimising for LXD is great for repeat deploys, but it
> wouldn't be great if that leads to less attention to quality on the rest of
> the providers.
>
> Anyway, I'm glad it's helping make charmers' lives easier!
>
>
> We should call this out loudly because it helps people making charms.
>
> Those people are plenty smart enough to debug a failure if they forget a
> dependency which was preinstalled in their dev images.
>

I was thinking about deployment times more than anything else. If you don't
feel your users' pain, you're less likely to make it go away. But anyway,
that can be fixed with automation as well (CI, as you say below).


> Don't HIDE something that helps developers for fear of those developers
> making mistakes, TEACH them to put CI or other out-of-band tests in place
> anyway that will catch that every time.
>

FWIW, it wasn't intentionally hidden to start with; it was just missed. I
made the changes primarily to support an external user who wanted to demo
CentOS charms on LXD; the change also enabled custom images in general, and
also slightly improved container startup time. Three birds, one stone; only
one bird-hitting was reported ;)

Cheers,
Andrew
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju 2.1.0, and Conjure-up, are here!

2017-02-24 Thread Andrew Wilkins
On Fri, Feb 24, 2017 at 6:15 PM Adam Collard 
wrote:

> On Fri, 24 Feb 2017 at 10:07 Adam Israel 
> wrote:
>
> Thanks for calling this out, Simon! We should be shouting this from the
> rooftops and celebrating in the streets.
>
>
> Only if you also wave a big WARNING banner!
>
> I can definitely see value in pre-installing a bunch of things in your LXD
> images as a way of speeding up the development/testing cycle, but doing so
> might give you false confidence in your charm. It will become much easier
> to forget to list a package that needs installing, to ensure that you
> have the correct access (PPA credentials, proxy details, etc.), or to
> have your charm gracefully handle the case where those are missing.
>
> Juju promises charms encoding operations that work across multiple cloud
> providers, bare metal, and containers, so please keep that in mind :)
>

Indeed, and this is the reason why it wasn't called out. We probably should
document it for power-users/charmers, but in general I wouldn't go
encouraging its use. Optimising for LXD is great for repeat deploys, but it
wouldn't be great if that leads to less attention to quality on the rest of
the providers.

Anyway, I'm glad it's helping make charmers' lives easier!
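
For anyone who wants to try it, a minimal sketch (the source image and the
alias target are assumptions on my part; the alias name follows the
juju/$series/$arch convention Simon describes below):

    # import an Ubuntu 16.04 image into the local LXD image store and
    # alias it so that juju 2.1+ will use it for xenial/amd64 units
    lxc image copy ubuntu:16.04 local: --alias juju/xenial/amd64

A nightly job that rebuilds a customised image and re-points the alias, as
in Simon's gist, builds on the same mechanism.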

Cheers,
Andrew


> On Fri, Feb 24, 2017 at 8:42 AM Stuart Bishop 
> wrote:
>
> On 23 February 2017 at 23:20, Simon Davy  wrote:
>
> > One thing that seems to have landed in 2.1, which is worth noting IMO, is
> > the local juju lxd image aliases.
> >
> > tl;dr: juju 2.1 now looks for the lxd image alias juju/$series/$arch in
> the
> > local lxd server, and uses that if it finds it.
> >
> > This is amazing. I can now build a local nightly image[1] that
> pre-installs
> > and pre-downloads a whole set of packages[2], and my local lxd units
> don't
> > have to install them when they spin up. Between layer-basic and Canonical
> > IS' basenode, for us that's about 111 packages that I don't need to
> install
> > on every machine in my 10 node bundle. Took my install hook times from
> 5min+
> > each to <1min, and probably halves my initial deploy time, on average.
>
> Ooh, thanks for highlighting this! I've needed this feature for a long
> time for exactly the same reasons.
>
>
> > [2] my current nightly cron:
> > https://gist.github.com/bloodearnest/3474741411c4fdd6c2bb64d08dc75040
>
> /me starts stealing
>
> --
> Stuart Bishop 
>
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
> --
> Adam Israel, Software Engineer
> Canonical // Cloud DevOps // Juju // Ecosystem
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: removing colours from loggo package

2017-02-21 Thread Andrew Wilkins
On Tue, Feb 21, 2017 at 10:02 PM roger peppe 
wrote:

> In August, the loggo package was changed to make it log ANSI-terminal
> colour escape sequences by default.
>
> I'm not sure that was the right decision, for a couple of reasons:
>
> - daemons are often given a pseudo tty to run in, so the log files
> produced by our running services are now obscured by escape sequences,
> making them hard to process automatically (and larger).
>
> - it means that the loggo dependencies are much larger.  Where
> previously the loggo package had no non-stdlib dependencies, it now
> depends on 5 significantly sized repositories holding >100k lines of
> source, including at least one system-dependent (Windows-only) repo.
>
> I'd like to propose that we remove the colour-by-default logic from
> loggo and move it to a separate package where it can be used if
> required by a given client. Perhaps github.com/utils/loggocolor might
> be a reasonable place.
>
> Then we can change the juju command line client to use that writer by
> default meaning there should be no change in externally visible
> behaviour to juju client users. The jujud server executable should
> probably not be using coloured logging output IMO.
>
> Thoughts?
>

Seems reasonable to me. I've added it to today's tech board agenda.


>   cheers,
> rog.
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: New in 2.1-beta5: Prometheus monitoring

2017-02-07 Thread Andrew Wilkins
On Tue, Feb 7, 2017 at 6:19 PM Jacek Nykis <jacek.ny...@canonical.com>
wrote:

> On 07/02/17 02:25, Andrew Wilkins wrote:
> > Hi folks,
> >
> > In the release notes there was an innocuous line about introspection
> > endpoints added to the controller. What this really means is that you can
> > now monitor Juju controllers with Prometheus. Juju controllers export
> > metrics, including:
> >  - API requests (total number and latencies by facade/method, grouped by
> > error code)
> >  - numbers of entities (models, users, machines, ...)
> >  - mgo/txn op counts
> >
> > We're working on getting the online docs updated. In the mean time,
> please
> > refer to https://github.com/juju/docs/issues/1624 for instructions on
> how
> > to set up Prometheus to scrape Juju. It would be great to get some early
> > feedback.
>
> Hi Andrew,
>
> Thanks! Those metrics will be super useful, I will try to find some time
> to look into them properly.
>
> Some early feedback:
> 1. Your docs say the metrics endpoint requires authentication. I think
> this can be problematic for people who run multiple controllers or
> recycle them often. Secret setup requires manual steps, and the secrets
> need to be distributed to the prometheus server. It would be very useful
> to allow
> unauthenticated access and rely on firewalls to restrict access
> (approach followed by most prometheus exporters I looked at).
> 2. You don't offer option to downgrade to HTTP which is problematic as
> well IMO. Similar to above it's an obstacle users have to go through
> before they can scrape targets, manual steps are required, CA certs need
> to be shipped around. It would be very convenient if users could
> explicitly fall back to http and let other layers to provide security.
>
> Basically I think letting users enable an unauthenticated HTTP endpoint
> for prometheus metrics would be a big usability win.
>

Thanks for the feedback, Jacek.

I agree that providing unauthenticated HTTP would be helpful for many
users. I don't think that should be the default, because some of the
metrics exposed could be considered sensitive. Also, it should be fairly
straightforward to automate the configuration of the Prometheus server.

Eventually, we intend for Juju itself to be described within the model.
When that is reality, it would be sensible for the Juju controller
application to have an endpoint for unauthenticated HTTP access to metrics.
You could then just bind that to a space that Prometheus can access.

In the interim, there is https://jujucharms.com/u/axwalk/juju-introspection/.
Deploy that to any machine in Juju (including but not limited to controller
machines), and you get access to that machine agent's metrics over
unauthenticated HTTP on a configurable port. PRs welcome if it doesn't
quite fit your needs.

Cheers,
Andrew
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


New in 2.1-beta5: Prometheus monitoring

2017-02-06 Thread Andrew Wilkins
Hi folks,

In the release notes there was an innocuous line about introspection
endpoints added to the controller. What this really means is that you can
now monitor Juju controllers with Prometheus. Juju controllers export
metrics, including:
 - API requests (total number and latencies by facade/method, grouped by
error code)
 - numbers of entities (models, users, machines, ...)
 - mgo/txn op counts

We're working on getting the online docs updated. In the mean time, please
refer to https://github.com/juju/docs/issues/1624 for instructions on how
to set up Prometheus to scrape Juju. It would be great to get some early
feedback.
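
If you want to experiment before the docs land, here is a rough sketch of
the setup. The metrics path, basic-auth user name format, and CA file
location are assumptions on my part; the linked issue is authoritative:

    # create a Juju user for Prometheus to authenticate as
    juju add-user prometheus
    juju change-user-password prometheus

    # prometheus.yml fragment; the controller API serves metrics over
    # HTTPS on port 17070
    cat >> prometheus.yml <<EOF
    scrape_configs:
      - job_name: juju
        metrics_path: /introspection/metrics
        scheme: https
        static_configs:
          - targets: ['<controller-host>:17070']
        basic_auth:
          username: user-prometheus
          password: <password>
        tls_config:
          ca_file: /path/to/controller-ca.crt
    EOF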

Cheers,
Andrew
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: RFC "bootstrap --config" should be treated as "--model-default" and add "--model-config"

2017-01-26 Thread Andrew Wilkins
On Thu, Jan 26, 2017 at 9:16 PM Mark Shuttleworth  wrote:

>
> Why do we have bootstrap-constraints as a weird and different constraint,
> when we are intending to represent the controller services as apps with
> endpoints in the controller model anyway? Surely these are normal
> constraints on those apps?
>

bootstrap-constraints would apply to the controller only, within the
controller model. When we do represent the controller as an app, it would
make sense to store the constraints on that app.

It's separate from --constraints because --constraints affects the default
model's constraints. If you just want bigger controller instances without
affecting anything else, that's when you pass --bootstrap-constraints.
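
For example (a sketch; the constraint values are illustrative only):

    # bigger controller instance, smaller machines in the default model
    juju bootstrap aws mycontroller \
        --bootstrap-constraints mem=8G \
        --constraints mem=4G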


>
> Mark
>
>
> On 26/01/17 11:56, John Meinel wrote:
>
> So we know have "--bootstrap-constraints" to be very clear when you are
> talking about the controller itself vs the other machines in the model.
>
> Entries in a "config:" section in ~/.local/share/juju/cloud.yaml (and I
> believe controllers.yaml) would show up in model-defaults and apply to all
> models on your controller.
>
> Given '--config' is what people are used to supplying, and is the wording
> used in configuration files, it feels natural to use it on the command line
> as well.
>
> I'm not 100% sure whether it should be --model-config to match 'juju
> model-config' or --bootstrap-config to match --bootstrap-constraints to
> make it clear that you are setting values for the controller model, and not
> for all models that you will be creating thereafter.
>
> I'm pretty sure Michael Foord brought something like this up in the past,
> and I'm realizing that it really does follow well from his proposal. I'd be
> ok with leaving --model-default as a sort of alias/explicit request, but it
> does feel like people using --config probably really do mean
> --model-default.
>
> John
> =:->
>
>
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju 2.x bootstrap on LXD broken (regular bridge, no NAT)

2017-01-23 Thread Andrew Wilkins
On Tue, Jan 24, 2017 at 12:54 AM John Meinel <j...@arbash-meinel.com> wrote:

> For your workaround, where does spec.Endpoint get filled in? By the user
> as part of bootstrap-contraints? or by juju as a default value? I don't see
> anything in your patch, which sounds like you would have to do:
>  juju bootstrap --bootstrap-constraints endpoint=HOSTIP lxd ...
> if you were working on a non-NAT bridge.
>
> Is that true?
>

The workaround involves adding a new cloud, as described in
https://bugs.launchpad.net/juju/+bug/1640455, Comment #9 and down.
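
In outline, the workaround looks something like this (the cloud name and
address are placeholders; the bug comments are the authoritative
reference):

    cat > mylxd.yaml <<EOF
    clouds:
      mylxd:
        type: lxd
        # address of the LXD host, reachable from inside the containers
        endpoint: 192.0.2.10
    EOF
    juju add-cloud mylxd mylxd.yaml
    juju bootstrap mylxd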


> John
> =:->
>
>
> On Mon, Jan 23, 2017 at 3:07 PM, Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
> On Mon, Jan 23, 2017 at 6:54 PM Toubeau, Anthony <
> anthony.toub...@intel.com> wrote:
>
> Hello all,
>
> I'd like to bring to your attention a currently broken bootstrapping
> scenario:
> Local deployment through LXD using a standard bridge instead of the usual
> LXD provided lxdbr0.
>
> As a development vehicle, a Juju install was planned reusing the
> LXD-host's network. This implies having the Juju agent plus the future
> charms on the same subnet as the actual LXD host.
> As detailed in Bug 1633788 (https://bugs.launchpad.net/juju/+bug/1633788),
> the LXD-host's gateway is assumed (based on the current code) to be
> providing the tools for the actual bootstrapping.
>
> But, in the "non-lxdbr0" case let's say, the actual gateway may only be a
> simple DHCP provider for example, hence leading to a failed deployment.
>
> Reference:
> https://github.com/juju/juju/blob/staging/provider/lxd/environ_raw.go#L140
>
> // getRemoteConfig returns a lxdclient.Config using a TCP-based remote
> // if called from within an instance started by the LXD provider. Otherwise,
> // it returns an error satisfying errors.IsNotFound.
> func getRemoteConfig(readFile readFileFunc, runCommand runCommandFunc)
> (*lxdclient.Config, error) {
> [...]
>
> // Read here...
> hostAddress, err := getDefaultGateway(runCommand)
>
> [...]
> return &lxdclient.Config{
> lxdclient.Remote{
> Name:  "remote",
> Host:  hostAddress,
> [...]
> },
> }, nil
>
>
> Is this behavior an assumed trend or could we consider fixing it to allow
> this sort of "localhost-based" deployments without NAT?
>
>
> A workaround has been implemented in the 2.1 branch, here:
>
> https://github.com/juju/juju/commit/19bf802db6511d2081369da2a3fe9b13f1bcb9fd
>
> To use this workaround, please see the comments on
> https://bugs.launchpad.net/juju/+bug/1640455. I'll mark that bug as a
> duplicate of #1633788 now.
>
> This is just a workaround though. We can (and probably should) go back
> to doing what we were doing. That was also imperfect: there was a built-in
> assumption that the host's address would never change. The ideal solution
> requires that LXD be changed; changes which got bumped this cycle.
>
> I'll look at reverting the default behaviour ASAP, if nobody else gets to
> it first.
>
> HTH,
> Andrew
>
>
> Many thanks in advance,
> Anthony
> -
> Intel Corporation SAS (French simplified joint stock company)
> Registered headquarters: "Les Montalets"- 2, rue de Paris,
> 92196 Meudon Cedex, France
> Registration Number:  302 456 199 R.C.S. NANTERRE
> Capital: 4,572,000 Euros
>
> This e-mail and any attachments may contain confidential material for
> the sole use of the intended recipient(s). Any review or distribution
> by others is strictly prohibited. If you are not the intended
> recipient, please contact the sender and delete all copies.
>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju 2.x bootstrap on LXD broken (regular bridge, no NAT)

2017-01-23 Thread Andrew Wilkins
On Mon, Jan 23, 2017 at 6:54 PM Toubeau, Anthony 
wrote:

> Hello all,
>
> I'd like to bring to your attention a currently broken bootstrapping
> scenario:
> Local deployment through LXD using a standard bridge instead of the usual
> LXD provided lxdbr0.
>
> As a development vehicle, a Juju install was planned reusing the
> LXD-host's network. This implies having the Juju agent plus the future
> charms on the same subnet as the actual LXD host.
> As detailed in Bug 1633788 (https://bugs.launchpad.net/juju/+bug/1633788),
> the LXD-host's gateway is assumed (based on the current code) to be
> providing the tools for the actual bootstrapping.
>
> But, in the "non-lxdbr0" case let's say, the actual gateway may only be a
> simple DHCP provider for example, hence leading to a failed deployment.
>
> Reference:
> https://github.com/juju/juju/blob/staging/provider/lxd/environ_raw.go#L140
>
> // getRemoteConfig returns a lxdclient.Config using a TCP-based remote
> // if called from within an instance started by the LXD provider. Otherwise,
> // it returns an error satisfying errors.IsNotFound.
> func getRemoteConfig(readFile readFileFunc, runCommand runCommandFunc)
> (*lxdclient.Config, error) {
> [...]
>
> // Read here...
> hostAddress, err := getDefaultGateway(runCommand)
>
> [...]
> return &lxdclient.Config{
> lxdclient.Remote{
> Name:  "remote",
> Host:  hostAddress,
> [...]
> },
> }, nil
>
>
> Is this behavior an assumed trend or could we consider fixing it to allow
> this sort of "localhost-based" deployments without NAT?
>

A workaround has been implemented in the 2.1 branch, here:

https://github.com/juju/juju/commit/19bf802db6511d2081369da2a3fe9b13f1bcb9fd

To use this workaround, please see the comments on
https://bugs.launchpad.net/juju/+bug/1640455. I'll mark that bug as a
duplicate of #1633788 now.

This is just a workaround though. We can (and probably should) go back
to doing what we were doing. That was also imperfect: there was a built-in
assumption that the host's address would never change. The ideal solution
requires that LXD be changed; changes which got bumped this cycle.

I'll look at reverting the default behaviour ASAP, if nobody else gets to
it first.

HTH,
Andrew


> Many thanks in advance,
> Anthony
> -
> Intel Corporation SAS (French simplified joint stock company)
> Registered headquarters: "Les Montalets"- 2, rue de Paris,
> 92196 Meudon Cedex, France
> Registration Number:  302 456 199 R.C.S. NANTERRE
> Capital: 4,572,000 Euros
>
> This e-mail and any attachments may contain confidential material for
> the sole use of the intended recipient(s). Any review or distribution
> by others is strictly prohibited. If you are not the intended
> recipient, please contact the sender and delete all copies.
>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: "unfairness" of juju/mutex

2016-11-16 Thread Andrew Wilkins
On Wed, Nov 16, 2016 at 11:26 PM John Meinel  wrote:

> So we just ran into an issue when you are running multiple units on the
> same machine and one of them is particularly busy.
>
> The specific case is when deploying Openstack and colocating things like
> "monitoring" charms with the "keystone" charm. Keystone itself has *lots* of
> things that relate to it, so it wants to fire something like 50
> relation-joined+changed hooks.
>
> The symptom is that unit-keystone ends up acquiring and re-acquiring the
> uniter hook lock for approximately 50 minutes and starves out all other
> units from coming up, because they can't run any of their hooks.
>
> From what I can tell, on Linux we are using
> net.Listen("abstract-unix-socket") and then polling at a 250ms interval to
> see if we can grab that socket.
>
> However, that means that every process that *doesn't* have the lock has
> an average time of 125ms to wake up and notice that the lock isn't held.
> However, a process that had the lock but has more hooks to fire is just
> going to release the lock, do a bit of logic, and then be ready to acquire
> the lock again, most likely much faster than 125ms.
>
> We *could* introduce some sort of sleep there, to give some other
> processes a chance. And/or use a range of times, instead of a fixed 250ms.
> (If sometimes you sleep for 50ms, etc).
>
> However, if we were using something like 'flock' then it has a blocking
> mode, where it can give you the lock as soon as someone else releases it.
>
> AIUI the only reason we liked abstract-unix-sockets was to not have a file
> on disk, but we had a whole directory on disk, and flock seems like it
> still gives us better sharing primitives than net.Listen.
>

+1 to blocking file lock. We could probably leave Windows alone, and just
do that on *nix.
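
For illustration, flock(1) from util-linux shows the blocking behaviour
we'd get (a sketch of the semantics only, not the juju/mutex
implementation):

    # terminal 1: take the lock and hold it for a while
    flock /tmp/uniter.lock -c 'sleep 60'

    # terminal 2: blocks until terminal 1 releases, then runs immediately;
    # the kernel hands the lock over, with no polling interval involved
    flock /tmp/uniter.lock -c 'echo acquired'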

Thoughts?
> John
> =:->
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Instance metadata for private cloud.

2016-11-11 Thread Andrew Wilkins
On Thu, Nov 10, 2016 at 11:52 PM Jonathan Proulx  wrote:

> Hi All,
>

Hi Jon,


> Trying to test out juju on my private OpenStack cloud.
>
> Having trouble bootstraping.
>

First, sorry. We recognise that the experience for setting up image
metadata is not great. We've got some ideas about improving the validation
tools, and generally simplifying the process. I can't say when that'll get
implemented, but it is on the list.


> I've followed https://jujucharms.com/docs/stable/howto-privatecloud to
> generate image metadata in ~/simplestreams and nomatter how I feed it
> in I get:
>
> ERROR failed to bootstrap model: no image metadata found
>
>
> I've tried the 'juju bootstrap --metadata-source
> /afs/csail.mit.edu/u/j/jon/simplestreams' suggested by the output of
> 'juju metadata generate-image'
>
> I've tried uploading to OpenStack object store as described in
> https://jujucharms.com/docs/stable/howto-privatecloud and creating a
> service and an endpoint for it. This may have failed because our
> object store is ceph not swift so semantics might be different.
>
> Also tried copying the simplestreams directory to public webspace and
> specifying:
>
> juju bootstrap tigtest tigtest-CSAIL_Stata --config image-metadata-url=
> https://people.csail.mit.edu/jon/simplestreams


It looks to me that you just need to append "/images" to that URL. I've
just gone ahead and run the same command but with
"https://people.csail.mit.edu/jon/simplestreams/images".

I just hacked up my client to query as if it were your setup, with a region
name of "CSAIL_Stata" and keystone endpoint of
"https://keystone.csail.mit.edu:35358/v3". With the adjusted
image-metadata-url, my client found the image metadata.
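
For reference, a sketch of the full flow with your values (the image ID is
a placeholder):

    juju metadata generate-image -d ~/simplestreams -i <image-id> \
        -s xenial -r CSAIL_Stata -u https://keystone.csail.mit.edu:35358/v3
    # publish ~/simplestreams/images under your webspace, then:
    juju bootstrap tigtest tigtest-CSAIL_Stata --config \
        image-metadata-url=https://people.csail.mit.edu/jon/simplestreams/images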

HTH.

Cheers,
Andrew


> and stepping out that url one level at a time as far as
>
> https://people.csail.mit.edu/jon/simplestreams/images/streams/v1/com.ubuntu.cloud-released-imagemetadata.json
>
> but always errors out the same way.
>
> What am I missing here?
>
>
> -Jon
> --
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju agent status remains "Allocating" and is not starting on MAAS 2.0 when --storage option is used

2016-11-08 Thread Andrew Wilkins
On Tue, Nov 8, 2016 at 8:29 PM Shilpa Kaul  wrote:

> Hi,
>
> I am trying to deploy a charm on MAAS 2.0 and making use of Juju Storage
> feature, so when I deploy my charm as shown below:
> *juju deploy  ibm-spectrum-scale  --storage disks=maas*, the machine gets
> started and status on MAAS is "deployed" but the juju agent does not start.
> The agent message is "*agent initializing*" and nothing happens after
> that.
>
> The same charm gets deployed if I dont give the --storage parameter during
> deploy time or use *--storage disks=loop*. This issue is seen only when I
> deploy the charm with *--storage disks=maas*. I have tried deploying the
> charm on AWS environment with --storage disks=ebs, there it works fine and
> no issues are seen.
>
> In the juju logs I see the below messages:
> to agent config as ["192.168.100.101:17070" "192.168.122.101:17070"]
> 2016-11-08 17:47:59 ERROR juju.worker.dependency engine.go:539
> "metric-collect" manifold worker returned unexpected error: failed to read
> charm from: /var/lib/juju/agents/unit-ibm-spectrum-scale-manager2-2/charm:
> stat /var/lib/juju/agents/unit-ibm-spectrum-scale-manager2-2/charm: no such
> file or directory
> 2016-11-08 17:47:59 INFO juju.worker.leadership tracker.go:184
> ibm-spectrum-scale-manager2/2 will renew ibm-spectrum-scale-manager2
> leadership at 2016-11-08 17:48:29.13919966 + UTC
> 2016-11-08 17:47:59 INFO juju.worker.meterstatus connected.go:112 skipped
> "meter-status-changed" hook (missing)
> 2016-11-08 17:47:59 INFO worker.uniter.jujuc tools.go:20 ensure jujuc
> symlinks in /var/lib/juju/tools/unit-ibm-spectrum-scale-manager2-2
> 2016-11-08 17:47:59 INFO worker.uniter.jujuc tools.go:40 was a symlink,
> now looking at /var/lib/juju/tools/2.0.1-xenial-amd64
> 2016-11-08 17:47:59 INFO juju.worker.uniter uniter.go:159 unit
> "ibm-spectrum-scale-manager2/2" started
> 2016-11-08 17:47:59 INFO juju.worker.uniter uniter.go:168 resuming charm
> install
> 2016-11-08 17:47:59 INFO juju.worker.uniter.charm bundles.go:77
> downloading local:xenial/ibm-spectrum-scale-manager2-3 from API server
> 2016-11-08 17:47:59 INFO juju.downloader download.go:111 downloading from
> local:xenial/ibm-spectrum-scale-manager2-3
> 2016-11-08 17:47:59 INFO juju.downloader download.go:94 download complete
> ("local:xenial/ibm-spectrum-scale-manager2-3")
> 2016-11-08 17:47:59 INFO juju.downloader download.go:174 download verified
> ("local:xenial/ibm-spectrum-scale-manager2-3")
>
> I compared the logs of the charm that I deployed on AWS and on MAAS. On
> AWS the same messages are logged when I deploy the charm; the only
> difference is that there it resumes the correct unit, i.e.
> *ibm-spectrum-scale-manager2-2*, but in the case of MAAS, it resumes for
> unit *ibm-spectrum-scale-manager2-3*, which does not exist.
>
> Can anyone please help me resolve this issue? I need to test my charm
> deployment on MAAS using Juju storage, but I am unable to do so due to
> the behavior noted above.


Hi Shilpa,

AFAICT, this is the same issue as you mentioned in the thread "Regarding
juju Storage - using MAAS as cloud provider". Can you please respond to
Blake's request for information in that thread? That should help us
identify the problem.

Thanks,
Andrew


> Thanks and Regards,
> Shilpa Kaul
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: [ANN] Mattermost and layer:lets-encrypt

2016-10-24 Thread Andrew Wilkins
On Tue, Oct 25, 2016 at 2:22 AM Casey Marshall <casey.marsh...@canonical.com>
wrote:

> On Mon, Oct 24, 2016 at 3:51 AM, Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
> On Sun, Oct 16, 2016 at 12:07 AM Casey Marshall <
> casey.marsh...@canonical.com> wrote:
>
> With a much appreciated recent contribution from James Beedy (bdx) our
> Mattermost charm cs:~cmars/mattermost[1] is working again!
>
> This encouraged me to add some followup improvements to the charm to make
> it even easier to deploy with secure defaults -- automatic registration
> with Let's Encrypt.
>
> I've broken out the Let's Encrypt integration to a separate reactive charm
> layer, layer:lets-encrypt[2], which extends layer:nginx. Practically any
> charm that uses layer:nginx should be able to use layer:lets-encrypt. When
> a config option `fqdn` is set, the layer will automatically register with
> LE and obtain certificates.
>
>
> Nice, thank you. This will be useful for some stuff I'm working on.
>
> It would be nice if the nginx charm supported (included) this by default.
> Is there a reason it shouldn't? Having to create a separate charm that
> includes both means you then have to track changes.
>
>
> I came to the conclusion that layer:nginx and layer:lets-encrypt should
> each do one thing well for better reuse. layer:lets-encrypt no longer
> includes layer:nginx, but they're easy to use together.
>
> layer:lets-encrypt now just obtains certs, which you can then use in nginx
> templates -- or in any web application configuration, really. See the
> latest README[1] for up-to-date usage.
>

OK, indeed sounds better than including nginx. I'll have a play with it
soon.
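
Something like this is how I'd expect to consume it (the domain is a
placeholder; the layer README is the authoritative reference):

    cat > mattermost-config.yaml <<EOF
    mattermost:
      fqdn: chat.example.com
    EOF
    juju deploy cs:~cmars/mattermost --config mattermost-config.yaml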

Cheers,
Andrew


> -Casey
>
> [1] https://github.com/cmars/layer-lets-encrypt
>
>
>
> With the epic release of Juju 2.0 this past week -- the Mattermost charm
> depends on several features new in Juju 2 -- it's now ready to share.
>
> I'm operating a deployment of cs:~cmars/mattermost at
> https://live.cmars.tech/. If you'd like to try it out, here's an invite:
> https://live.cmars.tech/signup_user_complete/?id=wccpdu6dqp88mjkqbngnws6ohc
>
> -Casey
>
> [1] https://jujucharms.com/u/cmars/mattermost
> https://github.com/cmars/juju-charm-mattermost
> [2] https://github.com/cmars/layer-lets-encrypt
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: [ANN] Mattermost and layer:lets-encrypt

2016-10-24 Thread Andrew Wilkins
On Sun, Oct 16, 2016 at 12:07 AM Casey Marshall <
casey.marsh...@canonical.com> wrote:

> With a much appreciated recent contribution from James Beedy (bdx) our
> Mattermost charm cs:~cmars/mattermost[1] is working again!
>
> This encouraged me to add some followup improvements to the charm to make
> it even easier to deploy with secure defaults -- automatic registration
> with Let's Encrypt.
>
> I've broken out the Let's Encrypt integration to a separate reactive charm
> layer, layer:lets-encrypt[2], which extends layer:nginx. Practically any
> charm that uses layer:nginx should be able to use layer:lets-encrypt. When
> a config option `fqdn` is set, the layer will automatically register with
> LE and obtain certificates.
>

Nice, thank you. This will be useful for some stuff I'm working on.

It would be nice if the nginx charm supported (included) this by default.
Is there a reason it shouldn't? Having to create a separate charm that
includes both means you then have to track changes.

With the epic release of Juju 2.0 this past week -- the Mattermost charm
> depends on several features new in Juju 2 -- it's now ready to share.
>
> I'm operating a deployment of cs:~cmars/mattermost at
> https://live.cmars.tech/. If you'd like to try it out, here's an invite:
> https://live.cmars.tech/signup_user_complete/?id=wccpdu6dqp88mjkqbngnws6ohc
>
> -Casey
>
> [1] https://jujucharms.com/u/cmars/mattermost
> https://github.com/cmars/juju-charm-mattermost
> [2] https://github.com/cmars/layer-lets-encrypt
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju 2.0 is here!

2016-10-14 Thread Andrew Wilkins
On Fri, 14 Oct 2016, 12:34 p.m. Nicholas Skaggs, <
nicholas.ska...@canonical.com> wrote:

Juju 2.0 is here! This release has been a year in the making. We’d like
to thank everyone for their feedback, testing, and adoption of juju 2.0
throughout its development process! Juju brings refinements in ease of
use, while adding support for new clouds and features.


What an epic release! I for one am proud of what we've created together.
Great work everyone.

## New to juju 2?

https://jujucharms.com/docs/2.0/getting-started

## Need to install it?

If you are running Ubuntu, you can get it from the juju stable ppa:

 sudo add-apt-repository ppa:juju/stable
 sudo apt update; sudo apt install juju-2.0

Or install it from the snap store

 snap install juju --beta --devmode

Windows, Centos, and MacOS users can get a corresponding installer at:

https://launchpad.net/juju/+milestone/2.0.0

## Want to upgrade to GA?

Those of you running an RC version of juju 2 can upgrade to this release
by running:

juju upgrade-juju

## Feedback Appreciated!

We encourage everyone to subscribe the mailing list at
juju@lists.ubuntu.com and join us on #juju on freenode. We would love to
hear
your feedback and usage of juju.


--
Juju-dev mailing list
juju-...@lists.ubuntu.com
Modify settings or unsubscribe at:
https://lists.ubuntu.com/mailman/listinfo/juju-dev
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Big memory usage improvements

2016-10-12 Thread Andrew Wilkins
On Thu, 13 Oct 2016, 4:39 a.m. Menno Smits, 
wrote:

> Christian (babbageclunk) has been busy fixing various memory leaks in the
> Juju controllers and has made some significant improvements. Chris
> (veebers) has been tracking resource usage for a long running test which
> adds and removes a bunch of models and he noticed the difference.
>
> Take a look at the memory usage graphs here:
>
> Before: http://people.canonical.com/~leecj2/perfscalemem/
> After: http://people.canonical.com/~leecj2/perfscalemem2/
>

Very nice. Thank you Christian and Chris.

Interestingly the MongoDB memory usage profile is quite different as well.
> I'm not sure if this is due to Christian's improvements or something else.
>
> There's possibly still some more small leaks somewhere but this is
> fantastic regardless. Thanks to Christian for tackling this and Chris for
> tracking the numbers.
>
> - Menno
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Upgrading to VPC on AWS

2016-10-09 Thread Andrew Wilkins
On Fri, Oct 7, 2016 at 11:31 PM Mark Shuttleworth  wrote:

> Hi folks
>
> My AWS account pre-dates VPCs, so I didn't have one by default. I've now
> added one, how can I update my credential / controller to make use of it
> by default?
>

At the moment the only way to use a non-default VPC is to specify it
explicitly when bootstrapping or adding a model, using "--config
vpc-id=vpc-xyz". There are some requirements for the VPC, which bootstrap
will inform you of if they are not met. The error message is pasted below.
A couple of additional things I found while trying it out just now:
 - you should have a subnet for each availability zone
 - to create the "default route", set the destination to 0.0.0.0/0. Your routes
should look like:
    Destination      Target         Status   Propagated
    10.0.0.0/16      local          Active   No
    0.0.0.0/0        igw-5bd2103c   Active   No

We'll be making this all automatic. If it can, Juju will create a suitable
VPC, subnets, etc. for you to use, and then use it without you having to
specify any config.
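
Until then, satisfying requirements 2-4 from the list below for an
existing VPC looks roughly like this with the AWS CLI (all IDs are
placeholders):

    aws ec2 create-internet-gateway
    aws ec2 attach-internet-gateway --internet-gateway-id igw-xxx --vpc-id vpc-xxx
    aws ec2 create-route --route-table-id rtb-xxx \
        --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxx
    aws ec2 modify-subnet-attribute --subnet-id subnet-xxx --map-public-ip-on-launch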

Cheers,
Andrew

"""
The given vpc-id does not meet one or more of the following minimum
Juju requirements:

1. VPC should be in "available" state and contain one or more subnets.
2. An Internet Gateway (IGW) should be attached to the VPC.
3. The main route table of the VPC should have both a default route
   to the attached IGW and a local route matching the VPC CIDR block.
4. At least one of the VPC subnets should have MapPublicIPOnLaunch
   attribute enabled (i.e. at least one subnet needs to be 'public').
5. All subnets should be implicitly associated to the VPC main route
   table, rather than explicitly to per-subnet route tables.

A default VPC already satisfies all of the requirements above. If you
still want to use the VPC, try running 'juju bootstrap' again with:

  --config vpc-id=%s --config vpc-id-force=true

to force Juju to bypass the requirements check (NOT recommended unless
you understand the implications: most importantly, not being able to
access the Juju controller, likely causing bootstrap to fail, or trying
to deploy exposed workloads on instances started in private or isolated
subnets).
"""


> Mark
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju 2.0-rc3 is here!

2016-10-06 Thread Andrew Wilkins
On Fri, Oct 7, 2016 at 6:15 AM Curtis Hovey-Canonical 
wrote:

> A new development release of Juju, 2.0-rc3, is here!
>
>
> ## What's new?
>
> * For an AWS VPC account juju will create a t2.medium for controller
>   instances by default now. Non-controller instances are unchanged for
>   now, and remain m3.medium by default. Controller instance root disk
>   now defaults to 32GiB, but can be overridden with constraints.
> * Shorten the hostnames we apply to instances created by the OpenStack
>   provider.
> Example old hostname:
> juju-fd943864-df2e-4da1-8e7d-5116a87d4e7c-machine-14
>
> Example new hostname:
> juju-df7591-controller-0
> * Added support for LXD 2.3 apis
> * New update-credential command
> * Added --model-default option to the bootstrap
> * LXD containers now have proper hostnames set
>

Also, support for the aws/ap-south-1 region has been added.

Adam Stokes just found a bug related to this, which prevented him from
being able to destroy his rc2 controller with an rc3 client. I'll fix this
for 2.0, but in the mean time I recommend you destroy your controller
before upgrading the client.

If you do upgrade the client, then you will need to modify
~/.local/share/juju/bootstrap-config.yaml, fixing the endpoint cached in
there. You can find the correct value by running "juju show-cloud aws", and
picking out the value associated with the region you bootstrapped.
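
Roughly (a sketch; the exact layout of bootstrap-config.yaml may differ):

    # find the correct endpoint for the region you bootstrapped
    juju show-cloud aws
    # then edit the cached value for your controller
    $EDITOR ~/.local/share/juju/bootstrap-config.yaml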

Cheers,
Andrew


> ## How do I get it?
>
> If you are running Ubuntu, you can get it from the juju devel ppa:
>
> sudo add-apt-repository ppa:juju/devel
> sudo apt-get update; sudo apt-get install juju-2.0
>
> Windows, Centos, and MacOS users can get a corresponding installer at:
>
> https://launchpad.net/juju/+milestone/2.0-rc3
>
>
> ## Feedback Appreciated!
>
> We encourage everyone to subscribe the mailing list at
> juju@lists.ubuntu.com and join us on #juju on freenode. We would love to
> hear
> your feedback and usage of juju.
>
>
> ## Anything else?
>
> You can read more information about what's in this release by viewing the
> release notes here:
>
> https://jujucharms.com/docs/devel/temp-release-notes
>
>
> --
> Curtis Hovey
> Canonical Cloud Development and Operations
> http://launchpad.net/~sinzui
>
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Upcoming changes for the ec2 provider

2016-10-06 Thread Andrew Wilkins
Hi folks,

Just a heads up to let you know about some changes made to the ec2
provider, which will show up in Juju 2.0.

We are now pulling down the Price List API (
http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/price-changes.html)
and using that to determine which instance types to launch. This means that
you will have access to more instance types now, and we should have more
accurate/up-to-date pricing information, and region availability.

The default constraints for controllers has been modified, so that on ec2
we will now default controller machines to t2.medium, as long as you have a
default VPC in your region, or you specify one by ID (--config vpc-id=...).
We were previously defaulting to m3.medium. The t2.medium instances have a
little more memory, and 2 CPUs instead of 1. They are burstable instances,
so they should suit controllers managing a small-to-medium set of
applications. Controllers are now started with 32GiB root disk, instead of
8GiB as they were before.

As before, you can override the defaults by specifying constraints at
bootstrap time. Also, these changes do not affect non-controller instances.
If you use "add-machine", you will get an m3.medium if your region supports
them.
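
For example, to override the new controller defaults at bootstrap time (a
sketch; the values are illustrative):

    juju bootstrap aws/us-east-1 mycontroller \
        --bootstrap-constraints "instance-type=m4.large root-disk=64G"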

Cheers,
Andrew
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Reviews on Github

2016-09-21 Thread Andrew Wilkins
On Wed, Sep 21, 2016 at 5:53 AM Menno Smits 
wrote:

> Some of us probably got a little excited (me included). There should be
> discussion and a clear announcement before we make a significant change to
> our process. The tech board meeting is today/tonight so we'll discuss it
> there as per Rick's email. Please contribute to this thread if you haven't
> already and have strong opinions either way on the topic.
>

We discussed Github reviews vs. Reviewboard at the tech board meeting
today, and we all agreed that we should go ahead with a trial for 2 weeks.

There are pros and cons to each; neither is perfect. You can find the main
points of discussion in the tech board agenda.

Please give it a shot and provide your criticisms so we decide on the best
path forward at the end of the trial.

Cheers,
Andrew

> Interestingly our Github/RB integration seems to have broken a little since
> Github made these changes. The links to Reviewboard on pull requests aren't
> getting inserted any more. If we decide to stay with RB
>
> On 21 September 2016 at 05:54, Rick Harding 
> wrote:
>
>> I spoke with Alexis today about this and it's on her list to check with
>> her folks on this. The tech board has been tasked with he decision, so
>> please feel free to shoot a copy of your opinions their way. As you say, on
>> the one hand it's a big impact on the team, but it's also a standard
>> developer practice that not everyone will agree with so I'm sure the tech
>> board is a good solution to limiting the amount of bike-shedding and to
>> have some multi-mind consensus.
>>
>> On Tue, Sep 20, 2016 at 1:52 PM Katherine Cox-Buday <
>> katherine.cox-bu...@canonical.com> wrote:
>>
>>> Seems like a good thing to do would be to ensure the tech board doesn't
>>> have any objections and then put it to a vote since it's more a property of
>>> the team and not the codebase.
>>>
>>> I just want some consistency until a decision is made. E.g. "we will be
>>> trying out GitHub reviews for the next two weeks; all reviews should be
>>> done on there".
>>>
>>> --
>>> Katherine
>>>
>>> Nate Finch  writes:
>>>
>>> > Can we try reviews on github for a couple weeks? Seems like we'll
>>> > never know if it's sufficient if we don't try it. And there's no setup
>>> > cost, which is nice.
>>> >
>>> > On Tue, Sep 20, 2016 at 12:44 PM Katherine Cox-Buday
>>> >  wrote:
>>> >
>>> > I see quite a few PRs that are being reviewed in GitHub and not
>>> > ReviewBoard. I really don't care where we do them, but can we
>>> > please pick a direction and move forward? And until then, can we
>>> > stick to our previous decision and use RB? With people using both
>>> > it's much more difficult to tell what's been reviewed and what
>>> > hasn't.
>>> >
>>> > --
>>> > Katherine
>>> >
>>> > Nate Finch  writes:
>>> >
>>> > > In case you missed it, Github rolled out a new review process.
>>> > It
>>> > > basically works just like reviewboard does, where you start a
>>> > review,
>>> > > batch up comments, then post the review as a whole, so you don't
>>> > just
>>> > > write a bunch of disconnected comments (and get one email per
>>> > review,
>>> > not per comment). The only features reviewboard has that Github
>>> > doesn't are the edge cases we rarely use: like using rbt to post
>>> > a review from a
>>> > > random diff that is not connected directly to a github PR. I
>>> > think
>>> > > that is easy enough to give up in order to get the benefit of
>>> > not
>>> > > needing an entirely separate system to handle reviews.
>>> > >
>>> > > I made a little test review on one PR here, and the UX was
>>> > almost
>>> > > exactly like working in reviewboard:
>>> > > https://github.com/juju/juju/pull/6234
>>> > >
>>> > > There may be important edge cases I'm missing, but I think it's
>>> > worth
>>> > > looking into.
>>> > >
>>> > > -Nate
>>>
>>> --
>>> Juju-dev mailing list
>>> Juju-dev@lists.ubuntu.com
>>> Modify settings or unsubscribe at:
>>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>>
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
>>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju 2.0-rc1 is here!

2016-09-21 Thread Andrew Wilkins
On Wed, Sep 21, 2016 at 1:56 PM Curtis Hovey-Canonical 
wrote:

> A new development release of Juju, 2.0-rc1, is here!
>

Woohoo!


> ## What's New in RC1
>
> * The Juju client now works on any Linux flavour. When bootstrapping
>   with local tools, it's now possible to create a controller of any
>   supported Linux series regardless of the Linux flavour the client
>   is running on.
> * Juju resolved command retries failed hooks by default:
>   juju resolved  // marks unit errors resolved and retries failed
> hooks
>   juju resolved --no-retry  //marks unit errors resolved w/o
> retrying hooks
> * MAAS 2.0 Juju provider has been updated to use MAAS API 2.0's owner
>   data for instance tagging.
> * Networking fixes for containers in MAAS 2.0 when the parent device is
>   unconfigured. (#1566791)
> * Azure provider performance has been enhanced, utilising Azure Resource
>   Manager templates, and improved parallelisation.
> * Azure provider now supports an "interactive" auth-type, making it much
>   easier to set up credentials for bootstrapping. The "userpass"
>   auth-type has been deprecated, and replaced with
>   "service-principal-secret".
>

In case anyone jumps right on this, please note that
https://streams.canonical.com/juju/public-clouds.syaml isn't yet updated.
It will be updated soon, but in the mean time, if you want to try out the
azure interactive add-credential, make sure you:
 - delete ~/.local/share/juju/public-clouds.yaml (if it exists)
 - *don't* run "juju update-clouds" until that file is updated
Then Juju will use the cloud definitions built into the client.
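
In short (a sketch; the interactive prompts guide you through the rest):

    rm -f ~/.local/share/juju/public-clouds.yaml
    juju add-credential azure    # choose the "interactive" auth-type
    juju bootstrap azure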

Cheers,
Andrew


> ## How do I get it?
>
> If you are running Ubuntu, you can get it from the juju devel ppa:
>
> sudo add-apt-repository ppa:juju/devel
> sudo apt-get update; sudo apt-get install juju-2.0
>
> Or install it from the snap store
>
> snap install juju --beta --devmode
>
> Windows, Centos, and OS X users can get a corresponding installer at:
>
> https://launchpad.net/juju/+milestone/2.0-rc1
>
>
> ## Feedback Appreciated!
>
> We encourage everyone to subscribe the mailing list at
> juju@lists.ubuntu.com and join us on #juju on freenode. We would love
> to hear your feedback and usage of juju.
>
>
> ## Anything else?
>
> You can read more information about what's in this release by viewing
> the release notes here:
>
> https://jujucharms.com/docs/devel/temp-release-notes
>
>
> --
> Curtis Hovey
> Canonical Cloud Development and Operations
> http://launchpad.net/~sinzui
>
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Upcoming Azure auth changes

2016-09-20 Thread Andrew Wilkins
On Thu, Sep 15, 2016 at 9:56 AM Andrew Wilkins <andrew.wilk...@canonical.com>
wrote:

> On Thu, Sep 15, 2016 at 9:15 AM Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
>> Hi folks,
>>
>> Just a heads up, there will be some changes to authentication in the
>> Azure provider. When https://github.com/juju/juju/pull/6247 lands (if
>> you're working off master), or otherwise when rc1 is out, you will need to
>> remove "tenant-id" from your credentials.yaml.
>>
>
> Slight change of plans. I'm going to deprecate the "userpass"
> authentication type, but keep it working until rc1 is out.
>
> There will be two new auth-types: "service-principal-secret", and
> "interactive". The former is a replacement for userpass, and has exactly
> the same attributes as today, minus the tenant-id. "Interactive" is a work
> in progress. I'll write back about it once the work is progressed enough
> that I can explain how to use it.
>

The "interactive" auth-type for Azure has just landed on master, so it
should be in 2.0 RC1. This is now the default auth-type when using "juju
add-credential azure".

To add a credential for Azure, you now do the following:
 - run "juju add-credential azure"
 - enter credential name of your choosing
 - select the "interactive" auth-type
 - enter your subscription ID [0]
 - you will now be prompted to open a URL and enter a code, then proceed to
authenticate with Azure and authorise Juju to create credentials on your
behalf. Once you have given your consent, Juju will do the rest.
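
Put differently, the whole flow hangs off a single command (a sketch; the
prompt wording below is paraphrased, not verbatim):

    juju add-credential azure
    # prompts, in order: a credential name, the "interactive" auth-type,
    # your subscription ID, then a URL and device code to authorise Juju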

Cheers,
Andrew

[0] your subscription ID can be found in the Azure Portal, under the
Subscriptions blade:
https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Upcoming Azure auth changes

2016-09-14 Thread Andrew Wilkins
On Thu, Sep 15, 2016 at 9:15 AM Andrew Wilkins <andrew.wilk...@canonical.com>
wrote:

> Hi folks,
>
> Just a heads up, there will be some changes to authentication in the Azure
> provider. When https://github.com/juju/juju/pull/6247 lands (if you're
> working off master), or otherwise when rc1 is out, you will need to remove
> "tenant-id" from your credentials.yaml.
>

Slight change of plans. I'm going to deprecate the "userpass"
authentication type, but keep it working until rc1 is out.

There will be two new auth-types: "service-principal-secret", and
"interactive". The former is a replacement for userpass, and has exactly
the same attributes as today, minus the tenant-id. "Interactive" is a work
in progress. I'll write back about it once the work is progressed enough
that I can explain how to use it.
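
For reference, a hedged sketch of what a credentials.yaml entry might look
like after the change (the attribute names are my assumption, carried over
from the userpass type minus tenant-id):

    credentials:
      azure:
        my-credential:
          auth-type: service-principal-secret
          application-id: <application-id>
          application-password: <application-password>
          subscription-id: <subscription-id>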


> There is more work underway to improve the bootstrap/credentials
> experience for Azure.
>
> Cheers,
> Andrew
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Upcoming Azure auth changes

2016-09-14 Thread Andrew Wilkins
Hi folks,

Just a heads up, there will be some changes to authentication in the Azure
provider. When https://github.com/juju/juju/pull/6247 lands (if you're
working off master), or otherwise when rc1 is out, you will need to remove
"tenant-id" from your credentials.yaml.

There is more work underway to improve the bootstrap/credentials experience
for Azure.

Cheers,
Andrew
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Reviews on Github

2016-09-14 Thread Andrew Wilkins
On Thu, Sep 15, 2016 at 6:22 AM Rick Harding 
wrote:

> I think that the issue is that someone has to maintain the RB and the
> cost/time spent on that does not seem commensurate with the bonus features
> in my experience.
>

Agreed and +1. I propose we all try it for a couple of weeks, and see how
we feel about it then. RB isn't going to go anywhere soon - it's just a
matter of whether we keep our instance alive.

In case anyone's wondering about pipelines: it looks like you can review on
individual commits, so that's covered.

On Wed, Sep 14, 2016 at 6:13 PM Ian Booth  wrote:
>
>> One thing review board does better is use gutter indicators so as not to
>> interrupt the flow of reading the code with huge comment blocks. It also
>> seems
>> much better at allowing previous commits with comments to be viewed in
>> their
>> entirety. And it allows the reviewer to differentiate between issues and
>> comments (ie fix this vs take note of this), plus it allows the notion of
>> marking stuff as fixed vs dropped, with a reason for dropping if needed.
>> So the
>> github improvements are nice but there's still a large and significant
>> gap that
>> is yet to be filled. I for one would miss all the features reviewboard
>> offers.
>> Unless there's a way of doing the same thing in github that I'm not aware
>> of.
>>
>> On 15/09/16 07:22, Tim Penhey wrote:
>> > I'm +1 if we can remove the extra tools and we don't get email per
>> comment.
>> >
>> > On 15/09/16 08:03, Nate Finch wrote:
>> >> In case you missed it, Github rolled out a new review process.  It
>> >> basically works just like reviewboard does, where you start a review,
>> >> batch up comments, then post the review as a whole, so you don't just
>> >> write a bunch of disconnected comments (and get one email per review,
>> >> not per comment).  The only features reviewboard has is the edge case
>> >> stuff that we rarely use:  like using rbt to post a review from a
>> random
>> >> diff that is not connected directly to a github PR. I think that is
>> easy
>> >> enough to give up in order to get the benefit of not needing an
>> entirely
>> >> separate system to handle reviews.
>> >>
>> >> I made a little test review on one PR here, and the UX was almost
>> >> exactly like working in reviewboard:
>> https://github.com/juju/juju/pull/6234
>> >>
>> >> There may be important edge cases I'm missing, but I think it's worth
>> >> looking into.
>> >>
>> >> -Nate
>> >>
>> >>
>> >
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Deploying Xenial charms in LXD? Read this

2016-09-08 Thread Andrew Wilkins
On Thu, Sep 8, 2016 at 9:23 PM Marco Ceppi 
wrote:

> Hey everyone,
>
> An issue was identified late yesterday for those deploying Xenial charms
> to either the LXD provider or LXD instances in a cloud.
>
> The symptoms for this manifest as the LXD machine running is in a running
> state (and has an IP address assigned) but the agent does not start. This
> leaves the workload permanently stuck in a "Waiting for agent to
> initialize" state. This problem originates from a problem in cloud-init and
> systemd, being triggered by an update to the snapd package for xenial.
>

FYI this is not LXD-specific. I've just now seen the same thing happen on
Azure.

Thanks to James Beedy, from the community, for posting this workaround
> which appears to be working consistently for the moment:
>
> juju set-model-config enable-os-refresh-update=false
> juju set-model-config enable-os-upgrade=false
>
>
> This should bypass the section of the cloud-init process that's causing
> the hang at the moment. For those interested in tracking the bugs I believe
> these are the two related ones for this problem:
>
> - https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1621229
> - https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1576692
>
> I'll make sure to post an update when this has been resolved.
>
> Thanks,
> Marco Ceppi
> --
> canonical-juju mailing list
> canonical-j...@lists.canonical.com
> Modify settings or unsubscribe at:
> https://lists.canonical.com/mailman/listinfo/canonical-juju
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju 2.0-beta17 is here!

2016-09-01 Thread Andrew Wilkins
On Fri, Sep 2, 2016 at 3:26 AM Marco Ceppi 
wrote:

> On Thu, Sep 1, 2016 at 3:10 PM Nicholas Skaggs <
> nicholas.ska...@canonical.com> wrote:
>
>> A new development release of Juju, 2.0-beta17, is here!
>>
>> ## What's new?
>>
>> * add-model now takes region name as an optional positional argument,
>>to be consistent with bootstrap. The --region flag has been removed.
>>
>
> If I read this correctly, I can bootstrap  eastus2 controller on Azure and
> create a model on westus? I was always under the impression that most
> clouds don't have cross region communication.
>

This was possible in beta16, just using a different CLI syntax. When you
create models in different regions, their agents will communicate with the
controller across the public network.
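
For example (a sketch using the syntax described in these notes; the exact
argument forms changed again before 2.0 final):

    juju bootstrap ctrl azure/eastus2    # controller in eastus2
    juju add-model west-model westus     # workload model in westus, same controller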

* show-controller now includes the agent version
>> * show-controllers has been removed as an alias to show-controller
>>
>> ## How do I get it?
>>
>> If you are running Ubuntu, you can get it from the juju devel ppa:
>>
>>  sudo add-apt-repository ppa:juju/devel
>>  sudo apt update; sudo apt install juju-2.0
>>
>> Or install it from the snap store
>>
>> snap install juju --beta --devmode
>>
>> Windows, CentOS, and OS X users can get a corresponding installer at:
>>
>>  https://launchpad.net/juju/+milestone/2.0-beta17
>>
>>
>> ## Feedback Appreciated!
>>
>> We encourage everyone to subscribe to the mailing list at
>> juju@lists.ubuntu.com and join us on #juju on freenode. We would love to
>> hear
>> your feedback and usage of juju.
>>
>>
>> ## Anything else?
>>
>> You can read more information about what's in this release by viewing the
>> release notes here:
>>
>> https://jujucharms.com/docs/devel/temp-release-notes
>>
>>
>>
>> --
>> Juju-dev mailing list
>> juju-...@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju 2.0-beta16 is here!

2016-08-25 Thread Andrew Wilkins
On Fri, Aug 26, 2016 at 4:05 AM Nicholas Skaggs <
nicholas.ska...@canonical.com> wrote:

> A new development release of Juju, 2.0-beta16, is here!
>
> ## How do I get it?
>
> If you are running Ubuntu, you can get it from the juju devel ppa:
>
> sudo add-apt-repository ppa:juju/devel
> sudo apt update; sudo apt install juju-2.0
>
> Or install it from the snap store
>
> snap install juju --beta --devmode
>
> Windows, CentOS, and OS X users can get a corresponding installer at:
>
> https://launchpad.net/juju/+milestone/2.0-beta16
>
> ## What's new?
>
> This release fixed 61 bugs and added some new and expanded features. Some
> of the notable changes include:
>
> * Juju rejects LXD 2.1 https://bugs.launchpad.net/bugs/1614559
> * debug-log usability changes
>  - only tails by default when running in a terminal
> --no-tail can be used to not tail from a terminal
> --tail can be used for force tailing when not on a terminal
>  - time output now defaults to local time (--utc flag added to show times
> in utc)
>  - filename and line number no longer shown by default (--location flag
> added to include location in the output)
>  - dates no longer shown by default (--date flag added to include dates in
> output)
> --ms flag added to show timestamps to millisecond precision
>  - severity levels and location now shown in color in the terminal
> --color option to force ansi color codes to pass to 'less -R'
> * controllers, models, and users commands now show the current controller and
> model respectively, using color as well as the asterisk
> * removal of smart formatter for CLI commands. Where 'smart' used to be
> the default, now it is 'yaml'.
> * controllers, models, and users commands now print the access level users
> have against each model/controller
> * juju whoami command prints the current controller/model/logged in user
> details
> * fix for LXD image aliases so that the images auto-update (when
> bootstrapping a new LXD cloud, images will be downloaded again the first
> time, even if cached images already exist)
> * Expanded controller and model permissions.
>
> Also, juju-2 has moved on launchpad to launchpad.net/juju.
> launchpad.net/juju-core will continue to be utilized for juju-1
> milestones and bug tracking.
>
>
> ## Feedback Appreciated!
>
> We encourage everyone to subscribe to the mailing list at
> juju@lists.ubuntu.com and join us on #juju on freenode. We would love to
> hear
> your feedback and usage of juju.
>
>
> ## Anything else?
>

There is something else I had meant to email the list about yesterday, but
it slipped my mind. The manual provider's "bootstrap-user" model config
attribute has been removed. Instead, you can now do:

juju bootstrap controller-name manual/user@host

And in your clouds.yaml, you can include "user@" in the endpoint for
manual-type clouds.
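
For example, a clouds.yaml entry with the user embedded in the endpoint
might look like this (cloud name, user, and host are placeholders):

    clouds:
      my-manual:
        type: manual
        endpoint: ubuntu@203.0.113.10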

Cheers,
Andrew

You can read more information about what's in this release by viewing the
> release notes here:
>
> https://jujucharms.com/docs/devel/temp-release-notes
>
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: DestroyModel, ModelInfo API deprecation

2016-08-15 Thread Andrew Wilkins
On Mon, Aug 15, 2016 at 2:13 PM Andrew Wilkins <andrew.wilk...@canonical.com>
wrote:

> Hi,
>
> We're making a handful of changes around model- and cloud-related APIs.
> Some of these will be broken in beta16, and some will be deprecated then
> and broken directly after.
>
> The first change is to ModelManager.DestroyModel. This method is now
> deprecated, and replaced with the pluralised ModelManger.DestroyModels,
> which takes a set of model tags. The old method will be removed after
> beta16 is out.
>
> The second change is to Client.ModelInfo. This was superseded a while ago
> by ModelManager.ModelInfo, which takes model tags. The Client.ModelInfo
> method will be removed after beta16 is out.
>
> There's a couple of other changes in the works:
>  - Cloud.CloudDefaults will be renamed to DefaultCloud, and just return
> the tag of a cloud
>

This has now been done as described. You can continue

 - Cloud.Credentials will be updated or replaced to avoid exposing
> credentials to the client
>

I will be updating the existing API to return new credential tags. Since
this is a new API that nobody should yet be using, I'll save the list from
more emails about it.

There will be one more small change, which is to the CloudCredential field
in the ModelManager.CreateModel parameters. If you're not filling in the
field (most likely), you don't need to do anything. If you are, then let me
know and I'll send you more details when they're finalised.

Cheers,
Andrew

I will reply to this thread when the details of these two have been
> finalised.
>
> Cheers,
> Andrew
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


DestroyModel, ModelInfo API deprecation

2016-08-15 Thread Andrew Wilkins
Hi,

We're making a handful of changes around model- and cloud-related APIs.
Some of these will be broken in beta16, and some will be deprecated then
and broken directly after.

The first change is to ModelManager.DestroyModel. This method is now
deprecated, and replaced with the pluralised ModelManger.DestroyModels,
which takes a set of model tags. The old method will be removed after
beta16 is out.

The second change is to Client.ModelInfo. This was superseded a while ago
by ModelManager.ModelInfo, which takes model tags. The Client.ModelInfo
method will be removed after beta16 is out.

There's a couple of other changes in the works:
 - Cloud.CloudDefaults will be renamed to DefaultCloud, and just return the
tag of a cloud
 - Cloud.Credentials will be updated or replaced to avoid exposing
credentials to the client
I will reply to this thread when the details of these two have been
finalised.

Cheers,
Andrew
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Upcoming CLI change for beta15

2016-08-08 Thread Andrew Wilkins
Hi folks,

We've just landed a change that will be part of beta15 which changes how
you refer to models owned by other users.

Model names may be reused by different users, e.g. alex and billie can each
have a model called "foo". Until beta15, there is no way for either alex or
billie to refer to each other's foo after being granted access.

As of beta15, you can now refer to others' models with the syntax
"owner/model". Just saying "foo" is short-hand for foo within the logged-in
user's namespace. e.g. If I'm alex, I can get the status of billie's model
foo with "juju status -m billie/foo".

Cheers,
Andrew
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Small script to connect to Juju's mongo in LXD

2016-07-27 Thread Andrew Wilkins
On Thu, Jul 28, 2016 at 12:32 AM John Meinel  wrote:

> Did you intend to attach the script to the email? It does sound like
> something useful. I know when we were investigating at some client sites we
> had a small snippet of a bash function to dig the content out of agent.conf
> and launch mongo with the right options. It would be nice to have that in a
> more official place so it doesn't get forgotten.
>

Kapil wrote a plugin for inspecting Mongo:
https://github.com/kapilt/juju-dbinspect. It's almost certainly broken in
Juju 2.0. I've found it handy in the past, it'd be good to have that
brought up to date.
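
For anyone who wants to reconstruct the snippet John describes, a rough
sketch (the agent.conf key, mongo port, and TLS flags are my assumptions
and vary across Juju versions):

    agent=$(ls -d /var/lib/juju/agents/machine-* | head -n1)
    password=$(sudo awk '/^statepassword:/ {print $2}' "$agent/agent.conf")
    mongo --ssl --sslAllowInvalidCertificates \
          --authenticationDatabase admin \
          -u "$(basename "$agent")" -p "$password" \
          localhost:37017/juju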

Cheers,
Andrew


> John
> =:->
>
>
> On Wed, Jul 27, 2016 at 6:19 PM, Katherine Cox-Buday <
> katherine.cox-bu...@canonical.com> wrote:
>
>> I frequently need to connect to Juju's Mongo instance to poke around and
>> see if something I've done is having the desired effect. Back when we were
>> using LXC, I had a script that would pull the password from agent.conf and
>> open a shell. When we switched to LXD my script broke, and I never updated
>> it. I finally got frustrated enough to modify[1] it, and thought others
>> might find this useful for poking around Mongo.
>>
>> Let me know if you have any suggestions.
>>
>> --
>> Katherine
>>
>> [1] - http://pastebin.ubuntu.com/21155985/
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: authorized-keys is now optional

2016-07-08 Thread Andrew Wilkins
On Fri, Jul 8, 2016 at 8:30 PM Rick Harding <rick.hard...@canonical.com>
wrote:

> Thanks Andrew. Do we hvae some hinting error messages in place for when a
> user attempts to juju ssh, juju run, etc and the ssh key is not set for the
> user that leads them to the add-ssh-key commands?
>

No. I've filed https://bugs.launchpad.net/juju-core/+bug/1600221.


> Thanks
>
> Rick
>
> On Fri, Jul 8, 2016 at 12:15 AM Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
>> Hi users of the add-model API,
>>
>> The authorized-keys config is now only required at bootstrap time,
>> because bootstrapping involves an SSH step. This means you no longer need
>> to specify authorized-keys in your config for add-model.
>>
>> The Juju CLI will now automatically read ~/.ssh/id_rsa.pub and friends
>> into authorized-keys when adding a model, just as bootstrap does. If no
>> public keys are found, a warning will be displayed. You can still add keys
>> later using the "juju add-ssh-key" command.
>>
>> If you've been specifying a nonsense authorized-keys value just to get
>> add-model to work (hi Juju GUI), then please change your code to not pass
>> anything. At the moment we do not validate the input, but we may want to
>> change that later on.
>>
>> (This is on master, and will go into 2.0-beta12.)
>>
>> Cheers,
>> Andrew
>>
> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


authorized-keys is now optional

2016-07-07 Thread Andrew Wilkins
Hi users of the add-model API,

The authorized-keys config is now only required at bootstrap time, because
bootstrapping involves an SSH step. This means you no longer need to
specify authorized-keys in your config for add-model.

The Juju CLI will now automatically read ~/.ssh/id_rsa.pub and friends into
authorized-keys when adding a model, just as bootstrap does. If no public
keys are found, a warning will be displayed. You can still add keys later
using the "juju add-ssh-key" command.

If you've been specifying a nonsense authorized-keys value just to get
add-model to work (hi Juju GUI), then please change your code to not pass
anything. At the moment we do not validate the input, but we may want to
change that later on.

(This is on master, and will go into 2.0-beta12.)

Cheers,
Andrew
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: "juju attach"

2016-06-19 Thread Andrew Wilkins
On Sun, Jun 19, 2016 at 1:51 AM Mark Shuttleworth  wrote:

> On 18/06/16 16:45, Marco Ceppi wrote:
> >
> > Why not just "add" which is already pretty well used. Throughout the
> > Juju CLI.
> >
>
> Well, I think the context here is more attach-a-disk. I really like
> Roger's suggestion of 'mount', it immediately brings all the right
> elements into focus, and it's only one syllable.
>

I concur that add is not really appropriate here, and that mount makes a
lot of sense for storage (thanks for the suggestion, Roger). For me,
"attach" does not have similar connotations re resources, but at least
there's no looming danger of conflict with storage. I can't think of a
better single word, either.

We've got hooks called "storage-attached", and "storage-detaching", which
are in use by existing charms, and baked into charm helpers and probably
elsewhere. I don't think we can/should change these, or we'll have charms
that work with 2.0 but not 1.25, and vice versa. I think we should change
the statuses though, to keep the user-visible parts aligned.

Cheers,
Andrew


> Mark
>
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: "juju attach"

2016-06-19 Thread Andrew Wilkins
On Sun, Jun 19, 2016 at 1:51 AM Mark Shuttleworth  wrote:

> On 18/06/16 16:45, Marco Ceppi wrote:
> >
> > Why not just "add" which is already pretty well used. Throughout the
> > Juju CLI.
> >
>
> Well, I think the context here is more attach-a-disk. I really like
> Roger's suggestion of 'mount', it immediately brings all the right
> elements into focus, and it's only one syllable.
>

I concur that add is not really appropriate here, and that mount makes a
lot of sense for storage (thanks for the suggestion, Roger). For me,
"attach" does not have similar connotations re resources, but at least
there's no looming danger of conflict with storage. I can't think of a
better single word, either.

We've got hooks called "storage-attached", and "storage-detaching", which
are in use by existing charms, and baked into charm helpers and probably
elsewhere. I don't think we can/should change these, or we'll have charms
that work with 2.0 but not 1.25, and vice versa. I think we should change
the statuses though, to keep the user-visible parts aligned.

Cheers,
Andrew


> Mark
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


"juju attach"

2016-06-17 Thread Andrew Wilkins
Hi folks,

A couple of days ago I started looking at charming TitanDB. I was looking
at using resources to store the TitanDB distribution, and while looking at
the docs [0] I found a rather surprisingly named command: "juju attach".

The verb "attach" by itself tells me that, presumably, it will attach
something to something else. It doesn't tell me what. And because it's
non-specific, it rules out the possibility of multiple unrelated
attach-type commands.

We always intended to give the user a means of attaching floating storage
to a unit/machine. i.e. I should be able to detach and reattach storage to
units in a model, so long as the storage provider supports it. I think the
commands would naturally be called "attach-storage" and "detach-storage".

Can we rename attach to attach-resource? Note that there's also "charm
attach". That may benefit from a similar rename too, at least to be
consistent.

Cheers,
Andrew

[0] https://jujucharms.com/docs/devel/developer-resources
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: juju: how to get Admin password for windows

2016-06-14 Thread Andrew Wilkins
On Tue, Jun 14, 2016 at 11:45 PM Геннадий Дубина 
wrote:

> Hi everybody,
>
> We are using Juju (v2.0) to deploy a Windows machine to OpenStack (Nova).
> We use the Windows 2012 R2 image from Cloudbase.
>
> The machine was created successfully,
> but I don't know how to get the Admin user's password.
>
> I have found one option: "nova get-password 
> "
> but it doesn't work in the Juju case; it seems no private key is provided
> for this machine.
> The "key name" field is "None" for this machine on the OpenStack details page.
>
> Also, "juju ssh " doesn't work for Windows machines either.
>
> So how do I get the password for this machine? We need RDP access.
>

You should be able to do something like:
juju run --machine  "net user JujuAdministrator "
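
Filled in with hypothetical values (machine 0 and a made-up password), that
would be:

    juju run --machine 0 "net user JujuAdministrator N3wPassw0rd"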

Thanks,
> --
> Gennadiy Dubina
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Regarding Synnefo environment provider for Juju

2016-06-13 Thread Andrew Wilkins
On Mon, Jun 13, 2016 at 7:45 PM Thodoris Sotiropoulos <
theo...@admin.grnet.gr> wrote:

> I checked the entries of the database after the error. Specifically,
> this is the document for the Model collection.
>
> [
>  {
>  "_id" : "d96ef8ae-36ca-4af7-870f-7914d844df68",
>  "life" : 0,
>  "migration-mode" : "",
>  "name" : "test",
>  "owner" : "admin@local",
>  "server-uuid" : "d96ef8ae-36ca-4af7-870f-7914d844df68",
>  "txn-queue" : [
>  "575e976cba3f0466055d8654_fff99943",
>  "575e976fba3f0466055d865a_33f7ddf5"
>  ],
>  "txn-revno" : NumberLong(2)
>  }
> ]
>
> Therefore, there is already one record (this is the controller model,
> right?) in the db before the command `err = newState.runTransaction(ops)`
> at `state/model.go` line 192 runs.
> Is the error associated with the name of the controller model? I imagine
> that it shouldn't be named "test", right?
>

Correct. The controller model should be called "controller".

I just bootstrapped with:
juju bootstrap --upload-tools --debug ctrl lxd -d test

and "juju list-models" gives me:
MODEL   OWNERSTATUS LAST CONNECTION
controller  admin@local  available  never connected
test*   admin@local  available  3 seconds ago

I'm curious where the "test" is coming from in your case. I couldn't see
anything obvious in the provider code.



> Thodoris
>
> On 06/13/2016 01:33 AM, Tim Penhey wrote:
> > Hi Thodoris,
> >
> > I had a quick look through the current code in the branch, and nothing
> > is obvious there.
> >
> > What is the command line that you are running?
> >
> > The failure is due to the database already thinking there is a model
> > that exists for the same user with the same name. Although now what we
> > normally create is two models on bootstrap:
> >  * controller - where the machines running Juju live (apiserver and db)
> >  * default - where the user is left to deploy workloads.
> >
> > The error is that the model name is "test". This shouldn't be the case
> > as the database should be empty.
> >
> > Tim
> >
> > On 08/06/16 23:28, Thodoris Sotiropoulos wrote:
> >> Hi all,
> >>
> >> You may remember previous e-mails sent by my partner Stavros Sachtouris
> >> regarding
> >> the case of implementing a Juju environment provider for our open source
> >> IaaS called
> >> Synnefo.
> >>
> >> We have started implementation of the basics (configuration schema,
> >> instance creation,
> >> instance queries, preparation of environment, etc). Our goal is to make
> >> a proof
> >> of concept implementation of the bootstrap command and that's why we
> >> have ignored
> >> networking and storage configuration (i.e. mocked them).
> >>
> >> So far, we have managed to create and communicate with a machine
> >> instance. However,
> >> during the last step of the bootstrap process (insertion of the admin model
> >> into the database)
> >> we are facing an unexpected problem (method `NewModel` of
> >> `state/model.go`).
> >>
> >> Here is the corresponding log file:
> >> https://pithos.okeanos.grnet.gr/public/8EHM5jpEm2W7bSwly9wFG
> >>
> >> I tried to investigate the problem but I cannot figure out why I get the
> >> `model %s for %q already exists`. What am I missing? I would
> >> appreciate any
> >> help or guidance.
> >>
> >> Thank you in advance
> >>
> >> Thodoris Sotiropoulos
> >> Developer @ GRNET
> >> theo...@grnet.gr
> >>
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Model config

2016-06-08 Thread Andrew Wilkins
On Wed, Jun 8, 2016 at 9:01 PM Mark Shuttleworth <m...@ubuntu.com> wrote:

>
>  juju set-model-defaults
>

I was mostly wondering whether we should have model defaults, or things
that we'd set at a common level *without* the ability to set on a
per-model basis to keep things compartmentalised. Once
cloud/region/endpoint and credential attributes are separated from model
config, there aren't *that* many things that make sense to have common
across models.

Anyway, based on Nicolas's response and other discussions the dev team has
had internally, we'll go ahead with defaults with the ability to override.
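
To make that concrete (hypothetical syntax, since the command is only being
proposed in this thread):

    juju set-model-defaults apt-http-proxy=http://squid.internal:3128  # all models
    juju set-model-config apt-http-proxy=http://squid.other:3128       # one model's override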

Thanks,
Andrew


>  juju set-model-config
>  juju set-controller-config
>
> Have we a strong preference for get/set names, or could we just use
> "model-config" and "model-defaults" as read/write commands?
>
>
> Mark
>
>
> On 08/06/16 18:41, Andrew Wilkins wrote:
>
> Hi folks,
>
> We're in the midst of making some changes to model configuration in Juju
> 2.0, separating out things that are not model specific from those that are. 
> For
> many things this is very clear-cut, and for other things not so much.
>
> For example, api-port and state-port are controller-specific, so we'll be
> moving them from model config to a new controller config collection. The
> end goal is that you'll no longer see those when you type "juju
> get-model-config" (there will be a separate command to get controller
> attributes such as these), though we're not quite there yet.
>
> We also think there are some attributes that people will want to set
> across all models, but are not necessarily related to the *controller*. For
> example, http-proxy, apt-http-proxy, and their siblings. I expect that if
> anyone is setting these particular attributes, they are doing so for *all*
> models, as they're operating within a private cloud with limited network
> access.
>
> Does anyone have a real, uncontrived use-case for configuring proxy
> settings on a per-model basis?
>
> Cheers,
> Andrew
>
>
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Model config

2016-06-08 Thread Andrew Wilkins
Hi folks,

We're in the midst of making some changes to model configuration in Juju
2.0, separating out things that are not model specific from those that are. For
many things this is very clear-cut, and for other things not so much.

For example, api-port and state-port are controller-specific, so we'll be
moving them from model config to a new controller config collection. The
end goal is that you'll no longer see those when you type "juju
get-model-config" (there will be a separate command to get controller
attributes such as these), though we're not quite there yet.

We also think there are some attributes that people will want to set across
all models, but are not necessarily related to the *controller*. For
example, http-proxy, apt-http-proxy, and their siblings. I expect that if
anyone is setting these particular attributes, they are doing so for *all*
models, as they're operating within a private cloud with limited network
access.

Does anyone have a real, uncontrived use-case for configuring proxy
settings on a per-model basis?

Cheers,
Andrew
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Machine agents uninstall themselves upon worker.ErrTerminateAgent.

2016-05-09 Thread Andrew Wilkins
On Mon, May 9, 2016 at 4:38 PM William Reade <william.re...@canonical.com>
wrote:

> On Mon, May 9, 2016 at 9:56 AM, Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
>> The reason it's done at the last moment is to avoid having dangling
>>> database entries. If we uninstall the agent (i.e. delete /var/lib/juju,
>>> remove systemd/upstart), then if the agent fails before we get to
>>> EnsureDead, then the entity will never be removed from state.
>>>
>>
>> The *only* thing that should happen after setting dead is the uninstall
>> -- anything else that's required to happen before cleanup *must* happen
>> before setting dead, which *means* "all my responsibilities are 100%
>> fulfilled".
>>
>
> > I don't think I suggested above that we should do anything else other
> than uninstall?
>
> Sorry I misunderstood -- I was focused on the loop-device stuff that had
> grown in the uninstall logic.
>
>
>> The *only* justification for the post-death logic in the manual case is
>>> because there's no responsible provisioner component to hand over to -- and
>>> frankly I wish we'd just written that to SSH in and clean up, instead of
>>> taking on this ongoing hassle.
>>>
>>
>>>
>> As an alternative, we could (should) only ever write the
>>>> /var/lib/juju/uninstall-agent file from worker/machiner, first checking
>>>> there's no assigned units, and no storage attached.
>>>>
>>>
>>> Why would we *ever* want to write it at runtime? We know if it's a
>>> manual machine at provisioning time, so we can write the File Of Death
>>> OAOO. All the other mucking about with it is the source of these (serious!)
>>> bugs.
>>>
>>
>> The point is not to distinguish between manual vs. non-manual. Yes, we
>> can write something that records that fact OAOO.
>>
>> The point of "write from the machiner" was to signal that the machine is
>> actually dead, and removed from state, vs. "my credentials are invalid,
>> better shut down now".
>>
>
> Yeah, we definitely shouldn't return ErrTerminateAgent from apicaller on
> mere invalid-credentials. (No worker should return ErrTerminateAgent *at
> all*, really -- not their job. Having a couple of different workers that
> can return ErrAgentEntityDead, which can then be interpreted by the next
> layer? Fine :).)
>
>
>> So we can write a file to confine uninstall to manual machines -- that
>> much is easy, I don't think anyone will disagree with doing that. But we
>> should not ignore the bug that prompted this thread, even if it's confined
>> to manual machines.
>>
>
> Right; and, yeah, it's almost certainly better to leak manual machines
> than it is to uninstall them accidentally -- so long as we know that's the
> tradeoff we're making ;).
>
>
>> Andrew, I think you had more detail last time we discussed this: is there
>>>>> anything else in uninstall (besides loop-device stuff) that needs to run
>>>>> *anywhere* except a manual machine? and, what will we actually need to 
>>>>> sync
>>>>> with in the machiner? (or, do you have alternative ideas?)
>>>>>
>>>>
>>>> No, I don't think there is anything else to be done in uninstall, apart
>>>> from loop detach and manual machine cleanup. I'm not sure about moving the
>>>> uninstall logic to the machiner, for reasons described above. We could
>>>> improve the current state of affairs, though, by only writing the
>>>> uninstall-agent file from the machiner
>>>>
>>>
>>> Strong -1 on moving uninstall logic: if it has to happen (which it does,
>>> in *rare* cases that are *always* detectable pre-provisioning), uninstall
>>> is where it should happen, post-machine-death; and also strong -1 on
>>> writing uninstall-agent in *any* circumstances except manual machine
>>> provisioning, we have had *way* too many problems with this "clever"
>>> feature being invoked when it shouldn't be.
>>>
>>
>> I don't want to belabour the point, but just to be clear: the
>> uninstall-agent file exists to record the fact that he machine is in fact
>> Dead, and uninstall should go ahead. That logic was put in specifically to
>> prevent the referenced bug. We can and should improve it to only do this
>> for manual machines.
>>
>
> Meta: a name like "uninstall-agent" is really misleading if it actually
> means "machine-X is definitely d

Re: Machine agents uninstall themselves upon worker.ErrTerminateAgent.

2016-05-09 Thread Andrew Wilkins
On Mon, May 9, 2016 at 2:28 PM William Reade <william.re...@canonical.com>
wrote:

> On Mon, May 9, 2016 at 3:28 AM, Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
>> On Sat, May 7, 2016 at 1:37 AM William Reade <william.re...@canonical.com>
>> wrote:
>>
>>> On Fri, May 6, 2016 at 5:50 PM, Eric Snow <eric.s...@canonical.com>
>>> wrote:
>>>
>>>> See https://bugs.launchpad.net/juju-core/+bug/1514874.
>>>
>>>
>> So I think this issue is fixed in 2.0, but looks like the changes never
>> got backported to 1.25. From your options, we do have (the opposite of) a
>> DO_NOT_UNINSTALL file (it's actually called
>> "/var/lib/juju/uninstall-agent"; only if it exists do we uninstall).
>>
>> (And now that I think of it, we're only writing uninstall-agent for the
>> manual provider's bootstrap machine, and not other manual machines, so
>> we're currently leaving Juju bits behind on manual machines added to an
>> environment.)
>>
>
> Except we're *also* writing it on every machine, for Very Bad Reasons,
> right? So we *are* still cleaning up all machines, but there's a latent
> manual provider bug that'll need addressing.
>

Yes, sorry, it does appear that we're doing it on all machines. Disregard
my parenthetical remark. And yes, we should really only write that file for
manual machines.

But... I've just looked at the 1.25 branch again, and the fix *was* made
there. And from Jorge's comment
https://bugs.launchpad.net/juju-core/+bug/1514874/comments/4, we can see
that the uninstall logic isn't actually running (see `uninstall file
"/var/lib/juju/uninstall-agent" does not exist`
https://github.com/juju/juju/blob/1.25/cmd/jujud/agent/machine.go#L1741)

I'm not sure what to make of that. Eric, have you confirmed that that code
is what's causing the issue? Are we sure we're not barking up the wrong
tree?

The reason it's done at the last moment is to avoid having dangling
>> database entries. If we uninstall the agent (i.e. delete /var/lib/juju,
>> remove systemd/upstart), then if the agent fails before we get to
>> EnsureDead, then the entity will never be removed from state.
>>
>
> The *only* thing that should happen after setting dead is the uninstall --
> anything else that's required to happen before cleanup *must* happen before
> setting dead, which *means* "all my responsibilities are 100% fulfilled".
>

I don't think I suggested above that we should do anything else other than
uninstall?

The *only* justification for the post-death logic in the manual case is
> because there's no responsible provisioner component to hand over to -- and
> frankly I wish we'd just written that to SSH in and clean up, instead of
> taking on this ongoing hassle.
>

>
As an alternative, we could (should) only ever write the
>> /var/lib/juju/uninstall-agent file from worker/machiner, first checking
>> there's no assigned units, and no storage attached.
>>
>
> Why would we *ever* want to write it at runtime? We know if it's a manual
> machine at provisioning time, so we can write the File Of Death OAOO. All
> the other mucking about with it is the source of these (serious!) bugs.
>

The point is not to distinguish between manual vs. non-manual. Yes, we can
write something that records that fact OAOO.

The point of "write from the machiner" was to signal that the machine is
actually dead, and removed from state, vs. "my credentials are invalid,
better shut down now".

So we can write a file to confine uninstall to manual machines -- that much
is easy, I don't think anyone will disagree with doing that. But we should
not ignore the bug that prompted this thread, even if it's confined to
manual machines.

Andrew, I think you had more detail last time we discussed this: is there
>>> anything else in uninstall (besides loop-device stuff) that needs to run
>>> *anywhere* except a manual machine? and, what will we actually need to sync
>>> with in the machiner? (or, do you have alternative ideas?)
>>>
>>
>> No, I don't think there is anything else to be done in uninstall, apart
>> from loop detach and manual machine cleanup. I'm not sure about moving the
>> uninstall logic to the machiner, for reasons described above. We could
>> improve the current state of affairs, though, by only writing the
>> uninstall-agent file from the machiner
>>
>
> Strong -1 on moving uninstall logic: if it has to happen (which it does,
> in *rare* cases that are *always* detectable pre-provisioning), uninstall
> is where it should happen, post-machine-death; and also strong -1 on
> writing uninstall-agent in *any* circumstances except manual machine
&

Re: Machine agents uninstall themselves upon worker.ErrTerminateAgent.

2016-05-08 Thread Andrew Wilkins
On Sat, May 7, 2016 at 1:37 AM William Reade 
wrote:

> On Fri, May 6, 2016 at 5:50 PM, Eric Snow  wrote:
>
>> See https://bugs.launchpad.net/juju-core/+bug/1514874.
>
>
So I think this issue is fixed in 2.0, but looks like the changes never got
backported to 1.25. From your options, we do have (the opposite of) a
DO_NOT_UNINSTALL file (it's actually called
"/var/lib/juju/uninstall-agent"; only if it exists do we uninstall).

(And now that I think of it, we're only writing uninstall-agent for the
manual provider's bootstrap machine, and not other manual machines, so
we're currently leaving Juju bits behind on manual machines added to an
environment.)

Mainly, we just shouldn't do it. The only *actual reason* for us to do this
> is to support the manual provider; any other machine that happens to be
> dead will be cleaned up by the responsible provisioner in good time.
> There's a file we write to enable the suicidal behaviour when we enlist a
> manual machine, and we *shouldn't* have ever written it for any other
> reason.
>
> But then people started adding post-death cleanup logic to the agent, and
> the only way to trigger that was to write that file; so they started
> writing that file on normal shutdown paths, so that they could trigger the
> post-death logic in all cases, and I can only assume they decided that
> nuking the installation was acceptable precisely *because* the provisioner
> would be taking down the machine anyway. And note that because there *is* a
> responsible provisioner that will be taking the machine down, that shutdown
> logic might or might not happen, rendering it ultimately pretty unreliable
> [0] as well as spectacularly terrible in the lp:1514874 cases. /sigh
>
> So, narrowly, fixing this involves relocating the more-widely-useful
> (in-container loop device detach IIRC?) code inside MachineAgent.uninstall;
> and then just not ever writing the file that triggers actual uninstall,
> putting it back to where it was intended: a nasty but constrained hack to
> avoid polluting manual machines (which are only "borrowed") any more than
> necessary.
>
> (A bit more broadly: ErrTerminateAgent is kinda dumb, but not
> *particularly* harmful in practice [1] without the file of death backing it
> up. The various ways in which, e.g. api connection failure can trigger it
> are a bit enthusiastic, perhaps, but if *all it did was terminate the
> agent* it'd be fine: someone who, e.g. changed their agent.conf's password
> would invoke jujud and see it stop -- can't do anything with this config --
> but be perfectly able to work again if the conf were fixed. Don't know what
> the deal is with the correct-password-refused thing, though: that should be
> investigated further.)
>
> The tricky bit is relocating the cleanup code in uninstall -- I suspect
> the reason this whole sorry saga kicked off is because whoever added it was
> in a hurry and thought they didn't have a place to put this logic (inside
> the machiner, *before* setting Dead, would be closest to correct -- but
> that'd require us to synchronise with any other workers that might use the
> resources we want to clean up, and I suspect that would have been the
> roadblock).
>

The reason it's done at the last moment is to avoid having dangling
database entries. If we uninstall the agent (i.e. delete /var/lib/juju,
remove systemd/upstart), then if the agent fails before we get to
EnsureDead, then the entity will never be removed from state.

As an alternative, we could (should) only ever write the
/var/lib/juju/uninstall-agent file from worker/machiner, first checking
that there are no assigned units and no storage attached.
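
Sketched out, that would look something like the following; the Machine
interface here is invented for illustration, not the real machiner API:

    // Write the uninstall-agent marker from worker/machiner, but only
    // once we know nothing else depends on the machine.
    package machiner

    import (
        "io/ioutil"
    )

    const uninstallAgentFile = "/var/lib/juju/uninstall-agent"

    // Machine is a stand-in for the machiner's view of its machine.
    type Machine interface {
        HasAssignedUnits() (bool, error)
        HasAttachedStorage() (bool, error)
    }

    func markForUninstall(m Machine) error {
        units, err := m.HasAssignedUnits()
        if err != nil {
            return err
        }
        storage, err := m.HasAttachedStorage()
        if err != nil {
            return err
        }
        if units || storage {
            // Not safe to strip the machine; leave the marker unwritten.
            return nil
        }
        return ioutil.WriteFile(uninstallAgentFile, nil, 0644)
    }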

Andrew, I think you had more detail last time we discussed this: is there
> anything else in uninstall (besides loop-device stuff) that needs to run
> *anywhere* except a manual machine? and, what will we actually need to sync
> with in the machiner? (or, do you have alternative ideas?)
>

No, I don't think there is anything else to be done in uninstall, apart
from the loop detach and manual machine cleanup. I'm not sure about moving
the uninstall logic into the machiner, for the reasons described above. We
could improve the current state of affairs, though, by only writing the
uninstall-agent file from the machiner.

FWIW, the loop stuff can be dropped when LXC container support is removed.
Nobody ever added support for loop storage in the LXD provider, and I
think we should implement it differently from how it was done for LXC
anyway: run losetup on the host and expose the resulting device to the
container, as opposed to exposing all loop devices to all LXD containers
and running losetup inside the container.
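
To make the distinction concrete, the host-side approach would look
roughly like this, shelling out to losetup and the lxc CLI (a sketch only;
the container name and backing-file path are made up):

    // Sketch of "losetup on host, expose to container".
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Attach the backing file to a free loop device on the host.
        out, err := exec.Command(
            "losetup", "--find", "--show",
            "/var/lib/juju/storage/loop/vol0.img",
        ).Output()
        if err != nil {
            panic(err)
        }
        loopDev := strings.TrimSpace(string(out))

        // Expose just that one device to the container, rather than
        // exposing every loop device and running losetup inside it.
        err = exec.Command(
            "lxc", "config", "device", "add",
            "juju-machine-0", "vol0", "unix-block", "path="+loopDev,
        ).Run()
        if err != nil {
            panic(err)
        }
        fmt.Println("attached", loopDev)
    }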

Cheers,
Andrew

Cheers
> William
>
> [0] although with the watcher resolution of 5s, it's actually got a pretty
> good chance of running.
> [1] still utterly terrible in theory, it would be lovely to see explicit
> handover between contexts -- e.g. it is *not* the machiner's job to return
> ErrTerminateAgent, it is 

Reminder: write tests fail first

2016-05-04 Thread Andrew Wilkins
See: https://bugs.launchpad.net/juju-core/+bug/1578456

Cheers,
Andrew
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Static Analysis tests

2016-05-02 Thread Andrew Wilkins
On Tue, May 3, 2016 at 10:43 AM John Meinel  wrote:

>
> >
> > We should investigate using go/build.Context.UseAllFiles to ignore the
> build tags. As long as there are no "//+build ignore" files lurking, and as
> long as we're not trying to do type-checking, that should be fine.
> >
>
> I'm curious how this works, as we often use build flags to create
> different versions of a function for different platforms. (E.g. for lxd we
> had a few functions that had real implementations with a "+build go1.3"
> flag and then stubs that returned false with the !go1.3 flag. I believe we
> did the same thing for windows/osx support in some of the version code.)
>

IIANM, Nate's test operates at the AST level, and doesn't really need to
operate on a package as a whole (it's checking for direct references to a
tls.Config, I think?). If it doesn't need to be a valid package, then we
can analyse the individual files, and thus the platform-specific
alternatives, separately.
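
To illustrate the kind of per-file check I mean (a sketch, not Nate's
actual test):

    // Report composite literals of the form tls.Config{...} in a single
    // file. Parsing files one at a time means build tags never apply.
    package main

    import (
        "fmt"
        "go/ast"
        "go/parser"
        "go/token"
        "os"
    )

    func main() {
        fset := token.NewFileSet()
        f, err := parser.ParseFile(fset, os.Args[1], nil, 0)
        if err != nil {
            panic(err)
        }
        ast.Inspect(f, func(n ast.Node) bool {
            lit, ok := n.(*ast.CompositeLit)
            if !ok {
                return true
            }
            // Purely syntactic: no type checking is needed, which is
            // exactly why per-file analysis is enough here.
            if sel, ok := lit.Type.(*ast.SelectorExpr); ok {
                if id, ok := sel.X.(*ast.Ident); ok &&
                    id.Name == "tls" && sel.Sel.Name == "Config" {
                    fmt.Println(fset.Position(lit.Pos()),
                        "direct tls.Config literal")
                }
            }
            return true
        })
    }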

Having said that, we should not discount the possibility of higher level
analyses that do want valid packages. Those will probably need to be
per-platform, as Nate says.

Cheers,
Andrew

>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Static Analysis tests

2016-05-02 Thread Andrew Wilkins
On Tue, May 3, 2016 at 2:47 AM John Meinel <j...@arbash-meinel.com> wrote:

> Interestingly, putting it in "verify.sh" does have some fallout as well.
> At least on my machines, verify is already slow enough to cause GitHub to
> time out while I'm pushing revisions. So I actually have to run it manually,
> and then git push --no-verify if I actually want it to complete. Adding 16s
> to that is a pretty serious overhead (I don't know the specifics, but I
> think GitHub times out at around 30s).
>

Agreed, it is getting a bit slow.
verify.sh -short? :)
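
For reference, the gocheck side of a -short gate is tiny. A sketch,
assuming gopkg.in/check.v1 and an invented suite name:

    package devrules_test

    import (
        stdtesting "testing"

        gc "gopkg.in/check.v1"
    )

    func Test(t *stdtesting.T) { gc.TestingT(t) }

    type analysisSuite struct{}

    var _ = gc.Suite(&analysisSuite{})

    // Skipping in SetUpSuite skips every test in the suite, so the
    // expensive tree parse below is never reached under -short.
    func (s *analysisSuite) SetUpSuite(c *gc.C) {
        if stdtesting.Short() {
            c.Skip("static analysis is slow; skipped with -short")
        }
        // ... parse the tree once here ...
    }

    func (s *analysisSuite) TestNoRawTLSConfig(c *gc.C) {
        // ... the actual check ...
    }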

John
> =:->
>
>
> On Mon, May 2, 2016 at 6:10 PM, Nate Finch <nate.fi...@canonical.com>
> wrote:
>
>> I think it's a good point that we may only want to run some tests once,
>> not once per OS/Arch.  However, I believe these particular tests are still
>> limited in scope by the OS/arch of the host machine.  The go/build package
>> respects the build tags in files, so in theory, one run of this on Ubuntu
>> could miss code that is +build windows.  (we could give go/build different
>> os/arch constraints, but then we're back to running this N times for N
>> os/arch variants)
>>
>> I'm certainly happy to have these tests run in verify.sh, but it sounds
>> like they may need to be run per-OS testrun as well.
>>
>> On Sun, May 1, 2016 at 10:27 PM Andrew Wilkins <
>> andrew.wilk...@canonical.com> wrote:
>>
>>> On Thu, Apr 28, 2016 at 11:48 AM Nate Finch <nate.fi...@canonical.com>
>>> wrote:
>>>
>>>> Maybe we're not as far apart as I thought at first.
>>>>
>>>
>>>
>>>> My thought was that they'd live under github.com/juju/juju/devrules
>>>> (or some other name) and therefore only get run during a full test run or
>>>> if you run them there specifically.  What is a full test run if not a test
>>>> of all our code?  These tests just happen to test all the code at once,
>>>> rather than piece by piece.  Combining with the other thread, if we also
>>>> marked them as skipped under -short, you could easily still run go test
>>>> ./... -short from the root of the juju repo and not incur the extra 16.5
>>>> seconds (gocheck has a nice feature where if you call c.Skip() in the
>>>> SetUpSuite, it skips all the tests in the suite, which is particularly
>>>> appropriate to these tests, since it's the SetUpSuite that takes all the
>>>> time).
>>>>
>>>
>>> I'm not opposed to using the Go testing framework in this instance,
>>> because it makes most sense to write the analysis code in Go. That may not
>>> always be the case, though, and I don't want to have a rule of "everything
>>> as Go tests" that means we end up shoe-horning things. This is just
>>> academic until we need something that doesn't live in the Go ecosystem,
>>> though.
>>>
>>> Most importantly, I don't want to lose the ability to distinguish the
>>> types of tests. As an example: where we run static analysis should
>>> never matter, so we can cut a merge job short by performing all of the
>>> static analysis checks up front. That doesn't matter much if we only gate
>>> merges on running the tests on one Ubuntu series/arch; but what if we want
>>> to start gating on Windows, CentOS, or additional architectures? It would
>>> not make sense to run them all in parallel if they're all going to fail on
>>> the static analysis tests. And then if we've run them up front, it would be
>>> ideal to not have to run them on the individual test machines.
>>>
>>> So I think it would be OK to have a separate "devrules" package, or
>>> whatever we want to call it. I would still like these tests to be run by
>>> verify.sh, so we have one place to go to check that the source code is
>>> healthy, without also running the unit tests or feature tests. If we have a
>>> separate package like this, test tags are not really necessary in the short
>>> term -- the distinction is made by separating the tests into their own
>>> package. We could still mark them as short/long, but that's orthogonal to
>>> separation-by-purpose.
>>>
>>> Cheers,
>>> Andrew
>>>
>>> Mostly, I just didn't want them to live off in a separate repo or run
>>>> with a separate tool.
>>>>
>>>> On Wed, Apr 27, 2016 at 11:39 PM Andrew Wilkins <
>>>> andrew.wilk...@canonical.com> wrote:
>>>>
>>>>> On Thu, Apr 28, 2016 at 11:14 AM Nat

Re: Static Analysis tests

2016-05-02 Thread Andrew Wilkins
On Mon, May 2, 2016 at 10:10 PM Nate Finch <nate.fi...@canonical.com> wrote:

> I think it's a good point that we may only want to run some tests once,
> not once per OS/Arch.  However, I believe these particular tests are still
> limited in scope by the OS/arch of the host machine.  The go/build package
> respects the build tags in files, so in theory, one run of this on Ubuntu
> could miss code that is +build windows.  (we could give go/build different
> os/arch constraints, but then we're back to running this N times for N
> os/arch variants)
>
> I'm certainly happy to have these tests run in verify.sh, but it sounds
> like they may need to be run per-OS testrun as well.
>

We should investigate using go/build.Context.UseAllFiles to ignore the
build tags. As long as there are no "//+build ignore" files lurking, and as
long as we're not trying to do type-checking, that should be fine.
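
Something like this, for the record (a sketch; note the caveat above
about "//+build ignore" files, which would be swept in too):

    // List every .go file in a directory, ignoring build constraints,
    // so windows/linux/etc. variants all show up in a single pass.
    package main

    import (
        "fmt"
        "go/build"
        "os"
    )

    func main() {
        ctx := build.Default
        ctx.UseAllFiles = true // include files regardless of build tags
        pkg, err := ctx.ImportDir(os.Args[1], 0)
        if err != nil {
            // Mixed package names (e.g. a +build ignore script in the
            // same directory) will land here.
            panic(err)
        }
        fmt.Println(pkg.GoFiles)
    }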


> On Sun, May 1, 2016 at 10:27 PM Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
>> On Thu, Apr 28, 2016 at 11:48 AM Nate Finch <nate.fi...@canonical.com>
>> wrote:
>>
>>> Maybe we're not as far apart as I thought at first.
>>>
>>
>>
>>> My thought was that they'd live under github.com/juju/juju/devrules (or
>>> some other name) and therefore only get run during a full test run or if
>>> you run them there specifically.  What is a full test run if not a test of
>>> all our code?  These tests just happen to test all the code at once, rather
>>> than piece by piece.  Combining with the other thread, if we also marked
>>> them as skipped under -short, you could easily still run go test ./...
>>> -short from the root of the juju repo and not incur the extra 16.5 seconds
>>> (gocheck has a nice feature where if you call c.Skip() in the SetUpSuite,
>>> it skips all the tests in the suite, which is particularly appropriate to
>>> these tests, since it's the SetUpSuite that takes all the time).
>>>
>>
>> I'm not opposed to using the Go testing framework in this instance,
>> because it makes most sense to write the analysis code in Go. That may not
>> always be the case, though, and I don't want to have a rule of "everything
>> as Go tests" that means we end up shoe-horning things. This is just
>> academic until we need something that doesn't live in the Go ecosystem,
>> though.
>>
>> Most importantly, I don't want to lose the ability to distinguish the
>> types of tests. As an example: where we run static analysis should never
>> matter, so we can cut a merge job short by performing all of the static
>> analysis checks up front. That doesn't matter much if we only gate merges
>> on running the tests on one Ubuntu series/arch; but what if we want to
>> start gating on Windows, CentOS, or additional architectures? It would not
>> make sense to run them all in parallel if they're all going to fail on the
>> static analysis tests. And then if we've run them up front, it would be
>> ideal to not have to run them on the individual test machines.
>>
>> So I think it would be OK to have a separate "devrules" package, or
>> whatever we want to call it. I would still like these tests to be run by
>> verify.sh, so we have one place to go to check that the source code is
>> healthy, without also running the unit tests or feature tests. If we have a
>> separate package like this, test tags are not really necessary in the short
>> term -- the distinction is made by separating the tests into their own
>> package. We could still mark them as short/long, but that's orthogonal to
>> separation-by-purpose.
>>
>> Cheers,
>> Andrew
>>
>> Mostly, I just didn't want them to live off in a separate repo or run
>>> with a separate tool.
>>>
>>> On Wed, Apr 27, 2016 at 11:39 PM Andrew Wilkins <
>>> andrew.wilk...@canonical.com> wrote:
>>>
>>>> On Thu, Apr 28, 2016 at 11:14 AM Nate Finch <nate.fi...@canonical.com>
>>>> wrote:
>>>>
>>>>> From the other thread:
>>>>>
>>>>> I wrote a test that parses the entire codebase under
>>>>> github.com/juju/juju to look for places where we're creating a new
>>>>> value of crypto/tls.Config instead of using the new helper function that I
>>>>> wrote that creates one with more secure defaults.  It takes 16.5 seconds 
>>>>> to
>>>>> run on my machine.  There's not really any getting around the fact that
>>>>> parsing the whole tree takes a long time.
