Re: Timeout added to kill-controller

2016-09-26 Thread Mark Shuttleworth

"--with-fire" :)

Mark

On 27/09/16 00:54, Tim Penhey wrote:
> Hi all,
>
> NOTE: we do very much consider it a bug if the models don't die properly.
>
> I have just landed a fix for a kill-controller issue where the command
> would just sit there for a long time with no apparent progress.
>
> Now kill-controller has a default timeout of 5 minutes. If nothing has
> changed within the timeout period, the command switches to a direct
> destruction mode where it contacts the cloud provider on behalf of
> each remaining model and destroys it that way.
>
> The following examples all use LXD, with a single controller
> machine and ubuntu deployed in the default model.
>
> $ time juju kill-controller kill-test-fine -y
> Destroying controller "kill-test-fine"
> Waiting for resources to be reclaimed
> Waiting on 1 model, 1 machine, 1 application
> Waiting on 1 model, 1 machine, 1 application
> Waiting on 1 model, 1 machine
> Waiting on 1 model, 1 machine
> Waiting on 1 model
> All hosted models reclaimed, cleaning up controller machines
>
> real    0m27.443s
>
>
> Nothing much changes here; everything died nicely.
> You can specify a timeout with --timeout (or -t). Valid formats are
> durations like "2m" for two minutes or "30s" for thirty seconds.
> Zero also works:
>
> $ time juju kill-controller kill-test-no-delay -t 0 -y
> Destroying controller "kill-test-no-delay"
> Waiting for resources to be reclaimed
> Killing admin@local/default directly
>   done
> All hosted models destroyed, cleaning up controller machines
>
> real    0m2.492s
>
>
> I had to throw a wrench in the works to stop the provisioner from
> killing the machine (the wrench is a test facility we have). This
> allows me to show you a model that doesn't die like it should. I just
> specify a one-minute timeout. The polling interval was changed from
> two seconds to five seconds, and you will now see a countdown starting
> after 30 seconds of no change.
>
> $ juju kill-controller kill-test -t 1m -y
> Destroying controller "kill-test"
> Waiting for resources to be reclaimed
> Waiting on 1 model, 1 machine, 1 application
> Waiting on 1 model, 1 machine, 1 application
> Waiting on 1 model, 1 machine
> Waiting on 1 model, 1 machine
> Waiting on 1 model, 1 machine
> Waiting on 1 model, 1 machine
> Waiting on 1 model, 1 machine
> Waiting on 1 model, 1 machine
> Waiting on 1 model, 1 machine, will kill machines directly in 29s
> Waiting on 1 model, 1 machine, will kill machines directly in 24s
> Waiting on 1 model, 1 machine, will kill machines directly in 19s
> Waiting on 1 model, 1 machine, will kill machines directly in 14s
> Waiting on 1 model, 1 machine, will kill machines directly in 9s
> Waiting on 1 model, 1 machine, will kill machines directly in 4s
> Killing admin@local/default directly
>   done
> All hosted models destroyed, cleaning up controller machines
>
>
> Tim
>


-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju 2.0-rc1 is here!

2016-09-26 Thread Menno Smits
Hi Chance,

Sorry that you're experiencing this issue.

I've filed a ticket here: https://bugs.launchpad.net/juju/+bug/1627894.
Could you possibly add details of the commands you ran leading up to this
error? If possible, can you also please attach Juju's logs? The most
straightforward way to get all available logs for the Juju model is:

juju debug-log --replay | gzip > model.log.gz

This assumes the Juju model experiencing the problem is currently selected.
Use "juju switch" if necessary.
The Juju controller logs would also be helpful:

juju debug-log -m controller --replay | gzip > controller.log.gz

In the meantime, we're investigating on our end.

- Menno



On 23 September 2016 at 09:58, Chance Ellis  wrote:

> Fresh install of juju 2.0 rc1 on a maas 2.0 cloud.
>
>
>
> I have been tracking the container network interface issues and hoping
> they would be fixed in rc1 for an OpenStack deployment. I tried deploying
> OpenStack on this new install. The machines spin up in MAAS but the
> containers fail. In the juju debug log I keep seeing these errors over and
> over:
>
>
>
> machine-3: 17:34:26 ERROR juju.worker exited "3-container-watcher": worker
> "3-container-watcher" exited: panic resulted in: runtime error: invalid
> memory address or nil pointer dereference
>
> machine-1: 17:34:27 ERROR juju.worker exited "1-container-watcher": worker
> "1-container-watcher" exited: panic resulted in: runtime error: invalid
> memory address or nil pointer dereference
>
> machine-0: 17:34:27 ERROR juju.worker exited "0-container-watcher": worker
> "0-container-watcher" exited: panic resulted in: runtime error: invalid
> memory address or nil pointer dereference
>
> machine-2: 17:34:29 ERROR juju.worker exited "2-container-watcher": worker
> "2-container-watcher" exited: panic resulted in: runtime error: invalid
> memory address or nil pointer dereference
>
>
>
> If I ssh into the juju machine and run "lxc list", there are no containers
> created.
>
>
>
> What info can I provide to help troubleshoot this?
>
>
>
>
>
>
>
> From: on behalf of Andrew Wilkins <andrew.wilk...@canonical.com>
> Date: Wednesday, September 21, 2016 at 2:07 AM
> To: Curtis Hovey-Canonical, Juju email list <juju@lists.ubuntu.com>, "juju-...@lists.ubuntu.com" <juju-...@lists.ubuntu.com>
> Subject: Re: Juju 2.0-rc1 is here!
>
>
>
> On Wed, Sep 21, 2016 at 1:56 PM Curtis Hovey-Canonical <cur...@canonical.com> wrote:
>
> A new development release of Juju, 2.0-rc1, is here!
>
>
>
> Woohoo!
>
>
>
>
> ## What's New in RC1
>
> * The Juju client now works on any Linux flavour. When bootstrapping
>   with local tools, it's now possible to create a controller of any
>   supported Linux series regardless of the Linux flavour the client
>   is running on.
> * The juju resolved command now retries failed hooks by default:
>   juju resolved  // marks unit errors resolved and retries failed hooks
>   juju resolved --no-retry  // marks unit errors resolved w/o retrying hooks
> * MAAS 2.0 Juju provider has been updated to use MAAS API 2.0's owner
>   data for instance tagging.
> * Networking fixes for containers in MAAS 2.0 when the parent device is
>   unconfigured. (#1566791)
> * Azure provider performance has been enhanced, utilising Azure Resource
>   Manager templates and improved parallelisation.
> * Azure provider now supports an "interactive" auth-type, making it much
>   easier to set up credentials for bootstrapping. The "userpass"
>   auth-type has been deprecated, and replaced with
>   "service-principal-secret".
>
>
>
> In case anyone jumps right on this, please note that
> https://streams.canonical.com/juju/public-clouds.syaml isn't yet updated.
> It will be updated soon, but in the meantime, if you want to try out the
> azure interactive add-credential, make sure you:
>
>  - delete ~/.local/share/juju/public-clouds.yaml (if it exists)
>
>  - *don't* run "juju update-clouds" until that file is updated
>
> Then Juju will use the cloud definitions built into the client.
>
>
>
> Cheers,
>
> Andrew
>
>
>
>
> ## How do I get it?
>
> If you are running Ubuntu, you can get it from the juju devel ppa:
>
> sudo add-apt-repository ppa:juju/devel
> sudo apt-get update; sudo apt-get install juju-2.0
>
> Or install it from the snap store
>
> snap install juju --beta --devmode
>
> Windows, CentOS, and OS X users can get a corresponding installer at:
>
> https://launchpad.net/juju/+milestone/2.0-rc1
>
>
> ## Feedback Appreciated!
>
> We encourage everyone to subscribe to the mailing list at
> juju@lists.ubuntu.com and join us in #juju on freenode. We would love
> to hear your feedback and how you are using Juju.
>
>
> ## Anything else?
>
> You can read more information about what's in this release by viewing
> the release notes here:
>
> https://jujucharms.com/docs/devel/temp-release-notes
>
>
> --
> Curtis Hovey
> Canonical Cloud Development and 

Timeout added to kill-controller

2016-09-26 Thread Tim Penhey

Hi all,

NOTE: we do very much consider it a bug if the models don't die properly.

I have just landed a fix for a kill-controller issue where the command
would just sit there for a long time with no apparent progress.


Now kill-controller has a default timeout of 5 minutes. If nothing has
changed within the timeout period, the command switches to a direct
destruction mode where it contacts the cloud provider on behalf of
each remaining model and destroys it that way.


The following examples all use LXD, with a single controller
machine and ubuntu deployed in the default model.


$ time juju kill-controller kill-test-fine -y
Destroying controller "kill-test-fine"
Waiting for resources to be reclaimed
Waiting on 1 model, 1 machine, 1 application
Waiting on 1 model, 1 machine, 1 application
Waiting on 1 model, 1 machine
Waiting on 1 model, 1 machine
Waiting on 1 model
All hosted models reclaimed, cleaning up controller machines

real    0m27.443s


Nothing much changes here; everything died nicely.
You can specify a timeout with --timeout (or -t). Valid formats are
durations like "2m" for two minutes or "30s" for thirty seconds.

Zero also works:

$ time juju kill-controller kill-test-no-delay -t 0 -y
Destroying controller "kill-test-no-delay"
Waiting for resources to be reclaimed
Killing admin@local/default directly
  done
All hosted models destroyed, cleaning up controller machines

real    0m2.492s


I had to throw a wrench in the works to stop the provisioner from
killing the machine (the wrench is a test facility we have). This
allows me to show you a model that doesn't die like it should. I just
specify a one-minute timeout. The polling interval was changed from two
seconds to five seconds, and you will now see a countdown starting
after 30 seconds of no change.


$ juju kill-controller kill-test -t 1m -y
Destroying controller "kill-test"
Waiting for resources to be reclaimed
Waiting on 1 model, 1 machine, 1 application
Waiting on 1 model, 1 machine, 1 application
Waiting on 1 model, 1 machine
Waiting on 1 model, 1 machine
Waiting on 1 model, 1 machine
Waiting on 1 model, 1 machine
Waiting on 1 model, 1 machine
Waiting on 1 model, 1 machine
Waiting on 1 model, 1 machine, will kill machines directly in 29s
Waiting on 1 model, 1 machine, will kill machines directly in 24s
Waiting on 1 model, 1 machine, will kill machines directly in 19s
Waiting on 1 model, 1 machine, will kill machines directly in 14s
Waiting on 1 model, 1 machine, will kill machines directly in 9s
Waiting on 1 model, 1 machine, will kill machines directly in 4s
Killing admin@local/default directly
  done
All hosted models destroyed, cleaning up controller machines


Tim

--
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Currently experiencing outages with jujucharms.com

2016-09-26 Thread Brad Crittenden
The outage has been resolved and the jujucharms.com site and juju deploys are 
working again. 

Best,

Brad Crittenden


-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Proposal: DNS for Juju Charms

2016-09-26 Thread Simon Davy
On Sun, Sep 25, 2016 at 2:25 AM, Casey Marshall <casey.marsh...@canonical.com> wrote:

> Awesome idea! Probably more of a wishlist thing at this point.. but can we
> also add SSHFP records for all the units?
>

Great idea!


> -Casey
>
> On Sat, Sep 24, 2016 at 11:47 AM, Marco Ceppi wrote:
>
>> Hey everyone,
>>
>> I'm currently working on a charm for NS1, which is a DNS service
>> accessible via API. There are quite a few of these types of services
>> available and I'd like to develop a best practice about how the Juju model
>> is presented as DNS. My hope is this would eventually be something that
>> Juju includes in its model, but for now charms seem to be a clean way to
>> present this.
>>
>> My proposal for how public DNS would be configured for mapping a juju
>> deployment to resolvable DNS records is as follows:
>>
>> Given a root domain, example.tld, which represents the root of a model, the
>> following bundle would be represented as follows:
>>
>> haproxy/0    104.196.197.94
>> mariadb/0    104.196.50.123
>> redis/0      104.196.105.166
>> silph-web/0  104.196.42.224
>> silph-web/1  104.196.117.185
>> silph-web/2  104.196.117.134
>>
>> I'd expect the following DNS values:
>>
>> haproxy.example.tld - 104.196.197.94
>> 0.haproxy.example.tld - 104.196.197.94
>> mariadb.example.tld - 104.196.50.123
>> 0.mariadb.example.tld - 104.196.50.123
>> redis.example.tld - 104.196.105.166
>> 0.redis.example.tld - 104.196.105.166
>> silph-web.example.tld - 104.196.42.224, 104.196.117.185, 104.196.117.134
>> 0.silph-web.example.tld - 104.196.42.224
>> 1.silph-web.example.tld - 104.196.117.185
>> 2.silph-web.example.tld - 104.196.117.134
>>
>>
+1 to the scheme, and +100 to the idea of the controller being a DNS
resolver for units.
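
Just to check I'm reading the mapping the same way, here is a throwaway
Python sketch of it (dns_records is a name I just made up, and the data is
simply the bundle quoted above):

from collections import defaultdict

def dns_records(unit_addresses, zone="example.tld"):
    # Per-unit record: <unit-number>.<application>.<zone>
    # Per-application record: <application>.<zone>, one entry per unit
    # address, which gives a simple round robin.
    records = []
    by_app = defaultdict(list)
    for unit, address in sorted(unit_addresses.items()):
        application, number = unit.split("/")
        records.append(("%s.%s.%s" % (number, application, zone), address))
        by_app[application].append(address)
    for application, addresses in sorted(by_app.items()):
        for address in addresses:
            records.append(("%s.%s" % (application, zone), address))
    return records

units = {
    "haproxy/0": "104.196.197.94",
    "mariadb/0": "104.196.50.123",
    "redis/0": "104.196.105.166",
    "silph-web/0": "104.196.42.224",
    "silph-web/1": "104.196.117.185",
    "silph-web/2": "104.196.117.134",
}

for name, address in dns_records(units):
    print(name, "A", address)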

I have exactly the same scheme in my local juju dns tool (which uses a
dnsmasq zone file), minus the RR entries. My root domain is just .juju.

Having a charm I can add to just give me that in dev would be awesome, even
more awesome if I have the option to use the same charm in CI/prod too. It
would make configuration, scripting and debugging much easier, as well as
provide basic cross-protocol load balancing OOTB (well, charms would need
updating to use the DNS, I guess).

A few questions:

1) Any thoughts on TTL for units? I guess it's possibly not too much of a
problem, as charms will have explicit info as to other units, but there may
be some fun corner cases.

2) Could we add a charmhelpers function that could turn a unit name string
into its canonical DNS name, like we already do for URL/path-safe unit
names, IIRC? (A rough sketch of what I mean follows after these questions.)
Even better, down the line, if Juju could provide these names in the hook
environment, that would be sweet.

3) In the client interface layer for the charm, do you think it might be
possible to enable short-term dns caching for the unit? We just found yet
another performance issue where this would have really helped. I realise
this might be the wrong place to do it, but I thought I'd check.
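
On 2), even something this small would do as a starting point (a rough
sketch only; unit_to_dns is a name I made up, not an existing charmhelpers
function):

def unit_to_dns(unit_name, domain):
    # e.g. unit_to_dns("silph-web/1", "example.tld")
    #      -> "1.silph-web.example.tld"
    application, number = unit_name.split("/")
    return "%s.%s.%s" % (number, application, domain)

The application-level name is then just everything after the first dot.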


Thanks

--
Simon
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


LDAP no op charm

2016-09-26 Thread Tom Barber
Okay, so I discussed this with a few folks in Pasadena, but I think it's
worth documenting on the list to find out whether something already exists
in secret, or if there is any technical reason why I shouldn't write this.

Taking some inspiration from the Nagios External Master charm, it strikes
me as a good idea to have an LDAP interface and an LDAP no-op charm that
would allow charms to connect to external LDAP sources with minimal effort.

I have a long-term goal to charm up OpenLDAP or similar, but in the short
term it also strikes me that a lot of implementing companies will already
have an AD or OpenLDAP server running somewhere that they wouldn't want to
migrate, which is completely understandable. So an LDAP charm that just
tells other charms the useful information (URL, port, SSL, base DN, search
mask, etc.) would be a good way to let Saiku, GitLab, Hadoop, HTTPD and so
on hook up to corporate LDAP servers and provide proper user management.
Similarly, if I were building a scalable PaaS/SaaS setup, I would want to
centralise user management instead of having a bunch of disparate
applications each doing their own.
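
To make it concrete, the whole no-op charm would amount to little more than
this (a rough sketch only; the config keys and the "ldap" relation name are
assumptions on my part, not an agreed interface):

from charmhelpers.core import hookenv

def publish_ldap_details():
    # Called from the ldap relation hooks: republish the externally
    # managed LDAP details to every related charm.
    cfg = hookenv.config()
    for rid in hookenv.relation_ids("ldap"):
        hookenv.relation_set(
            relation_id=rid,
            url=cfg.get("url"),
            port=cfg.get("port"),
            use_ssl=cfg.get("ssl"),
            basedn=cfg.get("basedn"),
            search_mask=cfg.get("search-mask"),
        )

Consuming charms would then read those settings off the relation and point
their LDAP clients at the corporate server.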

Comments and suggestions please.

Tom
--

Director Meteorite.bi - Saiku Analytics Founder
Tel: +44(0)5603641316

(Thanks to the Saiku community we reached our Kickstart goal, but you can
always help by sponsoring the project)
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju