Juju supported 1.22.6 is proposed
# juju-core 1.22.6

A new proposed supported release of Juju, juju-core 1.22.6, is now available. This release may replace version 1.22.5 on Tuesday June 23. Juju 1.22.x is an older supported version of Juju. The current Juju version is 1.24.0.

## Getting Juju

juju-core 1.22.6 is available for vivid and backported to earlier series in the following PPA:

https://launchpad.net/~juju/+archive/proposed

Windows and OS X users will find installers at:

https://launchpad.net/juju-core/+milestone/1.22.6

Proposed releases use the proposed simple-streams. You must configure the `agent-stream` option in your environments.yaml to use the matching juju agents.

## Notable Changes

This release addresses stability and performance issues.

## Resolved issues

* State: availability zone upgrade fails if containers are present (Lp 1441478)
* LXC provisioning fails on Joyent (Lp 1461150)
* worker/diskmanager sometimes goes into a restart loop due to failing to update state (Lp 1461871)
* Package github.com/juju/txn has conflicting licences (Lp 1463455)

## Finally

We encourage everyone to subscribe to the mailing list at juju-...@lists.canonical.com, or join us on #juju-dev on freenode.

--
Curtis Hovey
Canonical Cloud Development and Operations
http://launchpad.net/~sinzui

--
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
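As an illustration of the `agent-stream` note above, a minimal environments.yaml stanza opting into the proposed agents might look like the following sketch (the environment name and provider type are placeholders; only the `agent-stream` line is what the announcement requires):

```yaml
environments:
  my-env:                    # placeholder environment name
    type: ec2                # placeholder provider type
    agent-stream: proposed   # fetch agents from the proposed simple-streams
```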
Re: link to review request in launchpad
Seriously speaking -- and let me preface this by saying this is probably very low on the priority list -- how hard would it be to take advantage of Launchpad's new Git support to import branches from GitHub so that we could tag bugs to branches/commits? Having never used any of these features, I don't have a good feel for the level of effort. It sure would be nice...

On Wed, Jun 17, 2015 at 3:34 PM, Tim Penhey tim.pen...@canonical.com wrote:

> On 18/06/15 03:11, Eric Snow wrote:
>
>> All,
>>
>> After posting a review request (i.e. a PR), please be sure to add a link to the review request/PR (as a comment) to the lp issue the patch addresses. Otherwise it's a pain trying to track down which commits actually relate to the issue. Similarly, be sure to include a link to the related issue in the PR description.
>
> Ya know... if we were using branches on launchpad, this would be done automagically. *nudge* *nudge*
>
> Tim
Re: link to review request in launchpad
On Wed, Jun 17, 2015 at 11:12 PM Eric Snow eric.s...@canonical.com wrote:

> All,
>
> After posting a review request (i.e. a PR), please be sure to add a link to the review request/PR (as a comment) to the lp issue the patch addresses. Otherwise it's a pain trying to track down which commits actually relate to the issue. Similarly, be sure to include a link to the related issue in the PR description.

On a vaguely related note, do you know what happened to the GitHub/PR links on reviewboard? Recently they're showing up as unrendered Markdown. It's handy having clickable links when going to land.

> -eric
Re: I'm concerned
OK, found it. And it has nothing to do with leases.

I'm just proposing the fix now, but it has taken me most of the day to diagnose and fix.

The certupdater worker was making the mistake of trusting a watcher. It was blindly getting the addresses and updating the certificate. The cases where the agent failed to stop were those where, for some reason, the address watcher fired twice after the apiserver worker had shut down, but before the certupdater worker was signalled to die (or before it noticed). The certupdater worker communicates with the apiserver worker through a buffered channel (with a one-item buffer). It was the second notification that triggered the blocking channel send.

I added a memory to the cert updater, so it doesn't blindly update the cert, but only when the addresses do in fact change. I had a failure rate of between 20 and 40% before this change, and it appears to be fixed now.

Tim

On 17/06/15 22:01, William Reade wrote:

> ...but I think that axw actually addressed that already. Not sure then; don't really have the bandwidth to investigate deeply right now. Sorry for the noise.
>
> On Wed, Jun 17, 2015 at 10:52 AM, William Reade william.re...@canonical.com wrote:
>
>> I think the problem is in the implicit apiserver-leasemgr-state dependencies; if the lease manager is stopped at the wrong moment, the apiserver will never shut down because it's waiting on a blocked leasemgr call. I'll propose something today.
>>
>> On Wed, Jun 17, 2015 at 7:33 AM, David Cheney david.che...@canonical.com wrote:
>>
>>> This should be achievable. go test sends SIGQUIT on timeout; we can set up a SIGQUIT handler in the topmost suite (or import it as a side-effect package), do whatever cleanup is needed, then os.Exit, unhandle the signal and try to send SIGQUIT to ourselves, or just panic.
>>>
>>> On Wed, Jun 17, 2015 at 3:25 PM, Tim Penhey tim.pen...@canonical.com wrote:
>>>
>>>> Hey team,
>>>>
>>>> I am getting more and more concerned about the length of time that master has been cursed. It seems that sometime recently we have introduced serious instability in cmd/jujud/agent, and it is often getting wedged and killed by the test timeout.
>>>>
>>>> I have spent some time looking, but I have not yet found a definitive cause. At least some of the time the agent is failing to stop and is deadlocked. This is an intermittent failure, but intermittent enough that often at least one of the unit test runs fails with this problem, cursing the entire run.
>>>>
>>>> One thing I have considered to aid in the debugging is to add some code to the juju base suites somewhere (or in testing) that adds a goroutine that will dump the gocheck log just before the test gets killed due to timeout - perhaps a minute before. Not sure if we have access to the timeout or not, but we can at least make a sensible guess. This would give us at least some logging to work through in these situations where the test is getting killed due to running too long.
>>>>
>>>> If no one looks at this and fixes it overnight, I'll start poking it with a long stick tomorrow.
>>>>
>>>> Cheers,
>>>> Tim
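The fix Tim describes above can be sketched as follows. This is an illustrative Go sketch, not Juju's actual code: the type and function names are made up, but the shape matches the description, a worker with a "memory" of the last addresses it acted on, so a spurious watcher event does not trigger a second send into the one-item buffered channel after the receiver has shut down.

```go
package main

import "fmt"

// certUpdater is a hypothetical stand-in for the worker described above.
type certUpdater struct {
	last    []string      // the "memory": addresses we last acted on
	updates chan []string // one-item buffered channel to the apiserver worker
}

func addressesEqual(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

// handleNotification is called on every watcher event. It only pushes an
// update when the addresses actually changed; without this check, a second
// identical event could block forever on the full one-item buffer once the
// apiserver worker had shut down.
func (c *certUpdater) handleNotification(addrs []string) bool {
	if addressesEqual(addrs, c.last) {
		return false // spurious event: nothing changed, nothing sent
	}
	c.last = append([]string(nil), addrs...)
	c.updates <- addrs // the receiver normally drains this buffer
	return true
}

func main() {
	u := &certUpdater{updates: make(chan []string, 1)}
	fmt.Println(u.handleNotification([]string{"10.0.0.1"})) // true: addresses changed, update sent
	fmt.Println(u.handleNotification([]string{"10.0.0.1"})) // false: duplicate event ignored, no blocking send
}
```

With no receiver attached, the first notification fills the buffer and the duplicate is dropped before any send is attempted, which is exactly why the deadlock disappears.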
Re: I'm concerned
I think the problem is in the implicit apiserver-leasemgr-state dependencies; if the lease manager is stopped at the wrong moment, the apiserver will never shut down because it's waiting on a blocked leasemgr call. I'll propose something today.

On Wed, Jun 17, 2015 at 7:33 AM, David Cheney david.che...@canonical.com wrote:

> This should be achievable. go test sends SIGQUIT on timeout; we can set up a SIGQUIT handler in the topmost suite (or import it as a side-effect package), do whatever cleanup is needed, then os.Exit, unhandle the signal and try to send SIGQUIT to ourselves, or just panic.
>
> On Wed, Jun 17, 2015 at 3:25 PM, Tim Penhey tim.pen...@canonical.com wrote:
>
>> Hey team,
>>
>> I am getting more and more concerned about the length of time that master has been cursed. It seems that sometime recently we have introduced serious instability in cmd/jujud/agent, and it is often getting wedged and killed by the test timeout.
>>
>> I have spent some time looking, but I have not yet found a definitive cause. At least some of the time the agent is failing to stop and is deadlocked. This is an intermittent failure, but intermittent enough that often at least one of the unit test runs fails with this problem, cursing the entire run.
>>
>> One thing I have considered to aid in the debugging is to add some code to the juju base suites somewhere (or in testing) that adds a goroutine that will dump the gocheck log just before the test gets killed due to timeout - perhaps a minute before. Not sure if we have access to the timeout or not, but we can at least make a sensible guess. This would give us at least some logging to work through in these situations where the test is getting killed due to running too long.
>>
>> If no one looks at this and fixes it overnight, I'll start poking it with a long stick tomorrow.
>>
>> Cheers,
>> Tim
Re: I'm concerned
...but I think that axw actually addressed that already. Not sure then; don't really have the bandwidth to investigate deeply right now. Sorry for the noise.

On Wed, Jun 17, 2015 at 10:52 AM, William Reade william.re...@canonical.com wrote:

> I think the problem is in the implicit apiserver-leasemgr-state dependencies; if the lease manager is stopped at the wrong moment, the apiserver will never shut down because it's waiting on a blocked leasemgr call. I'll propose something today.
>
> On Wed, Jun 17, 2015 at 7:33 AM, David Cheney david.che...@canonical.com wrote:
>
>> This should be achievable. go test sends SIGQUIT on timeout; we can set up a SIGQUIT handler in the topmost suite (or import it as a side-effect package), do whatever cleanup is needed, then os.Exit, unhandle the signal and try to send SIGQUIT to ourselves, or just panic.
>>
>> On Wed, Jun 17, 2015 at 3:25 PM, Tim Penhey tim.pen...@canonical.com wrote:
>>
>>> Hey team,
>>>
>>> I am getting more and more concerned about the length of time that master has been cursed. It seems that sometime recently we have introduced serious instability in cmd/jujud/agent, and it is often getting wedged and killed by the test timeout.
>>>
>>> I have spent some time looking, but I have not yet found a definitive cause. At least some of the time the agent is failing to stop and is deadlocked. This is an intermittent failure, but intermittent enough that often at least one of the unit test runs fails with this problem, cursing the entire run.
>>>
>>> One thing I have considered to aid in the debugging is to add some code to the juju base suites somewhere (or in testing) that adds a goroutine that will dump the gocheck log just before the test gets killed due to timeout - perhaps a minute before. Not sure if we have access to the timeout or not, but we can at least make a sensible guess. This would give us at least some logging to work through in these situations where the test is getting killed due to running too long.
>>>
>>> If no one looks at this and fixes it overnight, I'll start poking it with a long stick tomorrow.
>>>
>>> Cheers,
>>> Tim
link to review request in launchpad
All,

After posting a review request (i.e. a PR), please be sure to add a link to the review request/PR (as a comment) to the lp issue the patch addresses. Otherwise it's a pain trying to track down which commits actually relate to the issue. Similarly, be sure to include a link to the related issue in the PR description.

-eric
Juju stable 1.24.0 is released
# juju-core 1.24.0

A new stable release of Juju, juju-core 1.24.0, is now available. This release replaces version 1.23.3.

## Getting Juju

juju-core 1.24.0 is available for vivid and backported to earlier series in the following PPA:

https://launchpad.net/~juju/+archive/stable

Windows and OS X users will find installers at:

https://launchpad.net/juju-core/+milestone/1.24.0

## Notable Changes

* VMWare (vSphere) Provider
* Resource Tagging (EC2, OpenStack)
* MAAS root-disk Constraint
* Service Status
* CentOS 7 Preview
* Storage (experimental)

### VMWare (vSphere) Provider

Juju now supports VMWare's vSphere (Software-Defined Data Center) installations as a targetable cloud. It uses the vSphere API to interact with the vCenter server. The vSphere provider uses the OVA images provided by Ubuntu's official repository.

API authentication credentials, as well as other config options, must be added to your environments.yaml file before running 'juju bootstrap'. The different options are described below. The basic config options in your environments.yaml will look like this:

    my-vsphere:
        type: vsphere
        host: <192.168.1.10>
        user: <some-user>
        password: <some-password>
        datacenter: <datacenter-name>
        external-network: <external-network-name>

The values in angle brackets need to be replaced with your vSphere information. 'host' must contain the IP address or DNS name of the vSphere API endpoint. 'user' and 'password' must contain your vSphere user credentials. 'datacenter' must contain the name of your vSphere virtual datacenter. 'external-network' is an optional field. If set, it must contain the name of the network that will be used to obtain public IP addresses for each virtual machine provisioned by Juju. An IP pool must be configured in this network and all available public IP addresses must be added to this pool.
For more information on IP pools, see the official documentation:
https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.vm_admin.doc%2FGUID-5B3AF10D-8E4A-403C-B6D1-91D9171A3371.html

NOTE that using the vSphere provider requires an existing vSphere installation. Juju does not set up vSphere for you. The OVA images we use support VMWare's Hardware Version 8 (or newer). This should not be a problem for most vSphere installations.

### Resource Tagging (EC2, OpenStack)

Juju now tags instances and volumes created by the EC2 and OpenStack providers with the Juju environment UUID. Juju also adds any user-specified tags set via the resource-tags environment setting. The format of this setting is a space-separated list of key=value pairs:

    resource-tags: key1=value1 [key2=value2 ...]

These tags may be used, for example, to set up chargeback accounting. Any tags that Juju manages will be prefixed with "juju-"; users must avoid modifying these.

Instances and volumes are now named consistently across EC2 and OpenStack, using the scheme juju-<env>-<resource-type>-<resource-ID>, where <env> is the human-readable name of the environment as specified in environments.yaml; <resource-type> is the type of the resource ("machine" or "volume") and <resource-ID> is the numeric ID of the Juju machine or volume corresponding to the IaaS resource.

### MAAS root-disk Constraint

The MAAS provider now honours the root-disk constraint, if the targeted MAAS supports disk constraints. Support for disk constraints was added to MAAS 1.8.

### Service Status

Juju provides new hooks for charm authors to report service status, and 'juju status' now includes the service status. This new functionality allows charms to explicitly inform Juju of their status, rather than Juju guessing. Charm authors have access to 2 new hook tools, and the status report includes more information.

The 'status-set' hook tool allows a charm to report its status to Juju.
This is known as the workload status and is meant to reflect the state of the software deployed by the charm. Charm authors are responsible for setting the workload's status to Active when the charm is ready to run its workload, and Blocked when it needs user intervention to resolve a problem.

status-set:
    status-set <maintenance | blocked | waiting | active> <message>

The 'status-get' hook tool allows a charm to query the current workload status recorded in Juju. Without arguments, it just prints the workload status value, e.g. "maintenance". With '--include-data' specified, it prints YAML which contains the status value plus any data associated with the status.

status-get:
    status-get [--include-data]

Charms that do not make use of these hook tools will still work as before, but Juju will not provide details about the workload status.

The above commands set the status of the individual units. Unit leaders may also set and get the status of the service to which they belong:

print the status of all units of the service and the service itself:

    status-get --service

set the status of
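As an illustration of the status hook tools described in the notes above, a charm hook might use them roughly like this. This is a hypothetical sketch: the hook logic, the `service_is_configured` helper, and the messages are made up; only `status-set` and `status-get` come from the release notes, and the fragment only runs inside a Juju hook environment.

```sh
#!/bin/sh
# Hypothetical fragment of a charm hook (e.g. config-changed).
if service_is_configured; then   # made-up helper for illustration
    status-set active "ready to serve"
else
    status-set blocked "set the required config option first"
fi
# Query what Juju has recorded, including any attached status data.
status-get --include-data
```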