Juju 2.0-beta18 is here!
A new development release of Juju, 2.0-beta18, is here!

## What's new?

* The `juju model-defaults` commands have been collapsed into one command
* The `juju model-config` commands have been collapsed into one command
* `juju list-controllers` displays model count, machine count, and HA status
* `juju show-controllers` contains more detailed information about controller machines (instance ID, HA status)
* `juju list-models` displays machine count and core count
* `juju login` now supports external users. If you have identity-url configured, you must now explicitly specify a user name on the command line to log in as a local user
* When a login expires for local users, you are now prompted automatically, rather than getting an error telling you to run `juju login`
* Macaroons for local users are now stored in the cookie jar, as with external users. There is a known issue with logout (#1621375) which will be addressed in beta19.

## How do I get it?

If you are running Ubuntu, you can get it from the Juju devel PPA:

    sudo add-apt-repository ppa:juju/devel
    sudo apt-get update; sudo apt-get install juju-2.0

Or install it from the snap store:

    snap install juju --beta --devmode

Windows, CentOS, and OS X users can get a corresponding installer at:

https://launchpad.net/juju/+milestone/2.0-beta18

## Feedback Appreciated!

We encourage everyone to subscribe to the mailing list at juju@lists.ubuntu.com and join us in #juju on freenode. We would love to hear your feedback and how you are using Juju.

## Anything else?

You can read more about what's in this release in the release notes here:

https://jujucharms.com/docs/devel/temp-release-notes

--
Curtis Hovey
Canonical Cloud Development and Operations
http://launchpad.net/~sinzui

--
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: Follow-up on unit testing layered charms
Hi Stuart,

> To reset, you just iterate over sys.modules and reset everything that is a
> MagicMock (or anything with a reset_mock() method). There is no need to
> figure out which to reset, since you want all of them reset every test to
> preserve test isolation. I haven't actually tried this bit yet.

Unfortunately, reset_mock doesn't reset as much as one would like it to. It resets the call attributes on the mock, but it won't reset side_effect and return_value. So if you rely on something like the options object in the base layer being replaced by a MagicMock in most of your tests, but you replace it with a fixture with some actual values in one of your tests, you run into trouble pretty quickly. :-(

~ PeteVG

On Fri, Sep 9, 2016 at 8:53 PM Stuart Bishop wrote:

> On 9 September 2016 at 01:03, Pete Vander Giessen
> <pete.vandergies...@canonical.com> wrote:
>
>> Hi All,
>>
>>> Stuart Bishop wrote:
>>> The tearDown method could reset the mock easily enough.
>>
>> If only it were that simple :-)
>>
>> To patch imports, the harness was actually providing a context that you
>> could use to wrap the imports at the top of your test module. That solved
>> the immediate issue of executing imports without errors, but it created a
>> very complex situation when you went to figure out which references to
>> clean up or update when you wanted to reset mocks. You also weren't able
>> to clean them up in tearDown, or even tearDownClass, because you had to
>> handle the situation where you had multiple test classes in a module.
>>
>> One workaround is to do your imports inside of the setUp for a test. That
>> didn't feel like the correct way to do things in a library meant for
>> general use, where I'd prefer to stick to things that don't make Guido
>> sad. I wouldn't necessarily object to the technique if it came up in a
>> code review for a specific charm, though :-)
>
> I'm thinking you insert a MagicMock into sys.modules instead of an import
> statement (this is how we do it in the telegraf charm, and I'm sure
> helpers could make this nicer):
>
>     # Mock layer modules
>     import charms
>     promreg = MagicMock()
>     charms.promreg = promreg
>     sys.modules['charms.promreg'] = promreg
>
> To reset, you just iterate over sys.modules and reset everything that is a
> MagicMock (or anything with a reset_mock() method). There is no need to
> figure out which to reset, since you want all of them reset every test to
> preserve test isolation. I haven't actually tried this bit yet.
>
> If you are using the standard Python unittest feature set, I think you
> would need a TestCase subclass to do the reset. If you are using py.test,
> I think it has features that can do this magically.
>
> A moduleTearDown would be required if you want to remove the mocks from
> sys.modules (or py.test magic). But I don't think we need to bother.
>
> --
> Stuart Bishop
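[Editor's note: Pete's point about reset_mock() is easy to demonstrate. The sketch below uses a made-up `options` mock purely for illustration; it shows that reset_mock() clears call records but leaves a configured return_value in place. Python 3.6 later added flags to reset_mock() for exactly this reason.]

```python
from unittest.mock import MagicMock

# A mock standing in for something like the base layer's options object
# (the name and return value here are hypothetical, for illustration only).
options = MagicMock()
options.get.return_value = 'some-configured-value'

options.get('install_keys')   # record a call
options.get.reset_mock()      # clears the call records...

assert not options.get.called                    # call attributes are gone
assert options.get() == 'some-configured-value'  # ...but return_value survives

# Python 3.6+ added flags to clear these too:
# options.get.reset_mock(return_value=True, side_effect=True)
```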
Re: Follow-up on unit testing layered charms
On 9 September 2016 at 01:03, Pete Vander Giessen <pete.vandergies...@canonical.com> wrote:

> Hi All,
>
>> Stuart Bishop wrote:
>> The tearDown method could reset the mock easily enough.
>
> If only it were that simple :-)
>
> To patch imports, the harness was actually providing a context that you
> could use to wrap the imports at the top of your test module. That solved
> the immediate issue of executing imports without errors, but it created a
> very complex situation when you went to figure out which references to
> clean up or update when you wanted to reset mocks. You also weren't able
> to clean them up in tearDown, or even tearDownClass, because you had to
> handle the situation where you had multiple test classes in a module.
>
> One workaround is to do your imports inside of the setUp for a test. That
> didn't feel like the correct way to do things in a library meant for
> general use, where I'd prefer to stick to things that don't make Guido
> sad. I wouldn't necessarily object to the technique if it came up in a
> code review for a specific charm, though :-)

I'm thinking you insert a MagicMock into sys.modules instead of an import statement (this is how we do it in the telegraf charm, and I'm sure helpers could make this nicer):

    # Mock layer modules
    import charms
    promreg = MagicMock()
    charms.promreg = promreg
    sys.modules['charms.promreg'] = promreg

To reset, you just iterate over sys.modules and reset everything that is a MagicMock (or anything with a reset_mock() method). There is no need to figure out which to reset, since you want all of them reset every test to preserve test isolation. I haven't actually tried this bit yet.

If you are using the standard Python unittest feature set, I think you would need a TestCase subclass to do the reset. If you are using py.test, I think it has features that can do this magically.

A moduleTearDown would be required if you want to remove the mocks from sys.modules (or py.test magic). But I don't think we need to bother.

--
Stuart Bishop
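[Editor's note: here is a runnable sketch of Stuart's sys.modules trick, including the reset loop he describes. The `charms.promreg` name comes from his telegraf-charm snippet; the `register()` call is invented for illustration.]

```python
import sys
from unittest.mock import MagicMock

# Stand in for the layer modules before the code under test imports them.
charms = MagicMock()
promreg = MagicMock()
charms.promreg = promreg
sys.modules['charms'] = charms
sys.modules['charms.promreg'] = promreg

# Code under test can now run "import charms.promreg" without the real package:
import charms.promreg
charms.promreg.register('http://example.com')  # register() is hypothetical
assert promreg.register.called

# Between tests, reset every mock found in sys.modules to preserve isolation.
for mod in list(sys.modules.values()):
    if isinstance(mod, MagicMock):
        mod.reset_mock()
assert not promreg.register.called
```

In a real suite the reset loop would live in a TestCase.setUp (or a py.test autouse fixture), as Stuart suggests.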
Re: JUJU Charm Certification
Hi,

I have a question: can we set up the [MAAS + OpenStack base bundle] on virtual machines, or on a single server? If we can, please send me the detailed information.

We created our "cinder-storagedriver" charm with one of our storage arrays and integrated the charm with the OpenStack base bundle. After integration we turned that into our own bundle, to publish to the charm store. But before publishing, we want to test our bundle locally.

Thanks,
Siva.

On Wed, Sep 7, 2016 at 9:50 PM, SivaRamaPrasad Ravipati wrote:

> Thank you very much. We will try the use-case you proposed.
>
> Thanks,
> Siva.
>
> On Wed, Sep 7, 2016 at 9:36 PM, Marco Ceppi wrote:
>
>> Hi Siva,
>>
>> In Juju, and especially with Cinder plugins, you can deploy multiple
>> copies of the Juju charm and relate them. Each application deployed is
>> equivalent to the scope of a SAN cluster:
>>
>>     juju deploy cinder
>>     juju deploy your-charm san1
>>     juju deploy your-charm san2
>>
>>     juju add-relation cinder san1
>>     juju add-relation cinder san2
>>
>> Now you can configure each of the new applications, which are the same
>> charm deployed multiple times. This adds a unique backend per charm
>> copy, which seems to be your intended use case.
>>
>> Thanks,
>> Marco Ceppi
>>
>> On Wed, Sep 7, 2016 at 12:03 PM SivaRamaPrasad Ravipati wrote:
>>
>>> For example, we have different storage arrays of the same type with
>>> unique config parameter values (like SAN IP, SAN password, SAN user).
>>> Assume that our charm has been deployed with some configuration values
>>> and we added a relation to cinder. Our charm will modify cinder.conf
>>> with the storage array driver. Next time we want to redeploy our charm
>>> to append only the new configuration changes, but we don't want to
>>> destroy the already existing changes.
>>>
>>> To what extent can "juju set-config" and "juju upgrade-charm" be used
>>> here?
>>> Please give me a simple example if possible.
>>>
>>> For this scenario, which use-case is generally used? Please let me
>>> know in a detailed manner.
>>>
>>> Thanks,
>>>
>>> Siva.
>>>
>>> On Wed, Sep 7, 2016 at 4:54 PM, SivaRamaPrasad Ravipati
>>> <si...@vedams.com> wrote:

OK, thank you. I have one more question; knowing the answer to this question is very important for us.

We have developed a Juju charm for configuring cinder to use one of our storage arrays as the backend. How do we redeploy the charm to add more storage arrays to the cinder configuration without destroying/removing the currently deployed charm? (For example, we don't want to remove the currently configured storage arrays from the cinder configuration.)

Thanks,
Siva.

On Wed, Sep 7, 2016 at 3:37 PM, Adam Collard <adam.coll...@canonical.com> wrote:

> Hi Siva,
>
> On Wed, 7 Sep 2016 at 10:58 SivaRamaPrasad Ravipati wrote:
>
>> Hi,
>>
>> I have installed the OpenStack cloud using OpenStack Autopilot. I am
>> trying to deploy juju-gui in the internal Juju environment.
>>
>> I did the following.
>>
>> From the MAAS node:
>>
>>     $ export JUJU_HOME=~/.cloud-install/juju
>>
>> Connecting to the Landscape server to deploy our charm and add a
>> relation to the cinder charm:
>>
>>     $ juju ssh landscape-server/0 sudo
>>     'JUJU_HOME=/var/lib/landscape/juju-homes/`sudo ls -rt
>>     /var/lib/landscape/juju-homes/ | tail -1` sudo -u landscape -E bash'
>>
>> From the Landscape server:
>>
>>     landscape@juju-machine-0-lxc-1:~$ juju deploy cs:juju-gui-134
>>     Added charm "cs:trusty/juju-gui-134" to the environment.
>>
>>     ubuntu@juju-machine-0-lxc-1:~$ juju status
>>
>>     "4":
>>       agent-state: error
>>       agent-state-info: 'cannot run instances: cannot run instances:
>>         gomaasapi: got error back from server: 409 CONFLICT (No available
>>         node matches constraints: zone=region1)'
>>       instance-id: pending
>>       series: trusty
>>
>>     juju-gui:
>>       charm: cs:trusty/juju-gui-134
>>       exposed: false
>>       service-status:
>>         current: unknown
>>         message: Waiting for agent initialization to finish
>>         since: 07 Sep 2016 06:46:22Z
>>       units:
>>         juju-gui/1:
>>           workload-status:
>>             current: unknown
>>             message: Waiting for agent initialization to finish
>>             since: 07 Sep 2016 06:46:22Z
>>           agent-status:
>>             current: allocating
>>             since: 07 Sep 2016 06:46:22Z
>>           agent-state: pending
>>           machine: "4"
>>
>> JUJU Version
>>