Re: Juju 2.2-rc2 has been released

2017-06-09 Thread Casey Marshall
On Fri, Jun 9, 2017 at 12:29 PM, Uros Jovanovic <
uros.jovano...@canonical.com> wrote:

> Quick instructions on how the new azure credentials flow works in Juju
> 2.2-RC2:
>
> # install az client using a snap
> $ sudo snap install azure-cli --classic --edge
>

I've pushed the snap to stable, so you can drop the --edge flag:

$ sudo snap install azure-cli --classic


> # login to azure
> $ az login
>
> # install latest 2.2-rc2 Juju
> $ sudo snap install juju --classic --candidate
>
> # verify version
> $ juju version
> 2.2-rc2-xenial-amd64
> # if it's not 2.2-rc2, PATH needs to be set so that snapped juju comes
> first
> $ export PATH=/snap/bin:$PATH
>
> $ juju add-credential azure
> # select some name for the credentials,
> # then select interactive as default choice,
> # and when asked for "subscription id (optional)" just press enter.
> # the process uses the default subscription
>
> # if you’d like to select between multiple subscriptions instead
> $ juju autoload-credentials
>
> $ juju credentials
> # shows new creds for azure
>
> Done.
>
> You can also install the Azure CLI client from
> https://github.com/Azure/azure-cli like this:
> $ curl -L https://aka.ms/InstallAzureCli | bash
>
>
> PS: the link to instructions on how to get packages for other platforms
> should be:
> https://jujucharms.com/docs/devel/reference-install#getting-development-releases
>
>
> On Fri, Jun 9, 2017 at 1:25 PM, Chris Lee  wrote:
>
>> # Juju 2.2-rc2 Release Notes
>>
>>
>>
>> We are delighted to announce the release of Juju and conjure-up 2.2-rc2!
>> In this release, Juju greatly reduces memory and storage consumption, adds
>> support for KVM containers, and improves network modelling. conjure-up now
>> supports Juju as a Service (JAAS), provides a macOS client, and adds
>> support for repeatable spell deployments.
>>
>>
>>
>> The best way to get your hands on this release of Juju and conjure-up is
>> to install them via snap packages (see https://snapcraft.io/ for more
>> info on snaps).
>>
>>
>>
>> snap install juju --classic --candidate
>>
>> snap install conjure-up --classic --candidate
>>
>>
>>
>> Other packages are available for a variety of platforms. Please see the
>> online documentation at
>> https://jujucharms.com/docs/devel/reference-releases#development
>>
>>
>>
>> Please note: if you are upgrading an existing controller, make sure there
>> is at least 6GB of free disk space. The upgrade step for the logs can take
>> a while, around 10 minutes or more if the current logs collection is at
>> its maximum size.
>>
>>
>>
>> Since 2.2-rc1
>>
>>
>> ## New and Improved
>>
>> --
>>
>>
>>
>> Better support for credential management in the Azure provider:
>>
>> * Support autoload-credentials and juju add-credential in the Azure
>> provider when the Azure CLI is installed.
>>
>> (This removes the requirement that the user discover their subscription
>> ID before creating credentials.)
>>
>>
>>
>> Rate limit login and connection requests to the controller(s) on busy
>> systems.
>>
>>
>>
>> ## Fixes
>>
>> --
>>
>>
>>
>> Fix issue where status history logs were not pruned:
>>
>>   https://bugs.launchpad.net/juju/+bug/1696491
>>
>>


Re: Opaque automatic hook retries from API

2017-01-05 Thread Casey Marshall
^^ s/immutability/idempotency

On Thu, Jan 5, 2017 at 12:39 PM, Casey Marshall <
casey.marsh...@canonical.com> wrote:

> On Thu, Jan 5, 2017 at 3:33 AM, Adam Collard <adam.coll...@canonical.com>
> wrote:
>
>> Hi,
>>
>> The automatic hook retries[0] that landed as part of 2.0 are documented
>> as running indefinitely[1] - this causes problems for an API user:
>>
>> Imagine you are driving Juju using the API, and when you perform an
>> operation (e.g. set the configuration of a service, or reboot the unit, or
>> add a relation..) - you want to show the status of that operation.
>>
>> Prior to the automatic retries, you simply perform your operation, and
>> watch the delta streams for the corresponding change to the unit - the
>> success or otherwise of the operation is reflected in the unit
>> agent-status/workload-status pair.
>>
>> Now, with retries, if you see a unit in the error state, you can't
>> accurately reflect the status of the operation, since the unit will
>> undoubtedly retry the hook again. Maybe it succeeds, maybe it fails again.
>> How can one say after receiving the first delta of a unit error if the
>> operation succeeded or failed?
>>
>> With no visibility up front on the retry strategy that Juju will perform
>> (e.g. something representing the exponential backoff and a fixed number of
>> retries before Juju admits defeat) it is impossible to say at any point in
>> the delta stream what the result of a failed-at-least-once operation is.
>>
>
> I think the retry strategy is great -- it leverages the immutability we
> expect hooks to provide, to deliver a robust result over unreliable
> substrates -- and all substrates are unreliable where there's
> internetworking involved!
>
> However I see your point about the retry strategy muddling status. I've
> noticed this sometimes when watching openstack or k8s bundles "shake out"
> the errors as they come up. I don't think this is always a charm quality
> issue, it's maybe because we're trying to show two different things with
> status?
>
>
> What if Juju made a clearer distinction between result-state ("what I'm
> doing most recently or last attempted to do") vs. goal-state ("what I'm
> trying to get done") in the status? Would that help?
>
>
>> Can retries be limited to a small number, with a backoff algorithm
>> explicitly documented and stuck to by Juju, with the retry attempt number
>> included in the delta stream?
>>
>> Thanks,
>>
>> Adam
>>
>> [0] https://jujucharms.com/docs/2.0/reference-release-notes
>> [1] https://jujucharms.com/docs/2.0/models-config#retrying-failed-hooks
>>


Re: Opaque automatic hook retries from API

2017-01-05 Thread Casey Marshall
On Thu, Jan 5, 2017 at 3:33 AM, Adam Collard 
wrote:

> Hi,
>
> The automatic hook retries[0] that landed as part of 2.0 are documented
> as running indefinitely[1] - this causes problems for an API user:
>
> Imagine you are driving Juju using the API, and when you perform an
> operation (e.g. set the configuration of a service, or reboot the unit, or
> add a relation..) - you want to show the status of that operation.
>
> Prior to the automatic retries, you simply perform your operation, and
> watch the delta streams for the corresponding change to the unit - the
> success or otherwise of the operation is reflected in the unit
> agent-status/workload-status pair.
>
> Now, with retries, if you see a unit in the error state, you can't
> accurately reflect the status of the operation, since the unit will
> undoubtedly retry the hook again. Maybe it succeeds, maybe it fails again.
> How can one say after receiving the first delta of a unit error if the
> operation succeeded or failed?
>
> With no visibility up front on the retry strategy that Juju will perform
> (e.g. something representing the exponential backoff and a fixed number of
> retries before Juju admits defeat) it is impossible to say at any point in
> the delta stream what the result of a failed-at-least-once operation is.
>

I think the retry strategy is great -- it leverages the immutability we
expect hooks to provide, to deliver a robust result over unreliable
substrates -- and all substrates are unreliable where there's
internetworking involved!

However I see your point about the retry strategy muddling status. I've
noticed this sometimes when watching openstack or k8s bundles "shake out"
the errors as they come up. I don't think this is always a charm quality
issue; maybe it's because we're trying to show two different things with
status?


What if Juju made a clearer distinction between result-state ("what I'm
doing most recently or last attempted to do") vs. goal-state ("what I'm
trying to get done") in the status? Would that help?


> Can retries be limited to a small number, with a backoff algorithm
> explicitly documented and stuck to by Juju, with the retry attempt number
> included in the delta stream?
>
> Thanks,
>
> Adam
>
> [0] https://jujucharms.com/docs/2.0/reference-release-notes
> [1] https://jujucharms.com/docs/2.0/models-config#retrying-failed-hooks
>


Re: A (Very) Minimal Charm

2016-12-01 Thread Casey Marshall
On Thu, Dec 1, 2016 at 6:53 AM, Marco Ceppi 
wrote:

> On Thu, Dec 1, 2016 at 5:00 AM Adam Collard 
> wrote:
>
>> On Thu, 1 Dec 2016 at 04:02 Nate Finch  wrote:
>>
>> On IRC, someone was lamenting the fact that the Ubuntu charm takes longer
>> to deploy now, because it has been updated to exercise more of Juju's
>> features.  My response was - just make a minimal charm, it's easy.  And
>> then of course, I had to figure out how minimal you can get.  Here it is:
>>
>> It's just a directory with a metadata.yaml in it with these contents:
>>
>> name: min
>> summary: nope
>> description: nope
>> series:
>>   - xenial
>>
>> (obviously you can set the series to whatever you want)
>> No other files or directories are needed.
>>
>>
>> This is neat, but doesn't detract from the bloat in the ubuntu charm.
>>
>
> I'm happy to work though changes to the Ubuntu charm to decrease "bloat".
>
>
>> IMHO the bloat in the ubuntu charm isn't from support for Juju features,
>> but the switch to reactive plus conflicts in layer-base wanting to a)
>> support lots of toolchains to allow layers above it to be slimmer and b) be
>> a suitable base for "just deploy me" ubuntu.
>>
>
> But it is to support the reactive framework, where we utilize newer Juju
> features, like status and application-version to make the charm rich
> despite its minimal goal set. Honestly, a handful of cached wheelhouses
> and some apt packages don't strike me as bloat, but I do want to make sure
> the Ubuntu charm works for those using it. So,
>

I think a minimal wheelhouse to provide a consistent charm hook runtime is
very reasonable and definitely not the problem here.

There are too many packages that get installed by default with the reactive
framework that most charms don't need. When I deploy the ubuntu charm (but
this applies to any charm built with reactive and layer:basic), I also get:

2016-12-01 17:45:47 INFO install The following NEW packages will be installed:
2016-12-01 17:45:47 INFO install   binutils build-essential cpp cpp-5 dpkg-dev fakeroot g++ g++-5 gcc gcc-5
2016-12-01 17:45:47 INFO install   libalgorithm-diff-perl libalgorithm-diff-xs-perl libalgorithm-merge-perl
2016-12-01 17:45:47 INFO install   libasan2 libatomic1 libc-dev-bin libc6-dev libcc1-0 libcilkrts5 libdpkg-perl
2016-12-01 17:45:47 INFO install   libexpat1-dev libfakeroot libfile-fcntllock-perl libgcc-5-dev libgomp1
2016-12-01 17:45:47 INFO install   libisl15 libitm1 liblsan0 libmpc3 libmpx0 libpython3-dev libpython3.5-dev
2016-12-01 17:45:47 INFO install   libquadmath0 libstdc++-5-dev libtsan0 libubsan0 linux-libc-dev make
2016-12-01 17:45:47 INFO install   manpages-dev python-pip-whl python3-dev python3-pip python3-setuptools
2016-12-01 17:45:47 INFO install   python3-wheel python3.5-dev

None of my charms need build-essential or a g++ compiler; that's a lot of
unnecessary dependencies! Can we get rid of most of these? Would installing
the bare minimum with --no-install-recommends help?
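
For illustration, something like this in the base layer's install step would
skip most of that toolchain (a sketch; on xenial it's python3-pip whose
Recommends pull in build-essential and python3-dev):

$ sudo apt-get install --yes --no-install-recommends \
    python3-pip python3-setuptools python3-wheel
# Recommends such as build-essential, g++ and the -dev headers are skipped;
# a wheelhouse of pure-Python wheels still installs without a compiler.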


> What's the real problem with the Ubuntu charm today?
> How does it not achieve its goal of providing a relatively blank Ubuntu
> machine? What are people using the Ubuntu charm for?
>
> Other than demos, hacks/workarounds, and testing, I'm not clear on what
> purpose an Ubuntu charm in a model serves.
>
> Marco
>


Re: Github Reviews vs Reviewboard

2016-10-14 Thread Casey Marshall
+1, as I work on many other Github projects besides Juju and it's familiar.
It's not perfect by any means but I can work with it.

I thought the ReviewBoard we had was pretty ugly and buggy, but it was
reasonably easy to use. Gerrit is cleaner and clearer to me -- though I
feel like Gerrit is also kind of rough on the uninitiated. Maybe if a newer
version of RB was sufficiently improved and it was charmed up well, its
operation would be more manageable, and it'd be OK?

-Casey

On Fri, Oct 14, 2016 at 12:34 PM, Andrew McDermott <
andrew.mcderm...@canonical.com> wrote:

>
> On 14 October 2016 at 16:26, Mick Gregg  wrote:
>
>> I would probably chose gerrit over either, but that's not the question
>> today.
>>
>
> Oooh, yes to gerrit. +2
>
>
>
> --
> Andrew McDermott 
> Juju Core Sapphire team 
>


Re: Reviews on Github

2016-09-14 Thread Casey Marshall
I'm halfway through my first Github review (different project though) on
the new system, and so far I'm loving it. Also consider the issues we've
had with rbt being unable to handle diffs with files
added/removed/relocated. +1 from me!

-Casey

On Wed, Sep 14, 2016 at 3:23 PM, Reed O'Brien 
wrote:

> Also +1 for a single source of truth.
>
> On Wed, Sep 14, 2016 at 1:20 PM, Rick Harding 
> wrote:
>
>> /me is always +1 on reducing the number of things we have to maintain and
>> keeping things simpler.
>>
>> On Wed, Sep 14, 2016 at 4:04 PM Nate Finch 
>> wrote:
>>
>>> In case you missed it, Github rolled out a new review process.  It
>>> basically works just like reviewboard does, where you start a review, batch
>>> up comments, then post the review as a whole, so you don't just write a
>>> bunch of disconnected comments (and get one email per review, not per
>>> comment).  The only features reviewboard has that Github lacks are the edge
>>> cases we rarely use: like using rbt to post a review from a random diff that is not
>>> connected directly to a github PR. I think that is easy enough to give up
>>> in order to get the benefit of not needing an entirely separate system to
>>> handle reviews.
>>>
>>> I made a little test review on one PR here, and the UX was almost
>>> exactly like working in reviewboard: https://github.com/juju/juju/pull/6234
>>>
>>> There may be important edge cases I'm missing, but I think it's worth
>>> looking into.
>>>
>>> -Nate
> --
> Reed O'Brien
> ✉ reed.obr...@canonical.com
> ✆ 415-562-6797
>
>


Re: Latest news about Juju master branch - upload-tools obsoleted

2016-09-08 Thread Casey Marshall
I discovered another trick that works: set the streams and URLs to invalid
values in your bootstrap config. This will force Juju to use an
already-compiled jujud in your $PATH. For example, bootstrap --config with:

image-metadata-url: http://localhost
image-stream: nope
agent-metadata-url: http://localhost
agent-stream: nope

I'd be happier if I could get this effect with a --use-agent=/path/to/jujud
option, but this works.
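
For example (a sketch; the file name is illustrative, and note the bootstrap
argument order has changed across the 2.0 cycle):

$ cat > streams-override.yaml <<EOF
image-metadata-url: http://localhost
image-stream: nope
agent-metadata-url: http://localhost
agent-stream: nope
EOF
$ juju bootstrap --config streams-override.yaml test lxd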

On Wed, Sep 7, 2016 at 9:28 PM, Nate Finch  wrote:

> Just a note, because it wasn't clear to me - there are a couple cases
> where the automatic upload tools won't do what you want, if you use a
> version of juju you built locally.
>
> If you're not a developer or someone who builds juju from source, it's
> safe to ignore this email.
>
> *1. If the version of juju you're building is available in streams and you
> *want* to upload, you have to use --build-agent, because by default we
> won't upload.  *
> This happens if you're purposely building an old release, or if QA just
> released a new build and you haven't pulled yet and/or master hasn't been
> updated with a new version number.  --build-agent works like the old
> upload-tools, except it always rebuilds jujud, even if there's an existing
> jujud binary.
>
> *2. If you're building a version of the code that is not available in
> streams, and you *don't* want to upload, you must use
> --agent-version=.*
> This can happen if you want to deploy a known-good server version, but
> only have a dev client around.  I use this to make sure I can reproduce
> bugs before testing my fixes.  --agent-version works basically like the old
> default (non-upload) behavior, except you have to explicitly specify a juju
> version that exists in streams to deploy (e.g. --agent-version=2.0-beta17)
>
>
> Note that if you want to be *sure* that juju bootstrap always does what
> you expect it to, IMO you should always use either --build-agent (to
> upload) or --agent-version (to not upload), depending on your intent.  The
> behavior of the bare juju bootstrap can change without warning (from
> uploading to not uploading) if a new version of jujud is pushed to streams
> that matches what you're building locally (which happens every time a new
> build is pushed to streams, until master is updated and you git pull and
> rebuild), and that can be really confusing if you are expecting your
> changes to be uploaded, and they're not.  It also changes behavior if you
> switch from master (which always uploads) to a release tag (which never
> uploads), which while predictable, can be easy to forget.
>
> Related note, beta18 (which is in current master) and later versions of
> the client can't bootstrap with --agent-version 2.0-beta17 or earlier, due
> to a breaking change (you'll get an error about mismatching UUIDs).  This
> type of breakage is rare, and generally should only happen during betas (or
> pre-beta), but it impacts us right now, so... yeah.
>
> -Nate
>
>
>


Re: kill-controller, unregister, destroy-controller

2016-09-02 Thread Casey Marshall
My main use case for killing controllers is development & testing. For
this, I have a script that force-deletes all the juju-* LXC containers, and
then unregisters all controllers with cloud: lxd. It's *much* faster than
waiting for each juju controller to tear itself down. It's also nothing I'd
provide casually to end users.
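
The script is roughly this (a sketch, not the verbatim script; it assumes the
lxc client and jq are installed, and that juju unregister accepts -y):

#!/bin/bash
# Force-delete every container Juju created, without asking Juju first.
for c in $(lxc list --format csv -c n | grep '^juju-'); do
    lxc delete --force "$c"
done
# Then drop the now-orphaned lxd controllers from the local client store.
for ctrl in $(juju controllers --format json |
        jq -r '.controllers | to_entries[] | select(.value.cloud == "lxd") | .key'); do
    juju unregister -y "$ctrl"
done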

I think it should be possible, but not trivial, to destroy controllers and
everything in them. It's not a bad thing to have to type a long command or
flag name to do something so destructive -- or write a script to do
precisely what I want. Feels like my use case for destroying controllers
isn't really as a normal Juju user -- I'm (ab)using Juju with a developer &
QA tester's mindset.

-Casey

On Fri, Sep 2, 2016 at 6:15 AM, roger peppe 
wrote:

> It seems to me that this kind of thing is exactly what "blocks" are
> designed for. An explicit unblock command seems better to me than either an
> explicit flag or an extra prompt, both of which are vulnerable to typing
> without thinking. Particularly if "throwaway" controllers created for
> testing purposes are not blocked by default, so you don't get used to to
> typing "unblock" all the time.
>
> On 1 Sep 2016 16:14, "Mark Ramm-Christensen (Canonical.com)" <
> mark.ramm-christen...@canonical.com> wrote:
>
> I get the desire to remove friction everywhere we can, but unless
> destroying controllers is a regular activity, I actually think SOME
> friction is valuable here.
>
> If controllers are constantly getting into a wedged state where they must
> be killed, that's likely a product of a fast moving set of betas, and
> should be addressed *directly* rather than as they say "applying lipstick
> to a pig"
>
> On Thu, Sep 1, 2016 at 4:04 PM, Marco Ceppi 
> wrote:
>
>> On Thu, Sep 1, 2016 at 9:59 AM Mark Ramm-Christensen (Canonical.com) <
>> mark.ramm-christen...@canonical.com> wrote:
>>
>>> I believe keeping the --destroy-all-models flag is helpful in keeping
>>> you from accidentally destroying a controller that is hosting important
>>> models for someone without thinking.
>>>
>>
>> What happens if I destroy-controller without that flag? Do I have to go
>> into my cloud portal to kill those instances? Is there any way to recover
>> from that to get juju reconnected? If not, it's just a slower death.
>>
>>
>>> On Thu, Sep 1, 2016 at 3:40 PM, Marco Ceppi 
>>> wrote:
>>>
 Hey everyone,

 I know we've had discussions about this over the past few months, but
 it seems we have three commands that overlap pretty aggressively.

 Using Juju beta16, and trying to 'destroy' a controller it looks like
 this now:

 ```
 root@ubuntu:~# juju help destroy-controller
 Usage: juju destroy-controller [options] 

 ...

 Details:
 All models (initial model plus all workload/hosted) associated with the
 controller will first need to be destroyed, either in advance, or by
 specifying `--destroy-all-models`.

 Examples:
 juju destroy-controller --destroy-all-models mycontroller

 See also:
 kill-controller
 unregister
 ```

 When would you ever want to destroy-controller and not
 destroy-all-models? I have to specify that flag every time, it seems it
 should just be the default behavior. Kill-controller seems to do what
 destroy-controller --destroy-all-models does but more aggressively?

 Finally, unregister and destroy-controller (without
 --destroy-all-models) do the same thing. Can we consider dropping the
 very long-winded, almost always required flag for destroy-controller?

 Finally, there used to be a pretty good amount of feedback during
 destroy-controller, while it was rolling text, I at least knew what was
 happening. Now it's virtually silent. Given it runs for quite a long time,
 can we get some form of feedback to the user back into the command?

 ```
 root@ubuntu:~# juju destroy-controller --destroy-all-models cabs
 WARNING! This command will destroy the "cabs" controller.
 This includes all machines, applications, data and other resources.

 Continue? (y/N):y
 ERROR failed to destroy controller "cabs"

 If the controller is unusable, then you may run

 juju kill-controller

 to forcibly destroy the controller. Upon doing so, review
 your cloud provider console for any resources that need
 to be cleaned up.

 ERROR cannot connect to API: unable to connect to API: websocket.Dial
 wss://10.0.0.4:17070/api: dial tcp 10.0.0.4:17070: getsockopt: no
 route to host
 root@ubuntu:~# juju kill-controller cabs
 WARNING! This command will destroy the "cabs" controller.
 This includes all machines, applications, data and other resources.

 Continue? (y/N):y
 Unable to open API: open connection timed 

Re: Faster LXD bootstraps and provisioning

2016-08-16 Thread Casey Marshall
I decided it'd be easier & safer to host squid-deb-proxy in a LXD container
rather than the host. My host doesn't route inbound to LXD from other
networks, and all the Juju machines can see it.
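
Roughly (a sketch; the container name and image are illustrative, and
squid-deb-proxy listens on port 8000 by default):

$ lxc launch ubuntu:16.04 squid-proxy
$ lxc exec squid-proxy -- apt-get install -y squid-deb-proxy
$ PROXY=$(lxc list squid-proxy --format csv -c 4 | cut -d' ' -f1)
# Machines Juju creates on the LXD bridge can all reach the proxy.
$ juju bootstrap --config apt-http-proxy=http://$PROXY:8000 test lxd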

On Tue, Aug 16, 2016 at 12:30 AM, John Meinel 
wrote:

> ...
>>
>
>
>> +### tuple ### allow any 8000 0.0.0.0/0 any 0.0.0.0/0 in
>> +-A ufw-user-input -p tcp --dport 8000 -j ACCEPT
>> +-A ufw-user-input -p udp --dport 8000 -j ACCEPT
>> +
>>
>>
> If I'm reading this one correctly, it also allows connections from *any* IP
> address (not restricted to your local network). So anyone that can get to
> port 8000 on your machine can proxy to any other public website. Now, I'd
> guess that you also run a NAT router so this may not actually be opening up
> an open proxy for the world to access, but it seems a little bit iffy to
> put into a general guide.
>
> John
> =:->
>
>


Re: Faster LXD bootstraps and provisioning

2016-08-15 Thread Casey Marshall
Menno,
This is great and thanks for sharing!

In case anyone else runs into this.. charms that install from PPAs will
fail with this squid-deb-proxy setup. You'll need to allow archive mirrors
for this to work. See
https://1337.tips/ubuntu-cache-packages-using-squid-deb-proxy/ for an
example.

On Mon, Aug 15, 2016 at 9:31 AM, Rafael Gonzalez <
rafael.gonza...@canonical.com> wrote:

> Hi Menno,
>
> Thanks for putting this together, great tips.  I recently ran into an
> issue which others could see as well.
>
> One may need to adjust the following for large bundle deployments on LXD.
> A bundle deployment fails with errors about "Too many files open."  This
> will increase the maximum number of open files:
>
> echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf &&
> sudo sysctl -p
>
>
> Regards,
>
> Rafael O. Gonzalez
> Canonical, Solutions Architect
> rgo...@canonical.com
> 1-646-481-7232
>
>
>
> On Sun, Aug 14, 2016 at 8:07 PM, Menno Smits 
> wrote:
>
>> I've put together a few tips on the wiki for speeding up bootstrap and
>> provisioning times when using the Juju lxd provider. I find these
>> techniques helpful when checking my work or investigating bugs - situations
>> where you end up bootstrapping and deploying many times.
>>
>> https://github.com/juju/juju/wiki/Faster-LXD
>>
>> If you have your own techniques, or improvements to what I'm doing,
>> please update the article.
>>
>> - Menno
>>
>>
>>
>>
>>
>>
>>


Re: So I wanted to update a dependency . .

2016-08-11 Thread Casey Marshall
On Thu, Aug 11, 2016 at 5:44 PM, Nicholas Skaggs <
nicholas.ska...@canonical.com> wrote:

> This is a simple story of a man and a simple mission. Eliminate the final
> 2 dependencies that are in bazaar and launchpad. It makes juju and its
> dependencies live completely in git. A notable goal, and one that I desired
> for getting snaps to build with launchpad.
>
> I don't feel I need to explain the pain that making a no-source change
> update to a dependency (point juju and friends to its new location) has
> been. I've had to carefully craft a set of PR's, and land them in a certain
> order. I've encountered contention (because I have to change hundreds of
> imports), unit test issues (because juju's dependencies aren't tested when
> merged, so they can be incompatible with the latest juju without knowing
> it), and circular dependencies that require some magic and wishful thinking
> to workaround.
>
> I'm still not finished landing the change, but I hope to do so *soon*. It
> must be close now!
>
> All of this to say, I think it would be useful to have a discussion on how
> we manage dependencies within the project. From a release perspective, it
> can be quite cumbersome as dependencies are picked up and dropped. It's
> also recently made a release (And critical bugfix) really difficult at
> times. From my newly experience contributor perspective, I would really
> think twice before attempting to make an update :-) I suspect I'm not alone.
>
> I've heard ideas in the past about cleaning this up, and some things like
> circular dependencies between romulus and juju are probably best described
> as tech debt. But there also is some pain in the larger scheme of things.
> For example, we are currently hacking a patch to juju's source for the mgo
> dependency since updating the source or vendoring or any other option is
> way too painful. It's time to really fix this. Ideas?
>

My team's been chipping away at romulus and it'll be sorted out soon
enough. We've already moved the terms API client out to
github.com/juju/terms-client, and we'll be doing something similar for the
other APIs. As for the commands.. these probably need to find a better home
closer to the command base types they extend, in cmd/juju/...

One thing that occurred to me today though is most of our dependencies also
have tests (well, they should!). We don't often run *those* tests as part
of Juju CI, but you could run into some cases where some dependencies share
common dependencies, but are tested with different common dependency
versions than those specified by Juju's dependencies.tsv.




Automatic hook retries in Juju 2.0

2016-06-30 Thread Casey Marshall
What is the intended behavior for automatic hook retries in Juju 2.0?

Specifically, I'd like to know, as a Juju user:

Are errors in hooks all retried with the same policy, or are some retried
with a different policy / strategy than others (install, for example)?

Is there a limit to the number of times Juju will retry a hook error before
"giving up"?

What kind of delay can I expect between retries?

Thanks,
Casey


Re: Awful dependency problem caused by romulus

2016-05-19 Thread Casey Marshall
Matty, this sounds like a great idea.

Dave, I understand and thanks for clarifying. Please give us some time to
coordinate the package relocation in our next iteration (begins next week).

Thanks,
Casey


On Thu, May 19, 2016 at 2:57 AM, David Cheney 
wrote:

> Thanks Matty!
>
> On Thu, May 19, 2016 at 6:51 PM, Matthew Williams
>  wrote:
> > Yeah - the mistake we made was starting with pure intentions but over
> time
> > starting to think of it as just another part of core. I'll have to
> discuss
> > it with Casey when he's up
> >
> > On Thu, May 19, 2016 at 9:47 AM, David Cheney <
> david.che...@canonical.com>
> > wrote:
> >>
> >> I think that would be the best solution, I don't see how we can undo
> >> the dependencies between cmd/juju and romulus -- they're so tightly
> >> coupled they should probably live in the same repository.
> >>
> >> On Thu, May 19, 2016 at 6:45 PM, Matthew Williams
> >>  wrote:
> >> > Really sorry about this Dave, I'd not realised just how much they
> relied
> >> > on
> >> > each other. Surely there's an argument for romulus being merged into
> >> > core?
> >> >
> >> > On Thu, May 19, 2016 at 8:55 AM, David Cheney
> >> > 
> >> > wrote:
> >> >>
> >> >> On Thu, May 19, 2016 at 5:04 PM, roger peppe
> >> >> 
> >> >> wrote:
> >> >> > On 19 May 2016 at 07:02, David Cheney 
> >> >> > wrote:
> >> >> >> Hello,
> >> >> >>
> >> >> >> github.com/juju/juju/cmd/juju/commands:
> >> >> >>   github.com/juju/romulus/cmd/commands:
> >> >> >> github.com/juju/romulus/cmd/setplan: <
> >> >> >>   github.com/juju/juju/api/service:
> >> >> >>   github.com/juju/juju/cmd/modelcmd:
> >> >> >>
> >> >> >> cmd/juju depends on the romulus repository, and the romulus
> >> >> >> repository
> >> >> >> depends on juju.
> >> >> >>
> >> >> >> This is terrible. It means we _cannot_ change the public api of
> the
> >> >> >> juju that romulus depends on because then juju won't compile, and
> we
> >> >> >> cannot land the fix to romulus without breaking juju.
> >> >> >
> >> >> > I agree that this is unfortunate, but "cannot" is a strong word. I
> >> >> > believe
> >> >> > that there is a (somewhat painful) workaround for this - we've been
> >> >> > in
> >> >> > similar situations
> >> >> > before.
> >> >> >
> >> >> > Say you want to change the public API of juju in a backwardly
> >> >> > incompatible
> >> >> > way. Here's how you can do it.
> >> >> >
> >> >> > First change the API and fix romulus to work with the new API,
> >> >> > without
> >> >> > merging either change into their repos.
> >> >> >
> >> >> > Then push the romulus change to the romulus repo in a *feature
> >> >> > branch*
> >> >> > rather onto master. Tests will not pass in this branch because it
> >> >> > depends
> >> >> > on as-yet-to-be-landed changes in juju, but the code is now
> available
> >> >> > in
> >> >> > the romulus repo.
> >> >> >
> >> >> > Then propose the Juju changes with the feature-branch revision
> >> >> > of romulus as a dependency. Tests should pass OK because godeps
> >> >> > doesn't care which branch its dependencies are pulled from.
> >> >> >
> >> >> > Once that's landed, land the romulus changes in romulus master
> >> >> > depending on the just-landed changes in juju.
> >> >> >
> >> >> > Then update juju to use the latest romulus dependency.
> >> >>
> >> >> Or I could just land the commits directly. I guess it depends if we
> >> >> want to play the CI game or not. My point is creating loops like this
> >> >> means we have to reach for even more creative measures to mitigate
> >> >> them.
> >> >>
> >> >> To be clear, this is a big mistake, it's fine for juju to depend on a
> >> >> project, we currently depend on 72 projects. What is not ok is for
> >> >> that project to then depend back on juju, that is poor software
> >> >> engineering.
> >> >>
> >> >> > As for the cyclic dependency itself, perhaps there's an argument
> for
> >> >> > moving the main juju command into a separate repo (or everything
> >> >> > *but*
> >> >> > the juju main command into a separate repo) so that it's possible
> >> >> > to include externally implemented commands without creating a
> cycle.
> >> >>
> >> >> I'd very much like to see this. It's clear that the juju command is
> >> >> going to have to serve multiple masters, and by breaking it off into
> a
> >> >> separate project this would force us (juju) to create a supported
> >> >> public API which we currently do not have.
> >> >>
> >> >> >
> >> >> >   cheers,
> >> >> > rog.
> >> >> >
> >> >> >> Casey, please fix this immediately. Either juju depends on
> romulus,
> >> >> >> or
> >> >> >> romulus depends on juju, but at the moment they both depend on
> each
> >> >> >> other and that is a showstopper,
> >> >> >>
> >> >> >> Thanks
> >> >> >>
> >> >> >> Dave
> >> >> >>

Re: Awful dependency problem caused by romulus

2016-05-19 Thread Casey Marshall
On Wed, May 18, 2016 at 11:02 PM, David Cheney 
wrote:

> Hello,
>
> github.com/juju/juju/cmd/juju/commands:
>   github.com/juju/romulus/cmd/commands:
> github.com/juju/romulus/cmd/setplan: <
>   github.com/juju/juju/api/service:
>   github.com/juju/juju/cmd/modelcmd:
>
> cmd/juju depends on the romulus repository, and the romulus repository
> depends on juju.
>
> This is terrible. It means we _cannot_ change the public api of the
> juju that romulus depends on because then juju won't compile, and we
> cannot land the fix to romulus without breaking juju.
>

Why do you want to introduce breaking changes in the API?


>
> Casey, please fix this immediately. Either juju depends on romulus, or
> romulus depends on juju, but at the moment they both depend on each
> other and that is a showstopper,
>
> Thanks
>
> Dave
>


Re: Connecting to the controller database w/mongodb 3.2

2016-05-13 Thread Casey Marshall
On Fri, May 13, 2016 at 10:52 AM, Horacio Duran <horacio.du...@canonical.com
> wrote:

> By connect I assume you mean to use the shell, for this you need the
> mongodb-org PPA and install mongodb-org-shell. I am currently on the phone
> but as soon as I get to a computer I'll send you links
>
>
Well, for use case #1, yes, use the shell.

For use case #2, I'd be using github.com/dcu/mongodb_exporter, which uses
mgo.v2 and just needs a mongodb URI:
https://github.com/dcu/mongodb_exporter/blob/master/mongodb_exporter.go#L26
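
For use case #1, the recipe that worked for me before the 3.2 move was
roughly this (a sketch; tag and statepassword come from the machine agent's
agent.conf on the controller, and 37017 is the controller's mongod port):

$ cd /var/lib/juju/agents/machine-0
$ user=$(sudo awk '/^tag:/ {print $2}' agent.conf)
$ pass=$(sudo awk '/^statepassword:/ {print $2}' agent.conf)
$ mongo --ssl --sslAllowInvalidCertificates \
    --authenticationDatabase admin -u "$user" -p "$pass" \
    localhost:37017/juju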


> On Friday, 13 May 2016, Casey Marshall <casey.marsh...@canonical.com>
> wrote:
>
>> I seem to be unable to connect to the Juju 2.0 controller database
>> lately. I'm thinking this might be related to the move to mongodb 3.2.
>>
>> Can someone in the know please share how to do this? While most users
>> should never, ever connect directly to the controller's database, I have
>> two good use cases for it:
>>
>> 1. It's sometimes necessary for debugging the state of a live controller.
>> 2. I'd like to instrument a controller's mongodb with prometheus.io, but
>> in order to do this, I need to derive the new connection info.
>>
>> Much thanks,
>> Casey
>>
>


Connecting to the controller database w/mongodb 3.2

2016-05-13 Thread Casey Marshall
I seem to be unable to connect to the Juju 2.0 controller database lately.
I'm thinking this might be related to the move to mongodb 3.2.

Can someone in the know please share how to do this? While most users
should never, ever connect directly to the controller's database, I have
two good use cases for it:

1. It's sometimes necessary for debugging the state of a live controller.
2. I'd like to instrument a controller's mongodb with prometheus.io, but in
order to do this, I need to derive the new connection info.

Much thanks,
Casey


Re: Reminder: write tests fail first

2016-05-04 Thread Casey Marshall
An excellent demonstration of writing tests that fail first that I've seen
recently: https://www.youtube.com/watch?v=PEbnzuMZceA

On Wed, May 4, 2016 at 9:24 PM, Andrew Wilkins  wrote:

> See: https://bugs.launchpad.net/juju-core/+bug/1578456
>
> Cheers,
> Andrew
>


Re: LXD v2.0.0-rc8 does not work with Juju v2.0-beta3

2016-04-06 Thread Casey Marshall
On Wed, Apr 6, 2016 at 2:51 PM, Alexis Bruemmer <
alexis.bruem...@canonical.com> wrote:

>
> Hi All,
>
> As recently highlighted in bug https://bugs.launchpad.net/bugs/1566589 the
> latest LXD will not work with Juju 2.0-beta3.  This is a result of LXD
> moving to use a default bridge of lxdbr0 and Juju expecting lxcbr0.  Thanks
> to the heads up and help from the LXD team there is a fix for this in Juju
> master that will be available in the release next week.  However, until
> then Juju 2.0-beta3 will not work with the latest LXD (v2.0.0-rc8).
>

If you `dpkg-reconfigure lxd` and name the bridge "lxcbr0", does this work
for beta3? I've been able to bootstrap with latest LXD and current Juju
master (beta4) by configuring LXD this way.
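
i.e. (a sketch; the medium-priority reconfigure is what re-asks the bridge
questions):

$ sudo dpkg-reconfigure -p medium lxd
# When prompted for the bridge name, enter lxcbr0 instead of the default
# lxdbr0, then bootstrap as usual.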


>
> Alexis
>
> --
> Alexis Bruemmer
> Juju Core Manager, Canonical Ltd.
> (503) 686-5018
> alexis.bruem...@canonical.com
>


Re: utils/fslock needs to DIAF

2015-11-30 Thread Casey Marshall
How about github.com/camlistore/lock ?
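
Its API is tiny -- roughly this (a sketch based on its README; the path is
illustrative and error handling is elided):

import "camlistore.org/lock"

// Lock returns an io.Closer; closing it releases the lock, and the
// kernel drops it anyway if the process dies first.
closer, err := lock.Lock("/var/lib/juju/locks/machine.lock")
if err != nil {
    // some other process holds the lock
}
defer closer.Close()

It takes a kernel lock where available, so the lock dies with the process,
with portable fallbacks elsewhere -- which seems to line up with the
requirements below.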

On Mon, Nov 30, 2015 at 5:43 PM, Tim Penhey 
wrote:

> Hi folks,
>
> The fslock was a mistake that I added to the codebase some time back. It
> provided an overly simplistic solution to a more complex problem.
>
> Really the filesystem shouldn't be used as a locking mechanism.
>
> Most of the code that exists for the fslock now is working around its
> deficiencies. Instead we should be looking for a better replacement.
>
> Some "features" that were added to fslock were added to work around the
> issue that the lock did not die with the process that created it, so
> some mechanism was needed to determine whether the lock should be broken
> or not.
>
> What we really need is a good OS agnostic abstraction that provides the
> ability to create a "named" lock, acquire the lock, release the lock,
> and make sure that the lock dies when the process dies, so another
> process that is waiting can acquire the lock. This way no "BreakLock"
> functionality is required, nor do we need to try and do things like
> remember which process owns the lock.
>
> So...
>
> We have three current operating systems we need to support:
>
> Linux - Ubuntu and CentOS
> MacOS - client only - but the CLI uses a lock for the local cache
> Windows
>
> For Linux, and possibly MacOS, flock is a possibility, but can we do
> better? Is there something that is better suited?
>
> For Windows, while you can create global semaphores or mutex instances,
> I'm not sure of entities that die with the process. Can people recommend
> solutions?
>
> Cheers,
> Tim
>


Graduated Juju core reviewer: Aleš Stimec

2015-10-07 Thread Casey Marshall
All,
I'm pleased to announce Aleš Stimec is now a graduated Juju core reviewer.
His recent contributions and improvements to the Juju unit agent,
command-line infrastructure and API login clearly demonstrate a depth and
breadth of Juju core knowledge befitting this role.

Welcome Aleš, and well done!

-Casey


Fix for LP: #1174610 landing (unit ids should be unique)

2015-04-27 Thread Casey Marshall
Just a friendly heads-up... a fix for this longstanding bug will be
landing in master shortly:

LP: #1174610, unit ids should be unique

What this fix essentially does is assign each deployed workload a
distinct unit ID (incrementing sequentially) within the scope of an
environment. Example:

1. juju deploy mysql
   Deployed workload gets an ID, mysql/0

2. juju destroy-service mysql

3. juju deploy mysql
   Deployed workload gets a distinct ID, mysql/1

I do not anticipate any negative impact to normal Juju usage from this
bugfix, but I'd like to raise awareness here proactively, in the event
that there is potential breakage in scripts that invoke juju with
hard-coded assumptions on how Juju assigns unit IDs -- instead of
retrieving actual assigned unit IDs from status.
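
For example, rather than assuming mysql/0, a script can look the assigned
unit IDs up (a sketch; jq is just one way to do it):

$ juju status --format=json | jq -r '.services.mysql.units | keys[]'
mysql/1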

-Casey





Graduated reviewer: Domas Monkus

2015-04-23 Thread Casey Marshall
Juju developers,
I would like to announce Domas Monkus is a fully graduated Juju core
reviewer. This announcement is really long overdue.. Domas is careful
and thoughtful in his reviews, his feedback is useful, actionable and
relevant, and he's landed several significant improvements that
demonstrate a deep understanding of Juju core design and internals.

So let's give praise, thanks (and reviews) to Domas, for a job well done!

-Casey





Re: Proposal: feature flag implementation for Juju

2014-11-26 Thread Casey Marshall
+1 for feature flags in general and +1 for using environment variables
in upstart to get them to the servers and agents.

I think it'd be nice to have an environment variable per flag, with a
common prefix JUJU_FEATURE_. That way, if you need to check one in a
package init(), you don't have to parse the whole list of flags to find
the one you care about -- or depend on a globally initialized parsing of
that list.
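
Something like this at the point of use (a sketch of the suggestion, not the
proposed API; the flag name is illustrative):

package frobber

import "os"

// Set JUJU_FEATURE_FROBBER=1 in the upstart config to enable the feature.
// No list parsing, and nothing global to initialize before init() runs.
var frobberEnabled = os.Getenv("JUJU_FEATURE_FROBBER") != ""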

env config seems reasonable, but dealing with parsing, errors and then
making that config globally available at init seems complex and not
always feasible.

How about defining them free form in an env config field, which is
then used to emit the env vars as described above to the upstart config
during bootstrap?

-Casey

On 11/25/2014 10:16 PM, Ian Booth wrote:
 I like feature flags so am +1 to the overall proposal. I also agree with the
 approach to keep them immutable, given the stated goals and complexity
 associated with making them not so.
 
 I think the env variable implementation is ok too - this keeps everything very
 loosely coupled and avoids polluting a juju environment with an extra config
 attribute.
 
 On 26/11/14 08:47, Tim Penhey wrote:
 Hi all,

 There are often times when we want to hook up features for testing that
 we don't want exposed to the general user community.

 In the past we have hooked things up in master, then when the release
 branch is made, we have had to go and change things there.  This is a
 terrible way to do it.

 Here is my proposal:

 http://reviews.vapour.ws/r/531/diff/#

 We have an environment variable called JUJU_FEATURE_FLAGS. It contains
 comma-delimited strings that are used as flags.

 The value is read when the program initializes and is not mutable.

 Simple checks can be used in the code:

 if featureflag.Enabled("foo") {
   // do foo like things
 }

 Thoughts and suggestions appreciated, but I don't want to have the
 bike-shedding go on too long.

 Tim

 





Re: Reminder: juju-core github migration

2014-06-02 Thread Casey Marshall
On 06/02/2014 11:57 AM, Martin Packman wrote:
 On 02/06/2014, roger peppe rogpe...@gmail.com wrote:

 What is the policy around rebasing before commenting with $$merge$$ ?
 Does this need to be done? If not, how does the merge procedure
 decide what commit message gets attached to the final merge?
 (Or are we going to leave one commit message for every
 stage in the review?)
 
 Generally, try to rebase to tidiness before proposing, and not
 afterwards. That may mean multiple commits where review comments need
 addressing, we may need to discuss as a team how we feel about that.
 The commit message(s) come from the revisions in the branch proposed,
 so those do want to be nicely formatted before doing the pull request.
 

I agree. Rebases rewrite history. You shouldn't be allowed to rewrite
history once it becomes a shared history -- including pull requests.
Only use rebase for your own private branches, to track upstream.
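
e.g. this is fine right up until the branch is pushed for review (a sketch;
remote names are illustrative):

$ git fetch upstream
$ git rebase upstream/master my-feature  # rewrites only local, unshared commits

Once the branch is proposed, push follow-up commits instead of rewriting.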

I think the landing bot should squash merge the proposed branch on
approval. There's value in keeping a full record of the changes from the
review.

 Has anyone put together a bzr-git cheatsheet for those of us still
 suffering from "git: *how* many ways to do it?!" overload?
 
 I have a few migration tips to send to the list, but for general
 workflow conversion we probably want to try it out and share the pain
 and learning together.
 
 Martin
 

-Casey


