Re: lxd and constraints

2017-01-13 Thread Nate Finch
I just feel like we're entering a minefield that our application and CLI
aren't really built to handle.  I think we *should* handle it, but it needs
to be well planned out, instead of just doing a tiny piece at a time and
only figuring out later if we did the right thing.

There are a few problems I can see:

1.) You can have 10 lxd containers, each with a memory limit of 2GB, on a
machine with 4GB of RAM.  Deploying 10 applications to those containers that
each have a constraint of mem=2GB will not work as you expect.  We could add
extra bookkeeping for this, and warn you that you appear to be
oversubscribing the host, but that's more work.

2.) What happens if you try to deploy a container without a memory limit on
a host that already has a container on it?

For example:
4GB host
2GB lxd container
try to deploy a new service in a container on this machine.
Do we warn?  We have no clue how much RAM the service will use.  Maybe
it'll be fine, maybe it won't.

3.) Our CLI doesn't really work well with constraints on containers:

juju deploy mysql --constraints mem=2G --to lxd

Does this deploy a machine with 2GB of ram and a container with a 2GB ram
limit on it?  Or does it deploy a machine with 2GB of ram and a container
with no limit on it?  It has to be one or the other, and currently we have
no way of indicating which we want to do, and no way to do the other one
without using multiple commands.

This is a more likely use case, creating a bigger machine that can hold
multiple containers:
juju add-machine --constraints mem=4GB
// adds machine, let's say 5
// create a container on machine 5 with 2GB memory limit
juju deploy mysql --constraints mem=2GB --to lxd:5

At least in this case, the deploy command is clear; there's only one thing
it can possibly mean.  Usually, the placement directive would override
the constraint, but in this case, it does what you would want... but it is
a little weird that --to lxd:5 uses the constraint, while --to 5 ignores it.

Note that you can't just write a simple script to do the above, because the
machine number is variable, so you have to parse our output and then use
that for the next command.  It's still scriptable, obviously, but it's a more
complicated script than just two lines of bash.
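
For what it's worth, the script you end up writing looks something like this
(a rough sketch; it assumes add-machine reports "created machine N" on
stderr, which may not hold across juju versions):

  # sketch only -- the add-machine output format is an assumption here
  MACHINE=$(juju add-machine --constraints mem=4G 2>&1 | awk '/created machine/ {print $NF}')
  juju deploy mysql --constraints mem=2G --to lxd:"$MACHINE"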

Also note that using this second method, you can't deploy more than one
unit at a time, unless you want multiple units on containers on the same
machine (which I think would be pretty odd).



On Fri, Jan 13, 2017 at 3:48 AM Rick Harding <rick.hard...@canonical.com>
wrote:

In the end, you say you want an instance with 2GB of RAM, and if the cloud
has an instance type with exactly that, it is in fact an exact limit.  The
key thing here is that clouds don't have infinitely malleable instance types
the way containers do (this works for kvm and for lxd).  So I'm not sure the
mismatch is as far apart as it seems.  root-disk means give me a disk this
big; if you ask for 2 cores, then as long as you can match an instance type
with 2 cores, that's exactly the max you get.

It seems part of this could be adjusting the language from "minimum" to
something closer to "requested X", where a request reads more as "I want X"
without the min/max boundaries.



On Fri, Jan 13, 2017 at 3:14 AM John Meinel <j...@arbash-meinel.com> wrote:

So we could make it so that constraints are actually 'exactly' for LXD,
which would then conform to both minimum and maximum, and would still be
actually useful for people deploying to containers. We could certainly
probe the host machine and say "you asked for 48 cores, and the host
machine doesn't have it".
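
A probe along those lines could be as simple as the following (illustrative
sketch only; where juju would surface the warning is a separate question):

  # what "probe the host" might amount to at its simplest
  nproc                                          # cores actually on the host
  free -m | awk '/^Mem:/ {print $2 " MB of RAM on the host"}'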

However, note that explicit placement also takes precedence over
constraints anyway. If you do:
  juju deploy mysql --constraints mem=4G
today, and then do:
 juju add-unit mysql --to 2
We don't apply the constraint limitations to that specific unit. Arguably
we should at *least* be warning that the constraints for the overall
application don't appear to be valid for this instance.

I guess I'd rather see constraints still set limits for containers, because
people really want that functionality, and that we warn any time you do a
direct placement where the constraints aren't satisfied (but warning, not
failing the attempt).

John
=:->

On Fri, Jan 13, 2017 at 10:09 AM, Stuart Bishop <stuart.bis...@canonical.com
> wrote:

On 13 January 2017 at 02:20, Nate Finch <nate.fi...@canonical.com> wrote:

I'm implementing constraints for lxd containers and provider... and
stumbled on an impedance mismatch that I don't know how to handle.



I'm not really sure how to resolve this problem.  Maybe it's not a
problem.  Maybe constraints just have a different meaning for containers?
You have to specify the machine number you're deploying to for any
deployment past the first anyway, so you're already manually choosing the
machine, at which point, constraints don't really make sense anyway.


I don't think Juju can handle this. Either constraints h

Re: lxd and constraints

2017-01-12 Thread Nate Finch
Merlijn:
I definitely agree that having the same term mean different things on
different platforms is a really bad idea.  I don't think we can change the
concept of constraints as minimums at this point, but maybe a new concept
of limits (to match lxd terminology) could be added.  Limits really only
make sense for containers, so we would either have to error out, or warn
the user and ignore it if you specified a limit when deploying a base
machine.

*Mike:*
The problem with trying to figure out how much "unused" RAM a host has is
that it gets thrown out the window if you ever deploy any unit to the host
machine, or if you deploy a unit in a container without a RAM constraint.
Those units may then use as much RAM as they want.


On Thu, Jan 12, 2017 at 3:22 PM Mike Pontillo <mike.ponti...@canonical.com>
wrote:

On Thu, Jan 12, 2017 at 11:20 AM, Nate Finch <nate.fi...@canonical.com>
wrote:

I'm implementing constraints for lxd containers and provider... and
stumbled on an impedance mismatch that I don't know how to handle.

It seems as though lxd limits (what in juju we would call constraints) are
implemented as maximums, not minimums.  For containers sharing a host, this
makes sense.  If you say "don't use more than 2 gigs of ram" then you know
the max that container will use, and thus how much leftover you can expect
the host to have for other containers.


Is the second part (how much leftover you can expect the host to have for
other containers) captured somewhere? Because it seems to me that the
important question Juju needs to be asking is, "how over-provisioned is the
host I'm about to deploy on?", so that containers can be intelligently
load-balanced across the infrastructure.

Assuming Juju has full control over the hosts it is deploying containers
onto[2], I think one thing to do might be to allow the admin to specify
ratios (maybe separate for each of CPU, RAM, disk) to indicate how
over-provisioned hosts and containers are allowed to be.

Let's take as an example a host with 16 GB of RAM, where you want to deploy
16 containers with a constraint of "at least 1G of RAM". There could be two
relevant over-provisioning ratios: one to specify how over-provisioned a
container hypervisor can be, and the other to specify how much more RAM than
the constraint the container should be allowed to use. This idea is perhaps a
little naive; I'm not sure where one would specify these values.

If that sounds confusing, maybe it's easier to look at an example[1]:

Host RAM  Min RAM          Host over-prov.  Container over-prov.  Allowed     Limit per
(GB)      constraint (GB)  ratio            ratio                 containers  container (GB)
16        2                 1.00             1.00                   8          2
16        1                 1.00             1.00                  16          1
16        1                 2.00             1.00                  32          1
16        1                 1.00             2.00                  16          2
16        1                 2.00             2.00                  32          2
16        1                 2.00             4.00                  32          4
16        1                16.00             4.00                 256          4
16        1                16.00             1.00                 256          1
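
The arithmetic behind those rows is just two multiplications; a throwaway
sketch of it (values taken from the fourth row above):

  # allowed containers  = host_ram * host_ratio / ram_constraint
  # limit per container = ram_constraint * container_ratio
  awk 'BEGIN { host=16; constraint=1; host_ratio=1.0; cont_ratio=2.0;
       printf "allowed containers: %d\n", (host * host_ratio) / constraint;
       printf "limit per container: %.0f GB\n", constraint * cont_ratio }'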


Regards,
Mike

[1]:
https://docs.google.com/spreadsheets/d/1j6-98nB5AA_viHK9nF42MPpbZf35wv2N2gDi8DrhepU/view


[2]: (so that the numbers aren't thrown off by other juju deployments, or
non-juju deployments to the same hypervisor)
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


lxd and constraints

2017-01-12 Thread Nate Finch
I'm implementing constraints for lxd containers and provider... and
stumbled on an impedance mismatch that I don't know how to handle.

It seems as though lxd limits (what in juju we would call constraints) are
implemented as maximums, not minimums.  For containers sharing a host, this
makes sense.  If you say "don't use more than 2 gigs of ram" then you know
the max that container will use, and thus how much leftover you can expect
the host to have for other containers.

However, in Juju's case, we expect constraints to be minimums, so that we
know the unit on that machine will have enough RAM to function (e.g. "give
me a machine with at least 2GB of RAM, since I know my application won't
work well with less").

This impedance mismatch is tricky to untangle.  With a naive implementation
of Juju constraints for lxd as a straight-up setting of lxd limits, you can
add an lxd container and specify a memory constraint that is higher than the
host machine's memory, and lxd will happily let you do that, because it knows
that container won't exceed that amount of memory (by definition it cannot).
But it means that juju will then let you deploy a unit that needs more memory
than the container has access to.

Note that this is also the case for setting cores.  On my local lxd
environment I can juju add-machine --constraints cores=48 and the container
will be created just fine.
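
For reference, the naive mapping boils down to something like this at the LXD
level (a sketch using the lxc CLI directly; the container name and values are
only examples):

  # limits.* are ceilings, not reservations -- lxd accepts values well above
  # what the host actually has
  lxc launch ubuntu:16.04 example-container
  lxc config set example-container limits.memory 2GB
  lxc config set example-container limits.cpu 48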

I'm not really sure how to resolve this problem.  Maybe it's not a
problem.  Maybe constraints just have a different meaning for containers?
You have to specify the machine number you're deploying to for any
deployment past the first anyway, so you're already manually choosing the
machine, at which point, constraints don't really make sense anyway.

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: using a bundle with manually added machines

2017-01-03 Thread Nate Finch
FWIW, I think this *could* work, but it may be that we just haven't coded
it up.   Just as *juju deploy mysql -n 4* will use existing clean machines
in the model where appropriate, deploying a bundle should as well
(presuming the existing machines match constraints etc as usual).

On Tue, Jan 3, 2017 at 9:08 AM Rick Harding 
wrote:

> Hi Vance, you can deploy a bundle that uses existing machines in the model
> with the juju-deployer [1] tool.
>
> The built in juju deploy method is the first stage in a generic tool and
> does not allow pointing at existing machines because it means the bundles
> are not sharable and that you need your model in a specific state before
> you can use the bundle. Since Juju doesn't allow you to rename machines it
> means that you have to keep the bundle in sync with the model.
>
> 1: https://pypi.python.org/pypi/juju-deployer/
>
> On Thu, Dec 29, 2016 at 9:09 AM Vance Morris  wrote:
>
>
> Hi all,
>
> Is it possible to use a bundle.yaml in conjunction with manually added
> machines?
>
> $ juju version
> 2.0.2-xenial-s390x
>
> $ juju status
> << snip >>
> Machine  StateDNS  Inst id  Series  AZ
> 0started  REDACTED  manual:REDACTED   xenial
> 1started  REDACTED  manual:REDACTED   xenial
> 2started  REDACTED  manual:REDACTED   xenial
> << snip >>
>
> When I go to deploy a bundle that defines machines 0, 1, and 2 and
> describes deployment of services to these machines, the charms are
> deployed, but no units are created.
>
> ERROR cannot deploy bundle: cannot create machine for holding aodh,
> ceilometer, ceph-mon, ceph-osd, cinder, glance, keystone, mongodb, mysql,
> neutron-api, neutron-gateway, nova-cloud-controller, nova-compute,
> openstack-dashboard, rabbitmq-server, swift-proxy and swift-storage-z1
> units: cannot add a new machine: use "juju add-machine ssh:[user@]<host>"
> to provision machines
>
> If I manually deploy charms to the manually added machines, it's happy to
> oblige.
>
>
> VANCE MORRIS
> --
> Phone:  1-720-349-9450
> E-mail: vmor...@us.ibm.com
> 1177 S Belt Line Rd
> Coppell, TX 75019-4642
> United States
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Representative tests failing

2016-12-14 Thread Nate Finch
Seems like there's a likely out of disk space problem with the LXD and
windows machines that run representative tests.  For example:

http://juju-ci.vapour.ws/job/github-check-merge-juju/441/
http://juju-ci.vapour.ws/job/github-check-merge-juju/442/

Sounds like this happens often enough to be a problem.

Is this something we can automate by resetting to a snapshot or something
along those lines?
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: One instance manual provider

2016-12-05 Thread Nate Finch
Except you can't expose anything deployed to lxd to the network, right?

On Mon, Dec 5, 2016, 5:49 PM Marco Ceppi <marco.ce...@canonical.com> wrote:

> A one machine manual provider? Might as well just `ssh <host>; juju
> bootstrap lxd`.
>
> On Mon, Dec 5, 2016 at 10:09 AM Nate Finch <nate.fi...@canonical.com>
> wrote:
>
> There should be no reason you can't deploy to the controller machine using
> manual just like any other cloud.
>
> juju bootstrap manual/x.x.x.x mycloud
> juju switch controller
> juju deploy <charm> --to 0
>
> Switching to the controller model is probably what you were missing, since
> the default model comes with no machines.
>
> On Mon, Dec 5, 2016 at 9:27 AM Rick Harding <rick.hard...@canonical.com>
> wrote:
>
> I'll have to test it out but I would think that you could
>
> 1) bring up a machine, create a container on it, bootstrap to that
> container as the controller, create another container, and then add-machine
> it as a second machine and things should work ok.
>
> 2) I wonder if you can bootstrap to a machine, manually add container on
> that machine, and then add that container with add-machine.
>
> I'm guessing there's some bits about making sure the added containers have
> the ssh key you want to use for the ssh connection for add-machine.
>
> On Mon, Dec 5, 2016 at 3:18 PM Matthew Williams <
> matthew.willi...@canonical.com> wrote:
>
> Hey Folks,
>
> I notice the docs state that at least two instances are needed for the
> manual provider: https://jujucharms.com/docs/stable/clouds-manual. Some
> quick playing around suggests that this is indeed the case.
>
> Is there a technical reason why? I'd love to spin up a charm on [insert
> vps provider here] and only spend money for one instance
>
> Matty
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Hide resources from showing on the store

2016-12-02 Thread Nate Finch
I'm with Rick... the expected usage is to publish a dummy resource with the
charm.  Then, when you deploy, you deploy the real resource from your local
machine at deploy time.  If you forget, you can always add the resource
after deploy, as well.  The charm needs to take into consideration that
someone may forget to include the real resource at deploy time.

Yes, someone could download the dummy resource from the charmstore... I'm
not sure why this matters, though?  It should be pretty obvious right away
that it's not the real resource, and the README should also ensure to
clarify this.

-Nate

On Fri, Dec 2, 2016 at 9:08 AM Gabriel Samfira <
gsamf...@cloudbasesolutions.com> wrote:

> Say you have a charm that deploys a proprietary piece of software. That
> software has an EULA that prohibits anyone from redistributing it. The user
> is essentially forced to go to the software vendor website, login using
> their account, buy the software, and accept the EULA before being able to
> use the software.
>
> In this scenario, we cannot upload that piece of software as a resource
> into the charm store, and set that as the "default" resource for the charm.
> The license for that software does not allow us to. Most users that consume
> such software are aware of this, and it is no surprise to them that they
> have to "own" those resources before they can deploy them.
>
> There are 2 options here:
>
> 1) Canonical is given permission to redistribute the piece of software in
> question (Sharepoint, Exchange, MS SQL ISOs, etc), and validates that the
> user deploying the charm has the right to deploy that software.
> 2) Allow a charm to be published without resources, and validate during
> deployment if the resources are available or not. The charm can handle the
> missing resources and print a nice comprehensive status message. If the
> charm also includes a decent README on how to deploy the service, it should
> be enough to smooth the process. In this scenario, the responsibility of
> making sure all license terms are satisfied falls onto the user.
>
> It is unfortunate that licensing gets in the way of simplicity, but it is
> an issue that we need to keep in mind.
>
> Cheers,
> Gabriel
>
> On Vi, 2016-12-02 at 13:33 +, Rick Harding wrote:
>
> On Fri, Dec 2, 2016 at 5:49 AM Konstantinos Tsakalozos <
> kos.tsakalo...@canonical.com> wrote:
>
> This works but has a side effect, the dummy binaries are available for
> downloading from the charm's page. Is there a way to hide the resources
> from the charm's readme page?
>
>
> I'm confused why the user cannot know there's a default resource there?
>
> In an ideal world, the author would provide a default resource that at
> least helps guide the user to how to use the charm if they forget or use
> the wrong type of local resource. We want to make sure the user is guided
> on how to get things working as smoothly as possible.
>
> The other side is charm testing. The charm should have enough of what it
> needs in the default resource to perform functional tests to pass via tools
> such as Matrix.
>
> I'm curious how the user even knowing a thing is there is bad for the
> user. At this time we don't have any pattern to assist this.
>
> --
> Juju mailing list
> j...@lists.ubuntu.com
> Modify settings or unsubscribe at: 
> https://lists.ubuntu.com/mailman/listinfo/juju
>
> --
> Regards,
> Gabriel Adrian Samfira
> Cloudbase Solutions SRL
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Application names starting with digits

2016-12-02 Thread Nate Finch
I generally assume that "hard to type" doesn't apply when you're talking
about someone's native language.  Yes, you or I would have trouble
typing 數據庫 (database), but to someone in China, that's probably a word they
type all the time.  Forcing people to use an English translation for the
name of their software is not very welcoming in this day and age.

On Fri, Dec 2, 2016 at 10:46 AM Marco Ceppi <marco.ce...@canonical.com>
wrote:

> On Fri, Dec 2, 2016 at 10:31 AM Nate Finch <nate.fi...@canonical.com>
> wrote:
>
> One thing we *could* do to support non-english names that would not
> entirely open the door to emoji etc is to simply constrain the names to
> unicode letters and numbers.  Thus you could name something 數據庫 but
> not, say, an emoji.
>
>
> I bothered Rick about this a while ago (half jokingly) since I own
> http://☁.ws <http://xn--l3h.ws> (poo cloud!) and was going to make a charm
> accompanying that name. Localized unicode characters - emoji or otherwise -
> are still a difficult UX compared to alphanumerics. It takes me 10 mins to
> find the emojis to type the damn domain in if I'm not on a phone.
>
> The only path for unicode names I could see happening, and it's a stretch,
> is if the application name can be set to a larger range of characters, where
> you may want to name a horizon deployed in your environment something
> localized (or emoji) while the charm name stays flat and simple.
>
> On Fri, Dec 2, 2016 at 9:29 AM Mark Shuttleworth <m...@ubuntu.com> wrote:
>
> On 02/12/16 09:23, Adam Collard wrote:
>
> True, but we could do normalisation in the charm store to prevent
> malicious names. I think it's an important aspect of software in the modern
> world that it can support the wide array of languages that we humans use.
>
>
> This just transfers the definition of 'OK' to a different codebase.
>
> It's much better to have a simple rule that can be well documented,
> enforced the same way in store and client and snapd, and typed on any
> laptop without having to refer to the internet for assistance.
>
>
> Mark
>
> --
>
>
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Application names starting with digits

2016-12-02 Thread Nate Finch
One thing we *could* do to support non-english names that would not
entirely open the door to emoji etc is to simply constrain the names to
unicode letters and numbers.  Thus you could name something 數據庫 but
not, say, an emoji.
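
For concreteness, a rough illustration of that rule (a sketch assuming GNU
grep with PCRE support and a UTF-8 locale; the exact pattern juju would use
is not settled here):

  # unicode letters and digits only -- CJK passes, punctuation/symbols do not
  echo '數據庫' | grep -qP '^[\p{L}\p{N}]+$' && echo accepted || echo rejected
  echo 'db_1!'  | grep -qP '^[\p{L}\p{N}]+$' && echo accepted || echo rejected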

On Fri, Dec 2, 2016 at 9:29 AM Mark Shuttleworth  wrote:

> On 02/12/16 09:23, Adam Collard wrote:
>
> True, but we could do normalisation in the charm store to prevent
> malicious names. I think it's an important aspect of software in the modern
> world that it can support the wide array of languages that we humans use.
>
>
> This just transfers the definition of 'OK' to a different codebase.
>
> It's much better to have a simple rule that can be well documented,
> enforced the same way in store and client and snapd, and typed on any
> laptop without having to refer to the internet for assistance.
>
>
> Mark
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Application names starting with digits

2016-12-02 Thread Nate Finch
There's no technical reason for the restriction, AFAIK.  I believe it's
just aesthetic.

On Fri, Dec 2, 2016, 5:50 AM James Page  wrote:

> Hi All
>
> Is there a specific rationale for application names being limited to not
> starting with a digit?  I get why they can't end with one but I don't see
> why juju should place the same restriction on the start of an app name?
> Example (where we tripped over this):
>
>juju deploy ubuntu 6wind-virtual-accelerator
>
> (and ubuntu just to test the name out of a proposed charm where we had
> testing problems due to this limitation)
>
> Thoughts?
>
> James
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: A (Very) Minimal Charm

2016-12-01 Thread Nate Finch
So, the whole point of my original email was to say - if you want a minimal
charm, just make one, it's easy.
I just published my above mentioned minimal charm as cs:~natefinch/nop

It's not showing up on jujucharms.com, maybe because charm proof is failing
because it's missing everything except the metadata?  I don't know, but it
still works with juju deploy.

On Thu, Dec 1, 2016 at 10:07 PM José Antonio Rey <j...@ubuntu.com> wrote:

> Wouldn't it be possible for us to implement a configuration flag, and
> have it as the default? Going back to the general point, the idea behind the
> ubuntu charm is to have a vanilla Ubuntu where you can work on
> anything. I understand we're mostly using it for testing, and reactive
> is now a big part of the ecosystem. I find two simple approaches for this:
>
>   * Create a ubuntu-vanilla charm which doesn't install any of the
> required packages OR
>   * Implement a 'vanilla' boolean configuration flag, where you can
> specify True for the vanilla Ubuntu install, False to install all of
> the reactive/other dependencies.
>
> If we get to work around the pyyaml issue and implement a better
> solution in the long term, I think that would be amazing. However, we
> can't let one dependency drag us down, and in the meanwhile, we have to
> implement a workaround.
>
> On 12/01/2016 01:56 PM, Cory Johns wrote:
> > Marco,
> >
> > What is the issue you mentioned with using snaps where you mentioned
> > needing an "unconfined classic snap"?
> >
> > On Thu, Dec 1, 2016 at 1:13 PM, Marco Ceppi <marco.ce...@canonical.com
> > <mailto:marco.ce...@canonical.com>> wrote:
> >
> > On Thu, Dec 1, 2016 at 12:56 PM Casey Marshall
> > <casey.marsh...@canonical.com <mailto:casey.marsh...@canonical.com>>
> > wrote:
> >
> > On Thu, Dec 1, 2016 at 6:53 AM, Marco Ceppi
> > <marco.ce...@canonical.com <mailto:marco.ce...@canonical.com>>
> > wrote:
> >
> >     On Thu, Dec 1, 2016 at 5:00 AM Adam Collard
> > <adam.coll...@canonical.com
> > <mailto:adam.coll...@canonical.com>> wrote:
> >
> > On Thu, 1 Dec 2016 at 04:02 Nate Finch
> > <nate.fi...@canonical.com
> > <mailto:nate.fi...@canonical.com>> wrote:
> >
> > On IRC, someone was lamenting the fact that the
> > Ubuntu charm takes longer to deploy now, because it
> > has been updated to exercise more of Juju's
> > features.  My response was - just make a minimal
> > charm, it's easy.  And then of course, I had to
> > figure out how minimal you can get.  Here it is:
> >
> > It's just a directory with a metadata.yaml in it
> > with these contents:
> >
> > name: min
> > summary: nope
> > description: nope
> > series:
> >   - xenial
> >
> > (obviously you can set the series to whatever you
> want)
> > No other files or directories are needed.
> >
> >
> > This is neat, but doesn't detract from the bloat in the
> > ubuntu charm.
> >
> >
> > I'm happy to work though changes to the Ubuntu charm to
> > decrease "bloat".
> >
> >
> > IMHO the bloat in the ubuntu charm isn't from support
> > for Juju features, but the switch to reactive plus
> > conflicts in layer-base wanting to a) support lots of
> > toolchains to allow layers above it to be slimmer and b)
> > be a suitable base for "just deploy me" ubuntu.
> >
> >
> > But it is to support the reactive framework, where we
> > utilize newer Juju features, like status and
> > application-version to make the charm rich despite its
> > minimal goal set. Honestly, a handful of cached wheelhouses
> > and some apt packages don't strike me as bloat, but I do
> > want to make sure the Ubuntu charm works for those using it.
> So,
> >
> >
> > I think a minimal wheelhouse to provide a consistent charm hook
> > runtime is very reasonable and definitely not the problem here.
> >
> > There are too many packages that ge


A (Very) Minimal Charm

2016-11-30 Thread Nate Finch
On IRC, someone was lamenting the fact that the Ubuntu charm takes longer
to deploy now, because it has been updated to exercise more of Juju's
features.  My response was - just make a minimal charm, it's easy.  And
then of course, I had to figure out how minimal you can get.  Here it is:

It's just a directory with a metadata.yaml in it with these contents:

name: min
summary: nope
description: nope
series:
  - xenial

(obviously you can set the series to whatever you want)
No other files or directories are needed.
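
A quick sanity check, assuming a juju 2.x client that accepts local charm
directories (the series is read from metadata.yaml):

  # deploy straight from the directory; no packaging step needed
  juju deploy ./min
  juju status min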

Figured this might be useful for others.

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Controllers running out of disk space

2016-11-17 Thread Nate Finch
Resources are also stored in mongo and can be unlimited in size (not much
different than fat charms, except that at least they're only pulled down on
demand).

We should let admins configure their max log size... our defaults may not
be what they like, but I bet that's not really the issue, since we cap them
smallish (the logs stored in mongo are capped, right?)

Do we still store all logs from all machines on the controller?  Are they
capped?  That has been a problem in the past with large models.

On Thu, Nov 17, 2016 at 8:53 AM Rick Harding 
wrote:

> I'm definitely agreeing we need to provide some better tools for the admin
> of the controller to track and garden things such as charms and resources
> which can be quite large and grow over time.
>
> My main point with Uros was that, thanks to model migrations, we already have
> a way to stick a controller/model into a quiet mode so that a migration can
> take place. It feels like a natural fit for a bit of a maintenance mode as
> operators manage large scale/long running Juju controllers over time. I
> wanted to look into the feasibility of allowing the operator to engage the
> quiet mode while they manage disk or other issues.
>
> The other question is, is this a temp issue? When you migrate models to
> another controller only the latest charm/resource revision goes with it.
> Maybe there's a place for a migration as part of good hygiene. It feels a
> bit forceful, but actually might be a safe practice.
>
> On Thu, Nov 17, 2016 at 4:13 AM John Meinel 
> wrote:
>
> So logs in mongo and logs on disk should be capped, and purged when they
> get above a certain size. 'audit.log' should never be automatically purged.
> Charms in the blobstore are potentially local data that we can't reproduce,
> so hard to automatically purge them. I think there has been some work done
> to properly reference count them, so we at least know if anything is
> currently referencing them. And we could purge things that are from a
> charmstore, since we know it is available elsewhere.
>
> I'm fine with a "pause" sort of mode so that you have an opportunity to
> move things out, and some sort of manually triggered garbage collection.
> But if we do know that some garbage collection is safe, we should probably
> just do it by default.
>
> John
> =:->
>
>
> On Thu, Nov 17, 2016 at 12:20 PM, Uros Jovanovic <
> uros.jovano...@canonical.com> wrote:
>
> Hi all,
>
> I'd like to start a discussion on how to handle storage issues on
> controller machines, especially what we can do when storage is getting 95%
> or even 98% full. There are many processes that are storing data, we have
> at least:
> - charms and resources in the blobstore
> - logs in mongo
> - logs on disk
> - audit.log
>
> Even if we put all models in the controller into read-only mode to prevent
> upload of new charms or resources to blobstore, we still have to deal with
> logs which can fill the space quickly.
>
> While discussing about this with Rick, given the work on model migrations,
> maybe the path forward would be to allow admin to put the whole controller
> in "maintenance" mode and put all agents on "hold".
>
> How to deal with storage issue after that is open to discussion, maybe we
> need tools to "clear" blobstore, or export/compress/truncate logs, etc.
>
> Thoughts?
>
>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: "unfairness" of juju/mutex

2016-11-16 Thread Nate Finch
Just for historical reference.  The original implementation of the new OS
mutex used flock until Dave mentioned that it presented problems with file
management (files getting renamed, deleted, etc).

In general, I'm definitely on the side of using flock, though I don't think
that necessarily solves our starvation problem; it depends on how flock is
implemented and the specific behavior of our units.
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Dependencies added for new jsonschema support

2016-11-01 Thread Nate Finch
Thanks, Katherine, I had intended to mention in the email that this was a
request for review by the tech board :)

On Tue, Nov 1, 2016, 5:14 PM Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

> Nate Finch <nate.fi...@canonical.com> writes:
>
> > In support of my PR that adds an interactive mode for add-cloud, we
> needed to add
> > jsonschema for the cloud configurations that can be inspected at runtime
> to generate the
> > interactive queries.
>
> Thanks for the great write-up, Nate. I've added it to the tech board
> agenda[1] and we should review tonight.
>
> --
> Katherine
>
> [1] -
> https://docs.google.com/document/d/13nmOm6ojX5UUNtwfrkqr1cR6eC5XDPtnhN5H6pFLfxo/edit
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Dependencies added for new jsonschema support

2016-11-01 Thread Nate Finch
In support of my PR that adds an
interactive mode for add-cloud, we needed to add jsonschema for the cloud
configurations that can be inspected at runtime to generate the interactive
queries.

Unfortunately, the jsonschema package we're currently using (gojsonschema)
does not expose the structure of the schema, so it's not able to be
inspected by internal code (
https://godoc.org/github.com/xeipuuv/gojsonschema#Schema).

In looking for a new jsonschema package, I got recommended
github.com/lestrrat/go-jsschema which is actively maintained, and does
expose the schema.

I have made a wrapper package for go-jsschema at github.com/juju/jsonschema
which adds some extra fields to the schema to support our particular use
case, but delegates most of the heavy lifting to the underlying package.

Currently, importing go-jsschema imports 5 other repos from the same author
in support of jsonschema, and one package by a third party (dave's
github.com/pkg/errors).  I am asking about making a PR to remove one of
those dependencies, but it still leaves us with 6 new dependencies (5 of
which are from the same author).

Here's the additions:
github.com/lestrrat/go-jsschema
github.com/lestrrat/go-jspointer
github.com/lestrrat/go-jsref
github.com/lestrrat/go-jsval
github.com/lestrrat/go-pdebug
github.com/lestrrat/go-structinfo
github.com/pkg/errors

The pdebug one is the one I can easily remove with a PR.  I'd like to
remove the structinfo one, but that might be less likely (would require
copying a few dozen lines of code, and I doubt the guy would be amenable...
but I can ask).

However, this should be able to be used as a more or less drop-in
replacement for the existing forks of the gojsonschema package which we're
maintaining, which would remove 3 dependencies (albeit ones more or less
maintained by us).  In theory, we could also remove environschema at some
point and replace it with jsonschema, but that's a bigger job.

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Long tests

2016-10-27 Thread Nate Finch
I ran the tests serially today to try to get a picture of which tests
actually take a long time, without worrying about contention.  The full run
took almost 45 minutes (normally it takes around 15 minutes, testing packages
in parallel).
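
Roughly how the serial run and sorted timings were produced (a sketch; -p 1
disables package-level parallelism, and the gocheck -check.v flag is only
understood by packages that register it):

  go test -p 1 ./... -check.v 2>&1 | tee serial-run.log
  grep '^PASS:' serial-run.log | awk '{print $NF, $0}' | sort -rn | head -20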

Here's the full output (sorry for the google drive link, pastebin got mad
at the length):
https://drive.google.com/file/d/0B-r4AW1RoHJNR3plVjhwSVM1Z0E/view

Unfortunately, the state tests panicked after a 10 minute timeout, but
there's still some good info in there.

Here's some especially bad low hanging fruit (note these are the times for
single tests, not full suites):

PASS: showoutput_test.go:59: ShowOutputSuite.TestRun 18.007s
PASS: upgradejuju_test.go:307: UpgradeJujuSuite.TestUpgradeJuju 10.722s
PASS: status_test.go:3292: StatusSuite.TestStatusAllFormats 12.255s
PASS: unit_test.go:217: UnitSuite.TestUpgradeFailsWithoutTools 10.200s
PASS: syslog_test.go:220: syslogSuite.TestConfigChange 31.715s
PASS: syslog_test.go:193: syslogSuite.TestLogRecordForwarded 31.731s
PASS: bakerystorage_test.go:72: BakeryStorageSuite.TestExpiryTime 58.037s
PASS: local_test.go:547:
localServerSuite.TestStopInstanceSecurityGroupNotDeleted 29.046s
PASS: kvm-broker_test.go:329:
kvmProvisionerSuite.TestContainerStartedAndStopped 10.098s
PASS: worker_test.go:69:
statusHistoryPrunerSuite.TestWorkerWontCallPruneBeforeFiringTimer 10.000s
PASS: uniter_test.go:1190: UniterSuite.TestActionEvents 39.242s
PASS: uniter_test.go:977: UniterSuite.TestUniterRelations 14.132s

There's about 112 tests that are faster than these, but still take more
than a second to run, shown here: http://pastebin.ubuntu.com/23391234/

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: What are the best practices for stop hook handling?

2016-10-19 Thread Nate Finch
1. The stop hook happens when the unit is being removed entirely.  It does
not run on reboot (and there's no reboot hook).  The docs on the start hook
mention this: "Note that the charm's software should be configured so as to
persist through reboots without further intervention on juju's part."  The
stop hook should clean up everything to the best of its ability, to make
the machine appear as it did before the unit was added to it (so, uninstall
the software, remove all config files, etc).

2. Don't colocate units if at all possible.  In separate containers on the
same machine, sure.  But there's absolutely no guarantee that colocated
units won't conflict with each other. What you're asking about is the very
problem colocation causes. If both units try to take over the same port, or
a common service, or write to the same file on disk, etc... the results
will very likely be bad.  Stop hooks should clean up everything they
started.  Yes, this may break other units that are colocated, but the
alternative is leaving machines in a bad state when they're not colocated.

3. Many charms don't do this (in fact, there's an email about this on our
internal mailing list right now). They absolutely should.   Many charms get
away with not doing cleanup because Juju's main use case is containers and
throw-away VMs that are discarded after the unit is removed... but there
are many cases where this does not happen, such as using the Manual
provider or colocated units.  Please write cleanup code (a bare-bones sketch
is below).
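
A bare-bones hooks/stop along the lines of points 1 and 3 might look like this
(sketch only; the service, package, and paths are made-up placeholders):

  #!/bin/sh
  # hooks/stop -- undo what the install/config hooks set up
  set -e
  status-set maintenance "cleaning up" || true
  service exampled stop || true             # stop the workload
  apt-get remove --purge -y example-pkg || true
  rm -rf /etc/example                       # drop config the charm wrote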


On Wed, Oct 19, 2016 at 10:17 AM Rye Terrell 
wrote:

> I have a number of questions regarding how to handle stop hooks properly:
>
> 1. Background services - stop them or stop & disable them?
>
> The docs say "stop runs immediately before the end of the unit's
> destruction sequence. It should be used to ensure that the charm's software
> is not running, and will not start again on reboot."
>
> Can anyone verify that that is correct? If so, it seems clear that
> services should be stopped & disabled, but leaves me with another question
> - is there no hook that handles scenarios like host rebooting?
>
> If it's not correct, what is the proper behavior for the stop hook
> handler? Stop & disable on stop hook and start & enable on start hook?
>
> 2. Background services - how do we handle colocated applications with
> shared background services?
>
> I'm not sure this is something we support, but if so, what do we do when
> one application is stopped and it has a colocated application that shares a
> background service dependency? I don't think this is something we can
> detect at the charm level, so do we _not_ stop services so that we don't
> cause conflicts?
>
> 3. File cleanup - is anyone doing this?
>
> The docs also say "Remove any files/configuration created during the
> service lifecycle" is part of a charm's stop hook handling behavior. My
> experience isn't exactly vast, but I'm unaware of charms doing this. Is
> this something we actually do? Should we keep that statement in the docs?
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Big memory usage improvements

2016-10-17 Thread Nate Finch
FYI, this tool looks promising.  Probably worth checking out to see if
it'll help us track down those memory leaks.
https://stackimpact.com/blog/memory-leak-detection-in-production-go-applications/

On Thu, Oct 13, 2016 at 8:02 AM Christian Muirhead <
christian.muirh...@canonical.com> wrote:

> Oops, meant to reply-all.
>
> -- Forwarded message -
> From: Christian Muirhead 
> Date: Thu, 13 Oct 2016, 09:26
> Subject: Re: Big memory usage improvements
> To: Katherine Cox-Buday 
>
>
>
>
> On Wed, 12 Oct 2016, 22:36 Katherine Cox-Buday, <
> katherine.cox-bu...@canonical.com> wrote:
>
> Awesome, good work, Christian!
>
>
> :)
> Thanks for your help with the StatePool!
>
>
> Not to detract from this fantastic news, but it's still worrisome that an
> idle Juju seems to have memory usage which is growing linearly (the before picture
> looked like the beginning of an exponential curve?). Do we feel this is
> memory which will at some point be GCed?
>
>
> No, I think there's still a leak there. The other ones I fixed were
> goroutine leaks which were a bit easier to track down. There aren't any
> goroutines escaping now so I'll need to switch back to looking at heap
> dumps to find more.
>
> Cheers,
> Christian
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: List plugins installed?

2016-09-29 Thread Nate Finch
Seems like the easiest thing to do is to have a designated plugin directory
and have `juju install <plugin>` copy the binary/script there.  Then
we're only running plugins the user has specifically asked to install.
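
Purely as a hypothetical sketch of that UX (neither the subcommand nor the
directory exists today):

  juju install ~/bin/juju-wait   # hypothetical: copy the plugin into a dedicated plugins dir
  juju list-plugins              # hypothetical: list only what was explicitly installed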

On Thu, Sep 29, 2016, 4:33 AM Stuart Bishop 
wrote:

> On 28 September 2016 at 22:45, roger peppe 
> wrote:
>
>> On 28 September 2016 at 14:55, Rick Harding 
>> wrote:
>> > This is just a miss. The original ability to see the plugins was a
>> subset of
>> > the help command and didn't make our CLI spreadsheet for things to
>> rework. I
>> > agree that list-plugins is the right idea here and that means that
>> plugins
>> > becomes a noun in our language.
>> >
>> > What's interesting is that add/remove fall out because that
>> > installing/uninstalling. I think that show-plugin might be interesting
>> to
>> > auto run the --description flag to bring it into CLI alignment with the
>> new
>> > world order.
>>
>> I've voiced discomfort with this before - I don't think that we should
>> arbitrarily run all executables that happen to have a "juju-" prefix.
>> It's potentially dangerous (for example, note that although git relies
>> heavily
>> on plugins, it doesn't execute a plugin until you explicitly name it).
>>
>> Perhaps there could be a standard way for a plugin to provide
>> metadata about itself as a data file.
>>
>>
> It also might be time to work out how a Juju snap is going to call or
> install plugins. I don't think the existing design is going to work, and
> there is still time to flag it as deprecated in the changelogs for 2.0 and
> work out the way forward for 2.1.
>
>
> --
> Stuart Bishop 
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Reviews on Github

2016-09-20 Thread Nate Finch
Personally, I really enjoy having everything in the same place, more than I
expected. It's also one less barrier to entry for newcomers and outsiders.

On Tue, Sep 20, 2016, 5:54 PM Menno Smits <menno.sm...@canonical.com> wrote:

> (gah, hit send too early)
>
> ... If we decide to stay with RB, that will need to be fixed.
>
> On 21 September 2016 at 09:53, Menno Smits <menno.sm...@canonical.com>
> wrote:
>
>> Some of us probably got a little excited (me included). There should be
>> discussion and a clear announcement before we make a signigicant change to
>> our process. The tech board meeting is today/tonight so we'll discuss it
>> there as per Rick's email. Please contribute to this thread if you haven't
>> already and have strong opinions either way on the topic.
>>
>> Interestingly our Github/RB integration seems to have broken a little
>> since Github made these changes. The links to Reviewboard on pull requests
>> aren't getting inserted any more. If we decide to stay with RB
>>
>> On 21 September 2016 at 05:54, Rick Harding <rick.hard...@canonical.com>
>> wrote:
>>
>>> I spoke with Alexis today about this and it's on her list to check with
>>> her folks on this. The tech board has been tasked with the decision, so
>>> please feel free to shoot a copy of your opinions their way. As you say, on
>>> the one hand it's a big impact on the team, but it's also a standard
>>> developer practice that not everyone will agree with so I'm sure the tech
>>> board is a good solution to limiting the amount of bike-shedding and to
>>> have some multi-mind consensus.
>>>
>>> On Tue, Sep 20, 2016 at 1:52 PM Katherine Cox-Buday <
>>> katherine.cox-bu...@canonical.com> wrote:
>>>
>>>> Seems like a good thing to do would be to ensure the tech board doesn't
>>>> have any objections and then put it to a vote since it's more a property of
>>>> the team and not the codebase.
>>>>
>>>> I just want some consistency until a decision is made. E.g. "we will be
>>>> trying out GitHub reviews for the next two weeks; all reviews should be
>>>> done on there".
>>>>
>>>> --
>>>> Katherine
>>>>
>>>> Nate Finch <nate.fi...@canonical.com> writes:
>>>>
>>>> > Can we try reviews on github for a couple weeks? Seems like we'll
>>>> > never know if it's sufficient if we don't try it. And there's no setup
>>>> > cost, which is nice.
>>>> >
>>>> > On Tue, Sep 20, 2016 at 12:44 PM Katherine Cox-Buday
>>>> > <katherine.cox-bu...@canonical.com> wrote:
>>>> >
>>>> > I see quite a few PRs that are being reviewed in GitHub and not
>>>> > ReviewBoard. I really don't care where we do them, but can we
>>>> > please pick a direction and move forward? And until then, can we
>>>> > stick to our previous decision and use RB? With people using both
>>>> > it's much more difficult to tell what's been reviewed and what
>>>> > hasn't.
>>>> >
>>>> > --
>>>> > Katherine
>>>> >
>>>> > Nate Finch <nate.fi...@canonical.com> writes:
>>>> >
>>>> > > In case you missed it, Github rolled out a new review process.
>>>> > It
>>>> > > basically works just like reviewboard does, where you start a
>>>> > review,
>>>> > > batch up comments, then post the review as a whole, so you don't
>>>> > just
>>>> > > write a bunch of disconnected comments (and get one email per
>>>> > review,
>>>> > > not per comment). The only features reviewboard has is the edge
>>>> > case
>>>> > > stuff that we rarely use: like using rbt to post a review from a
>>>> > > random diff that is not connected directly to a github PR. I
>>>> > think
>>>> > > that is easy enough to give up in order to get the benefit of
>>>> > not
>>>> > > needing an entirely separate system to handle reviews.
>>>> > >
>>>> > > I made a little test review on one PR here, and the UX was
>>>> > almost
>>>> > > exactly like working in reviewboard:
>>>> > > https://github.com/juju/juju/pull/6234
>>>> > >
>>>> > > There may be important edge cases I'm missing, but I think it's
>>>> > worth
>>>> > > looking into.
>>>> > >
>>>> > > -Nate
>>>>
>>>> --
>>>> Juju-dev mailing list
>>>> Juju-dev@lists.ubuntu.com
>>>> Modify settings or unsubscribe at:
>>>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>>>
>>>
>>> --
>>> Juju-dev mailing list
>>> Juju-dev@lists.ubuntu.com
>>> Modify settings or unsubscribe at:
>>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>>
>>>
>>
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Reviews on Github

2016-09-20 Thread Nate Finch
Can we try reviews on github for a couple weeks?  Seems like we'll never
know if it's sufficient if we don't try it.  And there's no setup cost,
which is nice.

On Tue, Sep 20, 2016 at 12:44 PM Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

> I see quite a few PRs that are being reviewed in GitHub and not
> ReviewBoard. I really don't care where we do them, but can we please pick a
> direction and move forward? And until then, can we stick to our previous
> decision and use RB? With people using both it's much more difficult to
> tell what's been reviewed and what hasn't.
>
> --
> Katherine
>
> Nate Finch <nate.fi...@canonical.com> writes:
>
> > In case you missed it, Github rolled out a new review process. It
> > basically works just like reviewboard does, where you start a review,
> > batch up comments, then post the review as a whole, so you don't just
> > write a bunch of disconnected comments (and get one email per review,
> > not per comment). The only features reviewboard has is the edge case
> > stuff that we rarely use: like using rbt to post a review from a
> > random diff that is not connected directly to a github PR. I think
> > that is easy enough to give up in order to get the benefit of not
> > needing an entirely separate system to handle reviews.
> >
> > I made a little test review on one PR here, and the UX was almost
> > exactly like working in reviewboard:
> > https://github.com/juju/juju/pull/6234
> >
> > There may be important edge cases I'm missing, but I think it's worth
> > looking into.
> >
> > -Nate
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Reviews on Github

2016-09-15 Thread Nate Finch
Reviewboard goes down a couple times a month, usually from lack of disk
space or some other BS.  According to a source knowledgeable with these
matters, the charm was rushed out, and the agent for that machine is down
anyway, so we're kinda just waiting for the other shoe to drop.

As for the process things that Ian mentioned, most of those can be
addressed with a sprinkling of convention.  Marking things as issues could
just be adding :x: to the first line (github even pops up suggestions and
auto-completes), thusly:

[image: :x:] This will cause a race condition

And if you want to indicate you're dropping a suggestion, you can use :-1:
 which gives you a thumbs down:

[image: :-1:] I ran the race detector and it's fine.

It won't give you the cumulative "what's left to fix" at the top of the
page, like reviewboard... but for me, I never directly read that, anyway,
just used it to see if there were zero or non-zero comments left.

As for the inline comments in the code - there's a checkbox to hide them
all.  It's not quite as convenient as the gutter indicators per-comment,
but it's sufficient, I think.

On Wed, Sep 14, 2016 at 6:43 PM Ian Booth <ian.bo...@canonical.com> wrote:

>
>
> On 15/09/16 08:22, Rick Harding wrote:
> > I think that the issue is that someone has to maintain the RB and the
> > cost/time spent on that does not seem commensurate with the bonus
> features
> > in my experience.
> >
>
> The maintenance is not that great. We have SSO using github credentials so
> there's really no day to day work IIANM. As a team, we do many, many
> reviews
> per day, and the features I outlined are significant and things I (and I
> assume
> others) rely on. Don't under estimate the value in knowing why a comment
> was
> rejected and being able to properly track that. Or having review comments
> collapsed as a gutter indicator so you can browse the code and still know
> that
> there's a comment there. With github, you can hide the comments but
> there's no
> gutter indicator. All these things add up to a lot.
>
>
> > On Wed, Sep 14, 2016 at 6:13 PM Ian Booth <ian.bo...@canonical.com>
> wrote:
> >
> >> One thing review board does better is use gutter indicators so as not to
> >> interrupt the flow of reading the code with huge comment blocks. It also
> >> seems
> >> much better at allowing previous commits with comments to be viewed in
> >> their
> >> entirety. And it allows the reviewer to differentiate between issues and
> >> comments (ie fix this vs take note of this), plus it allows the notion
> of
> >> marking stuff as fixed vs dropped, with a reason for dropping if needed.
> >> So the
> >> github improvements are nice but there's still a large and significant
> gap
> >> that
> >> is yet to be filled. I for one would miss all the features reviewboard
> >> offers.
> >> Unless there's a way of doing the same thing in github that I'm not
> aware
> >> of.
> >>
> >> On 15/09/16 07:22, Tim Penhey wrote:
> >>> I'm +1 if we can remove the extra tools and we don't get email per
> >> comment.
> >>>
> >>> On 15/09/16 08:03, Nate Finch wrote:
> >>>> In case you missed it, Github rolled out a new review process.  It
> >>>> basically works just like reviewboard does, where you start a review,
> >>>> batch up comments, then post the review as a whole, so you don't just
> >>>> write a bunch of disconnected comments (and get one email per review,
> >>>> not per comment).  The only features reviewboard has is the edge case
> >>>> stuff that we rarely use:  like using rbt to post a review from a
> random
> >>>> diff that is not connected directly to a github PR. I think that is
> easy
> >>>> enough to give up in order to get the benefit of not needing an
> entirely
> >>>> separate system to handle reviews.
> >>>>
> >>>> I made a little test review on one PR here, and the UX was almost
> >>>> exactly like working in reviewboard:
> >> https://github.com/juju/juju/pull/6234
> >>>>
> >>>> There may be important edge cases I'm missing, but I think it's worth
> >>>> looking into.
> >>>>
> >>>> -Nate
> >>>>
> >>>>
> >>>
> >>
> >> --
> >> Juju-dev mailing list
> >> Juju-dev@lists.ubuntu.com
> >> Modify settings or unsubscribe at:
> >> https://lists.ubuntu.com/mailman/listinfo/juju-dev
> >>
> >
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Reviews on Github

2016-09-14 Thread Nate Finch
In case you missed it, Github rolled out a new review process.  It
basically works just like reviewboard does, where you start a review, batch
up comments, then post the review as a whole, so you don't just write a
bunch of disconnected comments (and get one email per review, not per
comment).  The only features reviewboard has is the edge case stuff that we
rarely use:  like using rbt to post a review from a random diff that is not
connected directly to a github PR. I think that is easy enough to give up
in order to get the benefit of not needing an entirely separate system to
handle reviews.

I made a little test review on one PR here, and the UX was almost exactly
like working in reviewboard: https://github.com/juju/juju/pull/6234

There may be important edge cases I'm missing, but I think it's worth
looking into.

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Latest news about Juju master branch - upload-tools obsoleted

2016-09-07 Thread Nate Finch
Just a note, because it wasn't clear to me - there are a couple cases where
the automatic upload tools won't do what you want, if you use a version of
juju you built locally.

If you're not a developer or someone who builds juju from source, it's safe
to ignore this email.

1. If the version of juju you're building is available in streams and you
*want* to upload, you have to use --build-agent, because by default we
won't upload.
This happens if you're purposely building an old release, or if QA just
released a new build and you haven't pulled yet and/or master hasn't been
updated with a new version number.  --build-agent works like the old
upload-tools, except it always rebuilds jujud, even if there's an existing
jujud binary.

2. If you're building a version of the code that is not available in
streams, and you *don't* want to upload, you must use
--agent-version=<version>.
This can happen if you want to deploy a known-good server version, but only
have a dev client around.  I use this to make sure I can reproduce bugs
before testing my fixes.  --agent-version works basically like the old
default (non-upload) behavior, except you have to explicitly specify a juju
version that exists in streams to deploy (e.g. --agent-version=2.0-beta17)
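
In other words (eliding the usual cloud/controller arguments, since only the
flags matter here):

juju bootstrap --build-agent ...                 // always rebuild and upload jujud
juju bootstrap --agent-version=2.0-beta17 ...    // never upload; use the agent published in streams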


Note that if you want to be *sure* that juju bootstrap always does what you
expect it to, IMO you should always use either --build-agent (to upload) or
--agent-version (to not upload), depending on your intent.  The behavior of
the bare juju bootstrap can change without warning (from uploading to not
uploading) if a new version of jujud is pushed to streams that matches what
you're building locally (which happens every time a new build is pushed to
streams, until master is updated and you git pull and rebuild), and that
can be really confusing if you are expecting your changes to be uploaded,
and they're not.  It also changes behavior if you switch from master (which
always uploads) to a release tag (which never uploads), which while
predictable, can be easy to forget.

Related note, beta18 (which is in current master) and later versions of the
client can't bootstrap with --agent-version 2.0-beta17 or earlier, due to a
breaking change (you'll get an error about mismatching UUIDs).  This type
of breakage is rare, and generally should only happen during betas (or
pre-beta), but it impacts us right now, so... yeah.

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: kill-controller, unregister, destroy-controller

2016-09-02 Thread Nate Finch
This is one of the reasons we always make you type out the controller name
for destroy controller.  Because

juju destroy-controller production-website

should ring a bell.  Simply adding a long flag doesn't really help you
remember if you're killing something important or not, because you always
have to type the same flag for every controller.

This is also why I just always use kill-controller... it's like
destroy-controller --destroy-all-models except you don't need the flag and
it's easier to type even disregarding the flag.  I'm sure I'm not the only
one that will figure this out and just avoid destroy-controller, which
defeats the purpose of trying to make it safer.

I agree with Roger we should be encouraging people to put blocks on
important controllers, and not rely on typing out long things that aren't
actually helpful.



On Fri, Sep 2, 2016 at 7:16 AM roger peppe 
wrote:

> It seems to me that this kind of thing is exactly what "blocks" are
> designed for. An explicit unblock command seems better to me than either an
> explicit flag or an extra prompt, both of which are vulnerable to typing
> without thinking. Particularly if "throwaway" controllers created for
> testing purposes are not blocked by default, so you don't get used to to
> typing "unblock" all the time.
>
> On 1 Sep 2016 16:14, "Mark Ramm-Christensen (Canonical.com)" <
> mark.ramm-christen...@canonical.com> wrote:
>
> I get the desire to remove friction everywhere we can, but unless
> destroying controllers is a regular activity, I actually think SOME
> friction is valuable here.
>
> If controllers are constantly getting into a wedged state where they must
> be killed, that's likely a product of a fast moving set of betas, and
> should be addressed *directly* rather than as they say "applying lipstick
> to a pig"
>
> On Thu, Sep 1, 2016 at 4:04 PM, Marco Ceppi 
> wrote:
>
>> On Thu, Sep 1, 2016 at 9:59 AM Mark Ramm-Christensen (Canonical.com) <
>> mark.ramm-christen...@canonical.com> wrote:
>>
>>> I believe keeping the --destroy-all-models flag is helpful in keeping
>>> you from accidentally destroying a controller that is hosting important
>>> models for someone without thinking.
>>>
>>
>> What happens if I destroy-controller without that flag? Do I have to go
>> into my cloud portal to kill those instances? Is there any way to recover
>> from that to get juju reconnected? If not, it's just a slower death.
>>
>>
>>> On Thu, Sep 1, 2016 at 3:40 PM, Marco Ceppi 
>>> wrote:
>>>
 Hey everyone,

 I know we've had discussions about this over the past few months, but
 it seems we have three commands that overlap pretty aggressively.

 Using Juju beta16, and trying to 'destroy' a controller it looks like
 this now:

 ```
 root@ubuntu:~# juju help destroy-controller
 Usage: juju destroy-controller [options] <controller name>

 ...

 Details:
 All models (initial model plus all workload/hosted) associated with the
 controller will first need to be destroyed, either in advance, or by
 specifying `--destroy-all-models`.

 Examples:
 juju destroy-controller --destroy-all-models mycontroller

 See also:
 kill-controller
 unregister
 ```

 When would you ever want to destroy-controller and not
 destroy-all-models? I have to specify that flag everytime, it seems it
 should just be the default behavior. Kill-controller seems to do what
 destroy-controller --destroy-all-models does but more aggressively?

 Finally, unregister and destroy-controller (without
 --destroy-all-models) does the same thing. Can we consider dropping the -
 very long winded almost always required - flag for destroy-controller?

 Finally, there used to be a pretty good amount of feedback during
 destroy-controller, while it was rolling text, I at least knew what was
 happening. Now it's virtually silent. Given it runs for quite a long time,
 can we get some form of feedback to the user back into the command?

 ```
 root@ubuntu:~# juju destroy-controller --destroy-all-models cabs
 WARNING! This command will destroy the "cabs" controller.
 This includes all machines, applications, data and other resources.

 Continue? (y/N):y
 ERROR failed to destroy controller "cabs"

 If the controller is unusable, then you may run

 juju kill-controller

 to forcibly destroy the controller. Upon doing so, review
 your cloud provider console for any resources that need
 to be cleaned up.

 ERROR cannot connect to API: unable to connect to API: websocket.Dial
 wss://10.0.0.4:17070/api: dial tcp 10.0.0.4:17070: getsockopt: no
 route to host
 root@ubuntu:~# juju kill-controller cabs
 WARNING! This command will destroy the "cabs" controller.
 This includes all 

Re: 1.25.6-xenial-amd64 and lxc containers

2016-08-30 Thread Nate Finch
You don't need anything in the environment.yaml.  You specify that you want
to deploy something to a lxc container in the deploy command.  e.g.

juju deploy mysql --to lxc   // deploys mysql to a new lxc container on a
new machine
juju deploy wordpress --to lxc:25  // deploys wordpress to a new lxc
container on the already-existing machine #25
juju deploy haproxy --to 1/lxc/3  // deploys haproxy to lxc container #3 on
machine #1

you can also use add-machine to create empty containers:

juju add-machine lxc // adds a new machine with an empty lxc container on it
juju add-machine lxc:5 // adds a new empty lxc container on machine 5

Hope that helps.
-Nate

On Tue, Aug 30, 2016 at 4:16 PM Daniel Bidwell  wrote:

> What do I have to put in my .juju/environment.yaml to get juju-1.25.6-
> xenial-amd64 to use lxc containers?
> --
> Daniel Bidwell 
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Latest news about Juju master branch - upload-tools obsoleted

2016-08-15 Thread Nate Finch
Ian, can you describe how Juju decides if it's running for a developer or
an end user?  I'm worried this could trip people up who are both end users
and happen to have a juju development environment.

On Mon, Aug 15, 2016 at 11:29 AM Mark Shuttleworth  wrote:

> On 15/08/16 08:27, Ian Booth wrote:
> > So if you pull master you'll no longer need to use upload-tools.
> > Juju will Do the Right Thing*, when you type:
>
> Thanks Ian, this is a lovely simplification.
>
> Mark
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: So I wanted to update a dependency . .

2016-08-12 Thread Nate Finch
The main problem with putting code into a different directory is that if
there are files in that repo that reference the package itself, they'd need
to be updated.

So like for example, if flag_test.go imports "launchpad.net/gnuflag" to do
external tests, we'd need to edit that code to import "gnuflag" instead
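
In code terms the change in each affected file would just be:

// before: the import path mirrors the repo URL
import "launchpad.net/gnuflag"

// after: if the code were cloned straight into $GOPATH/src/gnuflag
import "gnuflag"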

Also, if there are any third party packages that use one of the same
dependencies we do, we'd need to edit those files, too...

On Fri, Aug 12, 2016 at 2:31 PM Nicholas Skaggs <
nicholas.ska...@canonical.com> wrote:

> On Fri, Aug 12, 2016 at 9:11 AM, Nate Finch <nate.fi...@canonical.com>
> wrote:
>
>> The problem really stems from two go-isms -
>>
>> 1.) The standard for Go projects is to have the path on disk
>> representative of the URL to the project.
>> 2.) Every file that uses a package has an import statement that
>> references the path on disk.
>>
>> If either one of these were not true, we could have avoided your problem.
>>
>> There's nothing we can do about #2.  Building with the go tool requires
>> that if you use a package in a file, you must have an import statement for
>> it, and if you import "foo/bar/gnuflag", that code must live in
>> $GOPATH/src/foo/bar/gnuflag".
>>
>> For #1, *in theory* we could change things.  Only "go get" knows that
>> $GOPATH/src/launchpad.net/gnuflag corresponds to
>> http://launchpad.net/gnuflag.  When you compile and build, all the go
>> tool cares about is the path on disk and what code is in that folder.  We
>> *could* detach the idea of an import path from the URL it corresponds
>> to.  We could git clone github.com/juju/gnuflag into
>> $GOPATH/src/gnuflag, and the go tool would build it just fine.  the import
>> statement would then be import "gnuflag" and this would work just fine.
>> We'd need to maintain a mapping file of repo to folder on disk, similar to
>> what we do now with dependencies.tsv, but it wouldn't be the end of the
>> world.
>>
>> I'm not entirely convinced this is a good idea, but it would make
>> updating dependencies a lot easier.
>>
> This is interesting, and more or less what I was after in my mail. I
> wasn't sure if there were any out of the box ideas that would make life
> easier, but I hoped to surface some for discussion. Thanks!
>
> I suppose my follow-up question is, why or why not is this a good idea?
> Having to change all of the imports is painful, but as you say, it is an
> intentional design decision by the Go Community. I don't have any real
> feeling or inkling on what other Go projects do, but I suspect at least
> some of them follow this idea of importing all dependencies and updating
> them only if needed. However, by nature of them simply cloning the source
> into the repo, I would also guess they are not able to easily update a
> dependency.
>
>>
>> The source code hack to mgo is due to this problem as well. In theory we
>> can just fork mgo and make our fix, but that would again require us to
>> change a lot of import statements.
>>
>> Part of the problem was that you were changing two very common
>> dependencies that were imported 4 levels deep (i.e. a imports b imports c
>> imports d, and they all import the dependency).  This wouldn't change no
>> matter what we do - you'd still need to start the changes at d and roll
>> them up the stack.  That would be true in any language.
>>
> Yes, I told of my pain, but as painful as it was, I got lots of help from
> many of you. Thank you again for helping land this.
>
>>
>> The "circular" dependency (in quotes because you can't have a true
>> circular dependency in go, but you can have two repos that both depend on
>> each other) shouldn't have landed that way.  Two repos should definitely
>> not depend on each other.  No other repo should ever depend on
>> github.com/juju/juju.  We explicitly make no claims for backwards
>> compatibility, so the thing you're depending on could just disappear one
>> day.  If you need something in there, factor it out into its own repo with
>> a backwards compatibility guarantee first.
>>
>> It *is* a problem that dependencies under juju can be imported in broken
>> states because of our use of dependencies.tsv.  There's nothing that says
>> we couldn't run all the unit tests for all packages juju uses in the
>> landing job for github.com/juju/juju (and any other repo we desire)...
>> just run go test ./... from $GOPATH/src and it'll test everything we have
>> downloaded to build juju with.  That's an easy win, and something we should
>> start doing ASAP in my opin

Re: So I wanted to update a dependency . .

2016-08-12 Thread Nate Finch
This is why we want to make a juju library, so we have a package that we
know we need to keep backwards compatibility with.  Sure, you can vendor or
pin the revision, but that doesn't help you when a new feature or fix lands
and you update, and everything breaks.  If we have a library we try to keep
backwards compatible, then these problems would be minimized.

On Fri, Aug 12, 2016 at 1:56 PM roger peppe <roger.pe...@canonical.com>
wrote:

> On 12 August 2016 at 14:11, Nate Finch <nate.fi...@canonical.com> wrote:
> > No other repo should ever depend on github.com/juju/juju.
>
> I have to call this out. There is no decent way to use Juju from Go without
> depending on github.com/juju/juju.
>
> And backward compatibility isn't insurmountable - dependencies can be
> locked or
> vendored. Juju itself uses many non-backwardly compatible
> dependencies, after all.
>
> I think we should be aiming to provide a nice Go API to Juju, whether that
> ends
> up in github.com/juju/juju or not.
>
>   cheers,
> rog.
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Point people to ubuntu-on-windows instead of the broken juju-on-windows?

2016-08-11 Thread Nate Finch
Just FYI - running the juju client on Windows is 100% supported and we
actively fix bugs on it all the time.   I hadn't seen the particular bug
you referenced about juju ssh, but that definitely seems like something we
should prioritize higher.  The linux ssh works so much better because we
can simply run the OS's ssh and everything Just Works, but that's not
possible on Windows, so it's much more involved to make it work (we
basically have to write an entire ssh client).

However, as a workaround, it would be pretty trivial to write a juju plugin
to launch putty with the right parameters.  Then you'd just do `juju putty
0` rather than juju ssh 0.  Obviously, it would open putty, not a CLI ssh
terminal, but the overall experience is probably a lot better than what we
could provide.
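
To give a feel for it, here's a rough sketch of such a plugin (purely
hypothetical, we don't ship this; the status JSON field name and the putty
invocation are assumptions, and error handling is minimal).  Drop it on your
PATH as juju-putty and `juju putty 0` would work:

package main

import (
    "encoding/json"
    "fmt"
    "os"
    "os/exec"
)

// Minimal view of `juju status --format=json`; only the field we need.
// The "dns-name" key is an assumption about the status schema.
type status struct {
    Machines map[string]struct {
        DNSName string `json:"dns-name"`
    } `json:"machines"`
}

func main() {
    if len(os.Args) < 2 {
        fmt.Fprintln(os.Stderr, "usage: juju putty <machine-id>")
        os.Exit(2)
    }
    out, err := exec.Command("juju", "status", "--format=json").Output()
    if err != nil {
        fmt.Fprintln(os.Stderr, "juju status failed:", err)
        os.Exit(1)
    }
    var st status
    if err := json.Unmarshal(out, &st); err != nil {
        fmt.Fprintln(os.Stderr, "cannot parse status:", err)
        os.Exit(1)
    }
    m, ok := st.Machines[os.Args[1]]
    if !ok || m.DNSName == "" {
        fmt.Fprintln(os.Stderr, "no address known for machine", os.Args[1])
        os.Exit(1)
    }
    // Assumes putty is on PATH and the default "ubuntu" user.
    if err := exec.Command("putty", "-ssh", "ubuntu@"+m.DNSName).Start(); err != nil {
        fmt.Fprintln(os.Stderr, "cannot launch putty:", err)
        os.Exit(1)
    }
}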

On Thu, Aug 11, 2016 at 11:41 AM Marco Ceppi 
wrote:

> On Thu, Aug 11, 2016 at 11:27 AM Merlijn Sebrechts <
> merlijn.sebrec...@gmail.com> wrote:
>
>> Hi all
>>
>>
>> Juju-on-windows is pretty broken and it doesn't seem that bugs are being
>> fixed anymore. Wouldn't it be better to point people to Juju on "Ubuntu on
>> windows" instead? One of my colleagues started using it and it seems to
>> work just fine.
>>
>> Some show-stopping bugs in Juju-on-windows:
>>
>>- `juju ssh` is not translating newlines correctly
>>.
>>- windows users can't create layered charms because the Charm tools
>>for windows is still at 1.2.9.
>>
>>
>> This is a doc mistake, I'll have an updated charm-tools msi before the
> end of the week in the docs.
>
> What do you guys think?
>>
>
> I think you've got a good point, however, not everyone is on Windows 10.
> Pointing people who are on Windows 10 to WSL instead would be fruitful for
> a good native experience.
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Let's talk retries

2016-08-09 Thread Nate Finch
So, in my opinion, juju/retry is pretty good.  I think with some tweaking
and some intelligent use, it can feel every bit as lightweight as we'd like.

If we have common use cases, we just need to write up an API to encapsulate
them.  If we want a "retry N times or until this channel is closed" then
write an api for that.  Having preconstructed call args with prepopulated
best practices for delays, backoffs, etc. can make it a lot easier.

There's no reason running a complex retry can't look like this:

// closure to encapsulate local variables
f := func() error { return foo.bar(a, b, c) }
err := retry.WithCancel(f, juju.StdRetry, doneCh)

Where StdRetry is the juju standard retry scheme (standard timeout, backoff
strategy, etc), and doneCh is the channel you expect to get closed to
cancel this action.

That's doable with the current API if we add cancellation (though it would
be nice if we extracted the function-to-run from the args struct, since it
makes it more obvious that the retry args can be reused for different
functions).
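
To illustrate the shape (this is not the juju/retry API, just the proposed
helper boiled down to a self-contained sketch, with a fixed delay standing in
for a real backoff strategy):

// WithCancel retries f up to attempts times, waiting delay between tries,
// and gives up early if done is closed.
func WithCancel(f func() error, delay time.Duration, attempts int, done <-chan struct{}) error {
    var err error
    for i := 0; i < attempts; i++ {
        if err = f(); err == nil {
            return nil
        }
        select {
        case <-done:
            return err
        case <-time.After(delay):
        }
    }
    return err
}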

I think goroutines should be controlled by the caller.  If the caller wants
to put retry.Foo() in a goroutine, it can... but I don't think it's useful
to have that be retry's responsibility, it complicates the package too much.

On Tue, Aug 9, 2016 at 11:26 AM Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

> Andrew's queue is certainly nice, but I agree with the points Roger is
> raising:
>
> 1. Go's idiom for coordinating concurrent operations are goroutines and
> channels.
> 2. Sticking to these idioms makes it much easier to compose parts into a
> whole (if only because the language strongly bends code that way).
>
> As a matter of opinion, I also dislike having to instantiate structs to
> encapsulate logic; I would rather write the logic inline, or in a closure
> -- possibly in a goroutine.
>
> William Reade  writes:
>
> > On Tue, Aug 9, 2016 at 10:17 AM, roger peppe
> >  wrote:
> >
> > BTW I'm not saying that a timer queue is never the correct answer.
> > In some
> > circumstances, it can be the exactly the right thing to use.
> >
> >
> > Yeah -- the context here is that katco has been looking at the
> > provisioner, and I think a timer queue is sensible there. Firing off
> > goroutines for every machine is not necessarily bad by any means, but
> > it *is* somewhat harder to get right in every circumstance (e.g. what
> > happens when a machine is removed while that other goroutine is
> > working on provisioning it?).
> >
> > There are certainly many places where a single retryable op *is* the
> > only important thing and there's no call for a control loop. I'm not
> > suggesting we should always use a queue; but I'd tend to encourage it
> > whenever we have a component dealing with many similar retryable
> > tasks. (I definitely should be using it in dependency.Engine, for
> > example.)
> >
> > Cheers
> > William
> >
>
> --
> Katherine
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


No help Topics in 2.0

2016-07-26 Thread Nate Finch
Juju 1.x had a ton of help in the CLI, right at your fingertips:

$ juju1 help topics
azure-provider  How to configure a Windows Azure provider
basics  Basic commands
commandsBasic help for all commands
constraints How to use commands with constraints
ec2-providerHow to configure an Amazon EC2 provider
global-options  Options common to all commands
glossaryGlossary of terms
hpcloud-providerHow to configure an HP Cloud provider
jujuWhat is Juju?
juju-systemsAbout Juju Environment Systems (JES)
local-provider  How to configure a local (LXC) provider
logging How Juju handles logging
maas-provider   How to configure a MAAS provider
openstack-provider  How to configure an OpenStack provider
placement   How to use placement directives
plugins Show Juju plugins
spaces  How to configure more complex networks using spaces
topics  Topic list
users   About users in Juju

Almost all of this has been removed in 2.0:

$ juju help topics
basics  Basic Help Summary
commandsBasic help for all commands
global-options  Options common to all commands
topics  Topic list

There are a lot of parts of the CLI that *need* that extra level of
documentation.  I'm all for dropping a link to the website at the bottom of
some help docs for further info, but dropping the help entirely seems like
a mistake.  Some things like constraints and placement are used in multiple
commands, but there's no way to know from the CLI how to entire valid
values for them.  Things like the providers are not explained anywhere in
the CLI now, as far  as I can tell.

The plaintext docs are the easiest thing for us to update and maintain...
they live right in our own repo and let us update them whenever the code
updates, and they're the easiest thing for users to get to... let's not
drop that on the floor.

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Need info for deploying Mariadb as xenial

2016-06-27 Thread Nate Finch
You can use --series xenial --force.  I don't know if there's anything in
the charm that might fail, though.
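
For example, something like this (untested, and the charm itself may still object):

juju deploy mariadb --series xenial --force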

On Mon, Jun 27, 2016, 7:40 AM Rajith P Venkata  wrote:

> Hi
>
> I am deploying mariadb5.5 on juju 2.0, I am getting error . Mariadb charm
> supports only trusty , is there a way we can have Mariadb charm deployed
> with series  xenial
>
> Rajith
>
> IBM AIX Certified, OCPCertified
> 
>
> Email: rajith...@in.ibm.com
>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Cleansing Mongo data

2016-06-24 Thread Nate Finch
It seems as though we should be cleansing all the keys since we never
know what queries we might want to make in the future.
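
For reference, the gist of the cleansing is to swap the characters Mongo
treats specially in keys for their full-width lookalikes.  The real code is in
the mongo/utils package Katherine linked below; this is just a sketch of the
idea:

// escapeKey makes a string safe to use as a Mongo document key by
// replacing "$" and "." with their full-width Unicode equivalents.
func escapeKey(key string) string {
    key = strings.Replace(key, "$", "\uff04", -1)  // full-width dollar sign
    return strings.Replace(key, ".", "\uff0e", -1) // full-width full stop
}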

On Fri, Jun 24, 2016 at 12:04 PM Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

>
> As I have only just discovered the need to cleanse mongo data, I can't say
> for sure, but it looks like we may have been cleansing things in the parts
> of Juju that need it. William may know more.
>
> If not, I imagine a small upgrade step would make short work of any
> problems.
>
> roger peppe  writes:
>
> > This is useful, thanks.
> >
> > Note that's it's not necessary to cleanse *all* keys that go into Mongo,
> > just the ones that might be used in queries.
> >
> > But one thought... what about keys that already contain full-width
> > dollar and dot?
> >
> >   cheers,
> > rog.
> >
> > On 23 June 2016 at 21:09, Katherine Cox-Buday
> >  wrote:
> >> Hey all,
> >>
> >> William gave me a good review and it came up that I wasn't cleansing
> >> some of
> >> the data being placed in Mongo. I wasn't aware this had to be done,
> >> and
> >> after talking to a few other folks it became apparent that maybe not
> >> many
> >> people know we should be doing this.
> >>
> >> At any rate, William also pointed me to some existing code which did
> >> this.
> >> I've pulled it out into the mongo/utils package for general
> >> consumption. The
> >> comments do a pretty good job of elucidating why this is necessary.
> >>
> >> https://github.com/juju/juju/blob/master/mongo/utils/data_cleansing.go
> >>
> >> -
> >> Katherine
> >>
> >> --
> >> Juju-dev mailing list
> >> Juju-dev@lists.ubuntu.com
> >> Modify settings or unsubscribe at:
> >> https://lists.ubuntu.com/mailman/listinfo/juju-dev
> >>
>
> --
> Katherine
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: fslock is dead, long live mutex

2016-06-21 Thread Nate Finch
Thanks for this, Tim!  Really happy to see that old fslock code go away,
and using much more reliable OS-level functionality.
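
For anyone who hasn't looked yet, usage per Tim's description below comes out
roughly like this (a sketch only -- check the godoc link for the exact types,
in particular which clock it wants):

releaser, err := mutex.Acquire(mutex.Spec{
    Name:  "machine-lock",           // shared between processes on this machine
    Clock: clock.WallClock,          // assumption: the juju clock package
    Delay: 250 * time.Millisecond,   // how long to wait between attempts
})
if err != nil {
    return err
}
defer releaser.Release()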

On Tue, Jun 21, 2016 at 11:53 AM Alexis Bruemmer <
alexis.bruem...@canonical.com> wrote:

> Tim this is really awesome and will address a few long standing and
> painful field bugs.  Thank you for taking the extra effort to do this work!
>
> Also thank you to Nate Finch and David Cheney for their contributions on
> this effort.
>
> On Tue, Jun 21, 2016 at 2:24 AM, Tim Penhey <tim.pen...@canonical.com>
> wrote:
>
>> Hi folks,
>>
>> We have finally managed to exorcise the fslock from the codebase. Both
>> 1.25 and master no longer refer to it at all. We need to remove it from the
>> juju/utils package to make sure that people don't accidentally try and use
>> it again.
>>
>> There is a new replacement, the juju/mutex package. To acquire a mutex,
>> you create a mutex.Spec structure. You must at least provide a name, delay
>> and clock. The delay is how long the code waits between tries to acquire
>> the mutex. Named mutexs are shared between processes on the same machine.
>> If a process holds the mutex and the process dies, the mutex is
>> automatically released.  A spec can also have a timeout value, and/or an
>> abort channel.
>>
>> On linux this is implemented with abstract domain sockets, on windows it
>> uses a named semaphore, and on other platforms a flock is used on a named
>> temp file.
>>
>> https://godoc.org/github.com/juju/mutex
>>
>> Cheers,
>> Tim
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
>
>
>
> --
> Alexis Bruemmer
> Juju Core Manager, Canonical Ltd.
> (503) 686-5018
> alexis.bruem...@canonical.com
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Tests do not pass with go1.7

2016-06-08 Thread Nate Finch
Just FYI, in case anyone was like me and decided they wanted to jump on the
faster compile times in 1.7... some of our tests do not pass in go 1.7:

https://bugs.launchpad.net/juju-core/+bug/1589350
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


version parsing

2016-06-03 Thread Nate Finch
a bug was discovered in our version parsing:
https://bugs.launchpad.net/juju-core/+bug/1588911

basically, 2.0.alpha10 was being parsed as the tag being alpha1 and the
patch being 0, instead of alpha and 10.  While correcting this problem, I
noticed that we were matching [a-zA-Z0-9_]+ for the tag... which I'm pretty
sure was a mistake.  After talking to Curtis and Martin, I changed it so
the tag is only [a-z]+.  I'm pretty sure this will not affect anyone else,
but posting it to the list just in case.
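
To make the greedy-match problem concrete (these are not the real juju/version
regexps, just a minimal reproduction of the bug and the fix):

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // tag class includes digits: backtracking splits "alpha10" the wrong way
    bad := regexp.MustCompile(`^([a-zA-Z0-9_]+)(\d+)$`)
    // tag restricted to [a-z]+: all the digits land in the patch group
    good := regexp.MustCompile(`^([a-z]+)(\d+)$`)
    fmt.Println(bad.FindStringSubmatch("alpha10")[1:])  // [alpha1 0]
    fmt.Println(good.FindStringSubmatch("alpha10")[1:]) // [alpha 10]
}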

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: list, show and get (was Re: Fixing "juju help commands")

2016-05-24 Thread Nate Finch
I can understand list-clouds vs show-cloud, for plural vs singular (even
though I don't think we actually have commands like that, with a plural and
a singular).  But show-config vs get-config seems really pedantic. You're
still showing one thing. One is a foo, one is a foo-config. They have very
easily differentiated names, unlike plural vs singular, which actually
might be confusing if they only differed by an s.

On Tue, May 24, 2016, 6:45 PM Tim Penhey <tim.pen...@canonical.com> wrote:

> On 25/05/16 01:49, Nate Finch wrote:
> > While we're on the subject... we have list-foos, show-foos, and
> > get-foo... and I honestly cannot remember when to use which. Can we just
> > standardize on one, so no one has to remember if it's list-models or
> > show-models or show-model-config or get-model-config?  They all mean the
> > same thing, let's just pick one.
>
> 'list-foos' which is to be an alias for 'foos' returns a list of foos.
> 'show-foo' shows information about a single 'foo'
> 'get-foo' gets configuration about a particular 'foo'
>
> This is consistent across all our foos now.
>
> Tim
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Fixing "juju help commands"

2016-05-24 Thread Nate Finch
If there are things we really don't want users to run, they should probably
just not exist in production builds. If they're just less-used commands,
then I don't think it's a big deal to have them in juju help commands.

If we want to remove them from production builds, they could either be
compiled out by default, or just made into plugins.  But I don't think
hiding is really a good idea.

I am +1 on removing aliases from the default output of juju help commands,
though.  It's really long and spammy right now, and half of the length is
just due to aliases.

I do like the idea of every juju list-foos having an alias of juju foos...
but then why even have list-foos at all?

While we're on the subject... we have list-foos, show-foos, and get-foo...
and I honestly cannot remember when to use which. Can we just standardize
on one, so no one has to remember if it's list-models or show-models or
show-model-config or get-model-config?  They all mean the same thing, let's
just pick one.
On Tue, May 24, 2016, 8:12 AM Marco Ceppi  wrote:

> On Tue, May 24, 2016, 12:19 AM Tim Penhey 
> wrote:
>
>> Hi folks,
>>
>> More from the "let's fix the output" school of thought.
>>
>> One thing that has bugged me about 'juju help commands' was the repeated
>> mentions of "alias for ".
>>
>> I propose that we don't show aliases by default, and allow the user to
>> ask for them.
>>
>> Also, the supercommand describe commands method was written before I
>> knew about the tabular output, so some code could be cleaned up there.
>>
>> Proposal:
>>
>> juju help commands
>>- shows the commands that are neither aliases, nor hidden
>> juju help commands --alias
>>- shows either just the aliases, or everything including aliases
>> juju help commands --all
>>- shows basic commands, aliases and hidden commands.
>>
>> I'd like to add the ability to say a command is hidden so it doesn't
>> show up in the commands list. The purpose for these could be debugging
>> assisting type functions, like "dump-model" or "debug-controller" (two
>> commands that don't yet exist).
>>
>
> I don't see a need for this, I and other users often use juju help
> commands to find or remind me of these commands. If you expect users to run
> these debug/dump commands ever, show them.
>
> Even if you don't expect people to run them, hiding them seems awkward.
> Better to simply educate with good help output about what the command does
> and when/why use the command.
>
>
>> Thoughts?
>>
>> Tim
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: natefinch/npipe/v2 does not support the POSIX read/close semantics juju needs

2016-05-23 Thread Nate Finch
Ug.. I'll look at this tomorrow (12:30 is a bad time for me to be diving
into windows API BS)

On Mon, May 23, 2016 at 11:40 PM David Cheney 
wrote:

> TL;DR read issue https://bugs.launchpad.net/bugs/1581337
>
> Thumper asked me to forward this to juju-dev for discussion
>
>
> http://reports.vapour.ws/releases/3964/job/run-unit-tests-win2012-amd64/attempt/2384
>
> Fails because, npipe.Accept is blocking on accept
>
> goroutine 41 [syscall, locked to thread]:
> syscall.Syscall(0x7f8139f102c, 0x2, 0x1a8, 0x, 0x0,
> 0xc082183de8, 0x3e5, 0x1c0050)
>  C:/Go/src/runtime/syscall_windows.go:163 +0x5c
> syscall.WaitForSingleObject(0x1a8, 0xc0, 0xc082183e10, 0x0, 0x0)
>  C:/Go/src/syscall/zsyscall_windows.go:693 +0x6f
> gopkg.in/natefinch/npipe%2ev2.waitForCompletion(0x15c, 0xc0822a42a0,
> 0xc08202b880, 0x0, 0x0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> gopkg.in/natefinch/npipe.v2/npipe_windows.go:181
> +0x45 
> gopkg.in/natefinch/npipe%2ev2.(*PipeListener).AcceptPipe(0xc08202b880,
> 0x0, 0x0, 0x0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> gopkg.in/natefinch/npipe.v2/npipe_windows.go:307
> +0x397 
> gopkg.in/natefinch/npipe%2ev2.(*PipeListener).Accept(0xc08202b880,
> 0x0, 0x0, 0x0, 0x0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> gopkg.in/natefinch/npipe.v2/npipe_windows.go:261
> +0x44 
>
> github.com/juju/juju/worker/metrics/spool.(*socketListener).loop(0xc08227e780
> ,
> 0x0, 0x0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/juju/worker/metrics/spool/listener.go:71
> +0xaa
> 
>
> github.com/juju/juju/worker/metrics/spool.NewSocketListener.func1(0xc08227e780)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/juju/worker/metrics/spool/listener.go:43
> +0x65
> 
> created by github.com/juju/juju/worker/metrics/spool.NewSocketListener
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/juju/worker/metrics/spool/listener.go:44
> +0x140
> 
>
> and
>
> socketListener is waiting on the tomb to hit done.
>
> goroutine 43 [chan receive]:
> launchpad.net/tomb.(*Tomb).Wait(0xc08227e790, 0x0, 0x0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> launchpad.net/tomb/tomb.go:108
> +0x5f 
>
> github.com/juju/juju/worker/metrics/spool.(*socketListener).Stop(0xc08227e780)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/juju/worker/metrics/spool/listener.go:56
> +0x18c
> 
> github.com/juju/juju/worker/metrics/sender.(*sender).stop(0xc082269770)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/juju/worker/metrics/sender/sender.go:104
> +0x45
> 
>
> github.com/juju/juju/worker/metrics/sender.(*sender).(github.com/juju/juju/worker/metrics/sender.stop)-fm()
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/juju/worker/metrics/sender/manifold.go:77
> +0x27
> 
>
> github.com/juju/juju/worker/metrics/spool.(*periodicWorker).Kill(0xc0822a40e0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/juju/worker/metrics/spool/listener.go:103
> +0x28
> 
>
> github.com/juju/juju/worker/metrics/sender_test.(*ManifoldSuite).setupWorkerTest.func1(0xc0822791d0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/juju/worker/metrics/sender/manifold_test.go:101
> +0x3d
> 
> github.com/juju/testing.(*CleanupSuite).callStack(0xc0821a9a28,
> 0xc0822791d0, 0xc08226bdd0, 0x2, 0x2)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/testing/cleanup.go:52
> +0x51 
> github.com/juju/testing.(*CleanupSuite).TearDownTest(0xc0821a9a28,
> 0xc0822791d0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/testing/cleanup.go:46
> +0x4d 
> github.com/juju/testing.(*IsolationSuite).TearDownTest(0xc0821a9a20,
> 0xc0822791d0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/testing/isolation.go:38
> +0x44 
> 

Re: Specifying bootstrap machine agent details using ip address

2016-05-23 Thread Nate Finch
Hostname or IP address will both work.
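
For example, an environments.yaml stanza along these lines should work (names
and address here are placeholders):

environments:
  my-manual:
    type: manual
    bootstrap-host: 10.0.0.5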

On Mon, May 23, 2016 at 1:11 AM phani shankar 
wrote:

> Hi,
>
>  I am new to juju. I was following
> https://jujucharms.com/docs/stable/config-manual document to do manual
> provisioning. While configuring environment.yaml, instead of specifying
> hostname for bootstrap-host, is there a mechanism to specify IP address of
> the machine ?. Please let me know your thoughts.
>
>
> Regards,
> PHANI SHANKAR
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Three CI builds in the last 48 hours have failed because launchpad had a lie down

2016-05-18 Thread Nate Finch
 I agree.  We should snapshot a known-good starting point and start from
there.  It's a huge waste of time to do apt update / upgrade / install git,
bzr, mercurial, etc every single time we run CI.  That's like 1/3rd the
time it takes to run the unit test CI job.

On Wed, May 18, 2016 at 10:35 PM David Cheney 
wrote:

> This is a huge waste of wall time and bandwidth, and increases the
> chance of failure significantly. This is something I would like to see
> changed.
>
> On Thu, May 19, 2016 at 12:06 PM, John Meinel 
> wrote:
> > CI does a build from scratch in a pristine VM. There is some discussion
> > about changing that in a variety of ways (in a container in a long
> running
> > VM, etc), but it doesn't have a local copy to work from so it has to
> pull it
> > from upstream each time.
> >
> > John
> > =:->
> >
> >
> > On Wed, May 18, 2016 at 8:05 PM, David Cheney <
> david.che...@canonical.com>
> > wrote:
> >>
> >> We already have godeps which can take a set of vcs repos and flick
> >> them to the right revisions.
> >>
> >> Why does CI check out every single dependency from upstream every
> >> single time we do a build ? That introduces wc -l dependencies.tsv
> >> points of failure to every single CI run -- not to mention the several
> >> minutes it spends laboriously checking out the code.
> >>
> >> Thanks
> >>
> >> Dave
> >>
> >> --
> >> Juju-dev mailing list
> >> Juju-dev@lists.ubuntu.com
> >> Modify settings or unsubscribe at:
> >> https://lists.ubuntu.com/mailman/listinfo/juju-dev
> >
> >
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Static Analysis tests

2016-05-02 Thread Nate Finch
I think it's a good point that we may only want to run some tests once, not
once per OS/Arch.  However, I believe these particular tests are still
limited in scope by the OS/arch of the host machine.  The go/build package
respects the build tags in files, so in theory, one run of this on Ubuntu
could miss code that is +build windows.  (we could give go/build different
os/arch constraints, but then we're back to running this N times for N
os/arch variants)
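
For the curious, the shape of the check I described (quoted below) is roughly
this (a sketch, not the actual test: it keys off the literal package name
"tls", so aliased imports would slip through, and the real test also knows
about the approved helper function):

package devrules

import (
    "go/ast"
    "go/parser"
    "go/token"
    "os"
    "path/filepath"
    "strings"
)

// findTLSConfigLiterals walks a source tree and reports the position of
// every composite literal written as tls.Config.
func findTLSConfigLiterals(root string) ([]string, error) {
    var hits []string
    fset := token.NewFileSet()
    err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
        if err != nil || info.IsDir() || !strings.HasSuffix(path, ".go") {
            return err
        }
        f, err := parser.ParseFile(fset, path, nil, 0)
        if err != nil {
            return err
        }
        ast.Inspect(f, func(n ast.Node) bool {
            lit, ok := n.(*ast.CompositeLit)
            if !ok {
                return true
            }
            if sel, ok := lit.Type.(*ast.SelectorExpr); ok {
                if id, ok := sel.X.(*ast.Ident); ok && id.Name == "tls" && sel.Sel.Name == "Config" {
                    hits = append(hits, fset.Position(lit.Pos()).String())
                }
            }
            return true
        })
        return nil
    })
    return hits, err
}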

I'm certainly happy to have these tests run in verify.sh, but it sounds
like they may need to be run per-OS testrun as well.

On Sun, May 1, 2016 at 10:27 PM Andrew Wilkins <andrew.wilk...@canonical.com>
wrote:

> On Thu, Apr 28, 2016 at 11:48 AM Nate Finch <nate.fi...@canonical.com>
> wrote:
>
>> Maybe we're not as far apart as I thought at first.
>>
>
>
>> My thought was that they'd live under github.com/juju/juju/devrules (or
>> some other name) and therefore only get run during a full test run or if
>> you run them there specifically.  What is a full test run if not a test of
>> all our code?  These tests just happen to test all the code at once, rather
>> than piece by piece.  Combining with the other thread, if we also marked
>> them as skipped under -short, you could easily still run go test ./...
>> -short from the root of the juju repo and not incur the extra 16.5 seconds
>> (gocheck has a nice feature where if you call c.Skip() in the SetUpSuite,
>> it skips all the tests in the suite, which is particularly appropriate to
>> these tests, since it's the SetUpSuite that takes all the time).
>>
>
> I'm not opposed to using the Go testing framework in this instance,
> because it makes most sense to write the analysis code in Go. That may not
> always be the case, though, and I don't want to have a rule of "everything
> as Go tests" that means we end up shoe-horning things. This is just
> academic until we need something that doesn't live in the Go ecosystem,
> though.
>
> Most importantly, I don't want to lose the ability to distinguish the
> types of tests. As an example: where we run static analysis should never
> matter, so we can cut a merge job short by performing all of the static
> analysis checks up front. That doesn't matter much if we only gate merges
> on running the tests on one Ubuntu series/arch; but what if we want to
> start gating on Windows, CentOS, or additional architectures? It would not
> make sense to run them all in parallel if they're all going to fail on the
> static analysis tests. And then if we've run them up front, it would be
> ideal to not have to run them on the individual test machines.
>
> So I think it would be OK to have a separate "devrules" package, or
> whatever we want to call it. I would still like these tests to be run by
> verify.sh, so we have one place to go to check that the source code is
> healthy, without also running the unit tests or feature tests. If we have a
> separate package like this, test tags are not really necessary in the short
> term -- the distinction is made by separating the tests into their own
> package. We could still mark them as short/long, but that's orthogonal to
> separation-by-purpose.
>
> Cheers,
> Andrew
>
> Mostly, I just didn't want them to live off in a separate repo or run with
>> a separate tool.
>>
>> On Wed, Apr 27, 2016 at 11:39 PM Andrew Wilkins <
>> andrew.wilk...@canonical.com> wrote:
>>
>>> On Thu, Apr 28, 2016 at 11:14 AM Nate Finch <nate.fi...@canonical.com>
>>> wrote:
>>>
>>>> From the other thread:
>>>>
>>>> I wrote a test that parses the entire codebase under
>>>> github.com/juju/juju to look for places where we're creating a new
>>>> value of crypto/tls.Config instead of using the new helper function that I
>>>> wrote that creates one with more secure defaults.  It takes 16.5 seconds to
>>>> run on my machine.  There's not really any getting around the fact that
>>>> parsing the whole tree takes a long time.
>>>>
>>>> What I *don't* want is to put these tests somewhere else which requires
>>>> more thought/setup to run.  So, no separate long-tests directory or
>>>> anything.  Keep the tests close to the code and run in the same way we run
>>>> unit tests.
>>>>
>>>
>>> The general answer to this belongs back in the other thread, but I agree
>>> that long-running *unit* tests (if there should ever be such a thing)
>>> should not be shunted off to another location. Keep the unit tests with the
>>> unit. Integration tests are a different matter, because t

Re: adding unit tests that take a long time

2016-05-02 Thread Nate Finch
On Thu, Apr 28, 2016 at 11:44 PM Anastasia Macmood <
anastasia.macm...@canonical.com> wrote:

> I understand your desire for a quick turn around.
> But I question the value that you would get from running "fast" (short)
> tests - would this set include some fast running unit tests, integration
> tests and functional tests?
>

Yes.


> Simply because they have been identified as running quickly on some
> machines?
>

Yes.  While absolute speed obviously varies based on hardware/load,
relative speed should be pretty constant across environments.


> How would you know if that "fast" run is comprehensive enough?
>

I know for sure that it would *not* be comprehensive.  That's why we
wouldn't use it to gate landing code.  It is only for developers as a quick
smoke test.

The alternative to this is running NO tests while I'm developing.  Which is
what I do now (and probably what most of us do).  I run tests in the one
package I'm currently editing, and usually even just a subset of those if
they're all full-stack tests.  It's just not practical to do a full 15+
minute test run more often than once every few hours at most, unless
absolutely necessary.


> It sounds to me like you might as well say  "let's run couple of tests
> randomly" and rely on these result until you commit...
>

If they're fast, sure.  It's better than no tests.  I run go vet
constantly.  It doesn't find even 1% of the bugs in my code, but it finds
more than zero, and those ones I can fix before I even compile.  go test
-short is not a test that will tell me if everything is definitely
correct.  It's a test that'll tell me if something is definitely *incorrect*.



> I do not know what you will end up doing with your current dilemma. I
> second Andrew's suggestion as well \o/
> Developing short/long test distinctions and special processing for the
> tests that we maintain seems like a waste of our effort.
>

It's 3 very simple lines of code per slow test (or per suite if the suite
itself is slow).  If that's too much, I could make it a single line of code
by writing a helper function.  You'd just see this something like this:

func (s *MySuite) TestFullStackSomething(c *gc.C) {
    testing.SkipIfShort(c)
    // ... rest of test
}
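
The helper itself would be trivial; a sketch of what it might look like (it
doesn't exist in juju/testing yet):

import (
    stdtesting "testing"

    gc "gopkg.in/check.v1"
)

// SkipIfShort skips the calling test -- or, when called from SetUpSuite,
// the whole suite -- when `go test -short` is in effect.
func SkipIfShort(c *gc.C) {
    if stdtesting.Short() {
        c.Skip("skipping slow test in -short mode")
    }
}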

The functionality for the -short flag and for skipping tests is already
built into the test infrastructure we use (both go test and gocheck).
There's no other effort required.  It can be done incrementally as we're
doing other things.  And if someone doesn't like it, doesn't trust it,
thinks it's a bad idea... it won't impact them at all.  They can just
ignore this thread and everything will be exactly as it was (except for one
line of code they can ignore in some tests).

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: adding unit tests that take a long time

2016-04-28 Thread Nate Finch
I don't really understand what you mean by stages of development.  At the
end of the day, they all test the exact same thing - is our code correct?
The form of the test seems like it should be unrelated to when they are
run.  Can you explain why you think running tests of different sorts at the
same time would be a bad thing?

Note that I only want to "divide up tests" temporally... not necessarily
spatially.  If we want to put all our static analysis tests in one
directory, our integration tests in another directory, unit tests in the
directory of the unit... that's totally fine.  I just want an easy way to
run all the fast tests (regardless of what or how they test) to get a
general idea of how badly I've broken juju during development.

On Thu, Apr 28, 2016 at 5:24 PM Anastasia Macmood <
anastasia.macm...@canonical.com> wrote:

> For what it's worth, to distinguish between tests based on the times they
> take to run is borderline naive. Meaningful distinction is what the test
> tests :D
> Unit test checks that unit of work under testing is doing what is
> expected;
> integration tests tests that we play well together;
> functional tests tests behaviour;
> static analysis analyses codebase to ensure conformity to agreed policies.
>
> They all have meaning at different stages of development and to bundle
> them based on the running time is to compromise these stages in long-term.
>
>
> On 29/04/16 05:03, Nate Finch wrote:
>
> Our full set of tests in github.com/juju/juu takes 10-15 minutes to run,
> depending on the speed of your computer.  It's no coincidence that our test
> pyramid looks more like this ▽ than this △.   Also, we have a lot of tests:
>
> /home/nate/src/github.com/juju/juju/$ grep -r ") Test" .
> --include="*_test.go" | wc -l
> 9464
>
> About small, medium, and large tests... I think that's a good
> designation.  Certainly 17 seconds is not a small test.  But I *think* it
> qualifies as medium (hopefully most would be faster).   Here's my
> suggestion, tying this back into what I was talking about originally:
>
> Small tests would be those that run with go test -short.  That gives you
> something you can run frequently during development to give you an idea of
> whether or not you really screwed up.  Ideally each one should be less than
> 100ms to run.  (Note that even if all our tests ran this fast, it would
> still take 15 minutes to run them, not including compilation time).
>
> Medium tests would also be run if you don't use -short.  Medium tests
> would still be something that an average developer could run locally, and
> while she may want to get up to grab a drink while they're running, she
> probably wouldn't have time to run to the coffee shop to get said drink.
> Medium tests would be anything more than 100ms, but probably less than
> 15-20 seconds (and hopefully not many of the latter).  Medium tests would
> be run before making a PR, and as a gating job.
>
> Long tests should be relegated to CI, such as bringing up instances in
> real clouds.
>
> I don't think it's terribly useful to divide tests up by type of test. Who
> cares if it's a bug found with static analysis or by executing the code?
> Either way, it's a bug.  The only thing that really matters is how long the
> tests take, so we can avoid running slow tests over and over.  I run go
> vet, go lint, and go fmt on save in my editor.  That's static analysis, but
> they run far more often than I actually run tests and that's because
> they're always super fast.
>
> I think we all agree that all of these tests (except for CI tests) should
> be used to gate landings.  The question then is, how do you run the tests,
> and how do you divide up the tests?  To me, the only useful metric for
> dividing them up is how long they take to run.  I'll run any kind of test
> you give me so long as it's fast enough.
>
> On Thu, Apr 28, 2016 at 12:39 PM Nicholas Skaggs <
> nicholas.ska...@canonical.com> wrote:
>
>> On 04/28/2016 10:12 AM, Katherine Cox-Buday wrote:
>> > On 04/27/2016 09:51 PM, Nate Finch wrote:
>> >> So, this is exactly why I didn't want to mention the nature of the
>> >> test, because we'd get sidetracked. I'll make another thread to talk
>> >> about that specific test.
>> Sorry I forced you into it, but it was important to this discussion. I
>> was wanting to understand your feelings towards a test you should be
>> running regularly as you develop, aka a unit test, that took more than a
>> trivial amount of time to actually execute.
>> >>
>> >> I do still want to talk about what we can do for unit tests that take
>> >> a long time.  I think giving developers the option to skip lo

Re: adding unit tests that take a long time

2016-04-28 Thread Nate Finch
Our full set of tests in github.com/juju/juju takes 10-15 minutes to run,
depending on the speed of your computer.  It's no coincidence that our test
pyramid looks more like this ▽ than this △.   Also, we have a lot of tests:

/home/nate/src/github.com/juju/juju/$ grep -r ") Test" .
--include="*_test.go" | wc -l
9464

About small, medium, and large tests... I think that's a good designation.
Certainly 17 seconds is not a small test.  But I *think* it qualifies as
medium (hopefully most would be faster).   Here's my suggestion, tying this
back into what I was talking about originally:

Small tests would be those that run with go test -short.  That gives you
something you can run frequently during development to give you an idea of
whether or not you really screwed up.  Ideally each one should be less than
100ms to run.  (Note that even if all our tests ran this fast, it would
still take 15 minutes to run them, not including compilation time).

Medium tests would also be run if you don't use -short.  Medium tests would
still be something that an average developer could run locally, and while
she may want to get up to grab a drink while they're running, she probably
wouldn't have time to run to the coffee shop to get said drink.  Medium
tests would be anything more than 100ms, but probably less than 15-20
seconds (and hopefully not many of the latter).  Medium tests would be run
before making a PR, and as a gating job.

Long tests should be relegated to CI, such as bringing up instances in real
clouds.

I don't think it's terribly useful to divide tests up by type of test. Who
cares if it's a bug found with static analysis or by executing the code?
Either way, it's a bug.  The only thing that really matters is how long the
tests take, so we can avoid running slow tests over and over.  I run go
vet, go lint, and go fmt on save in my editor.  That's static analysis, but
they run far more often than I actually run tests and that's because
they're always super fast.

I think we all agree that all of these tests (except for CI tests) should
be used to gate landings.  The question then is, how do you run the tests,
and how do you divide up the tests?  To me, the only useful metric for
dividing them up is how long they take to run.  I'll run any kind of test
you give me so long as it's fast enough.

On Thu, Apr 28, 2016 at 12:39 PM Nicholas Skaggs <
nicholas.ska...@canonical.com> wrote:

> On 04/28/2016 10:12 AM, Katherine Cox-Buday wrote:
> > On 04/27/2016 09:51 PM, Nate Finch wrote:
> >> So, this is exactly why I didn't want to mention the nature of the
> >> test, because we'd get sidetracked. I'll make another thread to talk
> >> about that specific test.
> Sorry I forced you into it, but it was important to this discussion. I
> was wanting to understand your feelings towards a test you should be
> running regularly as you develop, aka a unit test, that took more than a
> trivial amount of time to actually execute.
> >>
> >> I do still want to talk about what we can do for unit tests that take
> >> a long time.  I think giving developers the option to skip long tests
> >> is handy - getting a reasonable amount of coverage when you're in the
> >> middle of the develop/test/fix cycle.  It would be really useful for
> >> when you're making changes that affect a lot of packages and so you
> >> end up having to run full tests over and over.  Of course, running
> >> just the short tests would not give you 100% confidence, but once
> >> you've fixed everything so the short tests pass, *then* you could do
> >> a long run for thorough coverage.
> >
> > I believe Cheryl has something like this in the works and will be
> > sending a note out on it soon.
> >
> Yes. It is imperative that developers can quickly (and I mean quickly or
> it won't happen!) run unit tests. We absolutely want test runs to be a
> part of the code, build, run iteration loop.
> >> This is a very low friction way to increase developer productivity,
> >> and something we can implement incrementally.  It can also lead to
> >> better test coverage over all.  If you write 10 unit tests that
> >> complete in milliseconds, but were thinking about writing a couple
> >> longer-running unit tests that make sure things are working
> >> end-to-end, you don't have the disincentive of "well, this will make
> >> everyone's full test runs 30 seconds longer", since you can always
> >> skip them with -short.
> >>
> >> The only real negative I see is that it makes it less painful to
> >> write long tests for no reason, which would still affect landing
> >> times but hopefully everyone is still aware of the impact of
> >> long-running test

Re: Static Analysis tests

2016-04-27 Thread Nate Finch
Maybe we're not as far apart as I thought at first.

My thought was that they'd live under github.com/juju/juju/devrules (or
some other name) and therefore only get run during a full test run or if
you run them there specifically.  What is a full test run if not a test of
all our code?  These tests just happen to test all the code at once, rather
than piece by piece.  Combining with the other thread, if we also marked
them as skipped under -short, you could easily still run go test ./...
-short from the root of the juju repo and not incur the extra 16.5 seconds
(gocheck has a nice feature where if you call c.Skip() in the SetUpSuite,
it skips all the tests in the suite, which is particularly appropriate to
these tests, since it's the SetUpSuite that takes all the time).
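
As a rough sketch of that pattern (the suite and test names here are purely illustrative, not the real code), skipping from SetUpSuite means the expensive parse simply never happens under -short:

package policy_test

import (
    stdtesting "testing"

    gc "gopkg.in/check.v1"
)

func Test(t *stdtesting.T) { gc.TestingT(t) }

type tlsPolicySuite struct{}

var _ = gc.Suite(&tlsPolicySuite{})

func (s *tlsPolicySuite) SetUpSuite(c *gc.C) {
    if stdtesting.Short() {
        // Skipping in SetUpSuite makes gocheck skip every test in the
        // suite, so the slow whole-tree parse below is never paid for.
        c.Skip("skipping whole-codebase analysis in -short mode")
    }
    // ... parse the codebase once here (the expensive ~16.5s step) ...
}

func (s *tlsPolicySuite) TestNoRawTLSConfig(c *gc.C) {
    // ... walk the parsed code and flag direct tls.Config literals ...
}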

Mostly, I just didn't want them to live off in a separate repo or run with
a separate tool.

On Wed, Apr 27, 2016 at 11:39 PM Andrew Wilkins <
andrew.wilk...@canonical.com> wrote:

> On Thu, Apr 28, 2016 at 11:14 AM Nate Finch <nate.fi...@canonical.com>
> wrote:
>
>> From the other thread:
>>
>> I wrote a test that parses the entire codebase under github.com/juju/juju to
>> look for places where we're creating a new value of crypto/tls.Config
>> instead of using the new helper function that I wrote that creates one with
>> more secure defaults.  It takes 16.5 seconds to run on my machine.  There's
>> not really any getting around the fact that parsing the whole tree takes a
>> long time.
>>
>> What I *don't* want is to put these tests somewhere else which requires
>> more thought/setup to run.  So, no separate long-tests directory or
>> anything.  Keep the tests close to the code and run in the same way we run
>> unit tests.
>>
>
> The general answer to this belongs back in the other thread, but I agree
> that long-running *unit* tests (if there should ever be such a thing)
> should not be shunted off to another location. Keep the unit tests with the
> unit. Integration tests are a different matter, because they cross multiple
> units. Likewise, tests for project policies.
>
> Andrew's response:
>>
>>
>> *The nature of the test is important here: it's not a test of Juju
>> functionality, but a test to ensure that we don't accidentally use a TLS
>> configuration that doesn't match our project-wide constraints. It's static
>> analysis, using the test framework; and FWIW, the sort of thing that Lingo
>> would be a good fit for.*
>>
>> *I'd suggest that we do organise things like this separately, and run
>> them as part of the "scripts/verify.sh" script. This is the sort of test
>> that you shouldn't need to run often, but I'd like us to gate merges on.*
>>
>> So, I don't really think the method of testing should determine where a
>> test lives or how it is run.  I could test the exact same things with a
>> more common unit test - check the tls config we use when dialing the API is
>> using tls 1.2, that it only uses these specific ciphersuites, etc.  In
>> fact, we have some unit tests that do just that, to verify that SSL is
>> disabled.  However, then we'd need to remember to write those same tests
>> for every place we make a tls.Config.
>>
>
> The method of testing is not particularly relevant; it's the *purpose*
> that matters. You could probably use static analysis for a lot of our
> units; it would be inappropriate, but they'd still be testing units, and so
> should live with them.
>
> The point I was trying to make is that this is not a test of one unit, but
> a policy that covers the entire codebase. You say that you don't want to it
> put them "somewhere else", but it's not at all clear to me where you think
> we *should* have them.
>
>> The thing I like about having this as part of the unit tests is that it's
>> zero friction.  They already gate landings.  We can write them and run them
>> just like we write and run go tests 1000 times a day.  They're not
>> special.  There's no other commands I need to remember to run, scripts I
>> need to remember to set up.  It's go test, end of story.
>>
>
> Using the Go testing framework is fine. I only want to make sure we're not
> slowing down the edit/test cycle by frequently testing things that are
> infrequently going to change. It's the same deal as with integration tests;
> there's a trade-off between the time spent and confidence level.
>
>> The comment about Lingo is valid, though I think we have room for both in
>> our processes.  Lingo, in my mind, is more appropriate at review-time,
>> which allows us to write lingo rules that may not have 100% confidence.
>> They can be strong suggestions r

Static Analysis tests

2016-04-27 Thread Nate Finch
From the other thread:

I wrote a test that parses the entire codebase under github.com/juju/juju to
look for places where we're creating a new value of crypto/tls.Config
instead of using the new helper function that I wrote that creates one with
more secure defaults.  It takes 16.5 seconds to run on my machine.  There's
not really any getting around the fact that parsing the whole tree takes a
long time.

What I *don't* want is to put these tests somewhere else which requires
more thought/setup to run.  So, no separate long-tests directory or
anything.  Keep the tests close to the code and run in the same way we run
unit tests.

Andrew's response:


*The nature of the test is important here: it's not a test of Juju
functionality, but a test to ensure that we don't accidentally use a TLS
configuration that doesn't match our project-wide constraints. It's static
analysis, using the test framework; and FWIW, the sort of thing that Lingo
would be a good fit for.*

*I'd suggest that we do organise things like this separately, and run them
as part of the "scripts/verify.sh" script. This is the sort of test that
you shouldn't need to run often, but I'd like us to gate merges on.*

So, I don't really think the method of testing should determine where a
test lives or how it is run.  I could test the exact same things with a
more common unit test - check the tls config we use when dialing the API is
using tls 1.2, that it only uses these specific ciphersuites, etc.  In
fact, we have some unit tests that do just that, to verify that SSL is
disabled.  However, then we'd need to remember to write those same tests
for every place we make a tls.Config.
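
For concreteness, a per-call-site unit test of that sort might look roughly like this; apiTLSConfig is a made-up stand-in for however the code under test exposes its config, not a real function of ours:

package api_test

import (
    "crypto/tls"
    stdtesting "testing"

    gc "gopkg.in/check.v1"
)

func Test(t *stdtesting.T) { gc.TestingT(t) }

type apiTLSSuite struct{}

var _ = gc.Suite(&apiTLSSuite{})

// apiTLSConfig is illustrative only: it stands in for wherever the real
// code builds the tls.Config used to dial the API.
func apiTLSConfig() *tls.Config {
    return &tls.Config{MinVersion: tls.VersionTLS12}
}

func (s *apiTLSSuite) TestAPIDialTLSConfig(c *gc.C) {
    cfg := apiTLSConfig()
    // TLS 1.2 or newer only, i.e. SSLv3 and early TLS are disabled.
    c.Assert(cfg.MinVersion, gc.Equals, uint16(tls.VersionTLS12))
}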

The thing I like about having this as part of the unit tests is that it's
zero friction.  They already gate landings.  We can write them and run them
just like we write and run go tests 1000 times a day.  They're not
special.  There's no other commands I need to remember to run, scripts I
need to remember to set up.  It's go test, end of story.

The comment about Lingo is valid, though I think we have room for both in
our processes.  Lingo, in my mind, is more appropriate at review-time,
which allows us to write lingo rules that may not have 100% confidence.
They can be strong suggestions rather than gating rules.  The type of test
I wrote should be a gating rule - there are no false positives.

To give a little more context, I wrote the test as a suite, where you can
add tests to hook into the code parsing, so we can trivially add more tests
that use the full parsed code, while only incurring the 16.5 second parsing
hit once for the entire suite.  That doesn't really affect this discussion
at all, but I figured people might appreciate that this could be extended
for more than my one specific test.  I certainly wouldn't advocate people
writing new 17-second tests all over the place.
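
For anyone curious about the mechanism, here's a simplified sketch of the kind of check those hooks run. It's not the real test: it parses a single directory and assumes crypto/tls is imported as "tls", but it shows the shape of the thing:

package policy_test

import (
    "go/ast"
    "go/parser"
    "go/token"

    gc "gopkg.in/check.v1"
)

// findRawTLSConfigs reports the positions of tls.Config composite literals
// in one directory. The real suite parses the whole tree once and shares
// the result between checks; this is just the walking logic.
func findRawTLSConfigs(c *gc.C, dir string) []token.Position {
    fset := token.NewFileSet()
    pkgs, err := parser.ParseDir(fset, dir, nil, 0)
    c.Assert(err, gc.IsNil)

    var hits []token.Position
    for _, pkg := range pkgs {
        for _, file := range pkg.Files {
            ast.Inspect(file, func(n ast.Node) bool {
                lit, ok := n.(*ast.CompositeLit)
                if !ok {
                    return true
                }
                if sel, ok := lit.Type.(*ast.SelectorExpr); ok {
                    if ident, ok := sel.X.(*ast.Ident); ok &&
                        ident.Name == "tls" && sel.Sel.Name == "Config" {
                        hits = append(hits, fset.Position(lit.Pos()))
                    }
                }
                return true
            })
        }
    }
    return hits
}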

-Nate


Re: adding unit tests that take a long time

2016-04-27 Thread Nate Finch
I was actually trying to avoid talking about the test itself to keep things
shorter ;)

The test is parsing the entire codebase under github.com/juju/juju to look
for places where we're creating a new value of crypto/tls.Config instead of
using the new helper function that I wrote that creates one with more
secure defaults.  There's not really any getting around the fact that
parsing the whole tree takes a long time.

On Wed, Apr 27, 2016 at 1:25 PM Nicholas Skaggs <
nicholas.ska...@canonical.com> wrote:

> This is a timely discussion Nate. I'll avoid saying too much off the
> top, but I do have a question.
>
> On 04/27/2016 12:24 PM, Nate Finch wrote:
> > I just wrote a test that takes ~16.5 seconds on my machine.
> Why does the test take so long? Are you intending it to be a short /
> small scoped test?
>
> Nicholas
>


adding unit tests that take a long time

2016-04-27 Thread Nate Finch
I just wrote a test that takes ~16.5 seconds on my machine. Since we are
trying to *reduce* the time taken by tests, I think it's prudent to give
developers a way to avoid running longer tests (obviously things like the
landing bot etc should always run all the tests).

What I *don't* want is to put long tests somewhere else which requires more
thought/setup to run.  So, no separate long-tests directory or anything.
Keep the tests close to the code and run in the same way we run unit
tests.  This also lowers the barrier to changing older tests that also take
a long time.

I suggest we allow developers to opt out of longer running tests.  This is
already supported by passing the -short flag to go test.

Then, long tests could use

if testing.Short() {
    c.Skip("skipping long running test.")
}

go test will run this test, but go test -short will skip it.  Sweet, easy,
devs can run go test -short over and over, and when they need to run for
landing, they can drop the -short.  CI and the landing bot don't need to
change at all.

We could also do some sort of opt-in scheme with build tags, but then CI
and the landing bot would have to change to opt in, and you'd need separate
files for long tests, which is a pain, etc.
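
For comparison, the build-tag approach would mean every long test lives in its own tagged file, something like the sketch below (the tag name is illustrative), and nothing runs those files unless the tag is passed explicitly:

// +build slowtests

package foo_test

import (
    stdtesting "testing"
)

// This test is only compiled and run when the tag is passed:
//     go test -tags slowtests ./...
func TestFullStackSomething(t *stdtesting.T) {
    // ... long-running test body ...
}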

Thoughts?
-Nate


two new helpers in github.com/juju/testing

2016-04-22 Thread Nate Finch
I added a couple new helpers to the testing repo that I think will help us
avoid some of the contortions we have had to do in the past when testing
executables and output on stderr/stdout.

The first is a fairly simple function, CaptureOutput, which takes in a
func() and returns anything written to stderr or stdout while running that
function.  It temporarily replaces os.Stderr and os.Stdout with files on
disk while running the function (necessary because those two values are
*os.File values, so, rather hard to mock out with anything in-memory).
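
Roughly, the trick looks like this; it's a simplified sketch of the technique, not the actual implementation in the testing repo:

package capture

import (
    "io/ioutil"
    "os"
)

// captureOutput points os.Stdout and os.Stderr at temp files while f runs,
// then reads the files back.
func captureOutput(f func()) (stdout, stderr []byte, err error) {
    outFile, err := ioutil.TempFile("", "stdout")
    if err != nil {
        return nil, nil, err
    }
    defer os.Remove(outFile.Name())
    defer outFile.Close()

    errFile, err := ioutil.TempFile("", "stderr")
    if err != nil {
        return nil, nil, err
    }
    defer os.Remove(errFile.Name())
    defer errFile.Close()

    origOut, origErr := os.Stdout, os.Stderr
    os.Stdout, os.Stderr = outFile, errFile
    // Restore the real streams no matter what f does.
    defer func() { os.Stdout, os.Stderr = origOut, origErr }()

    f()

    if stdout, err = ioutil.ReadFile(outFile.Name()); err != nil {
        return nil, nil, err
    }
    stderr, err = ioutil.ReadFile(errFile.Name())
    return stdout, stderr, err
}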

The second is slightly more complicated - PatchExecHelper


We had PatchExecutable, which creates gnarly bash/batch scripts on disk to
replace expected executables, but it has a few weaknesses - it doesn't
allow you to test the arguments passed to it *and* test stderr/stdout at
the same time, it assumed the thing you were patching out was on the PATH,
and I'm pretty sure it wouldn't work to patch out .exe files on Windows.

PatchExecHelper fixes all of those problems by approaching the problem in a
different way: it creates a function that is intended to mock out a call to
exec.Command.  It's very easy to use, just embed PatchExecHelper in your
test suite, and then use its GetExecCommand() method to get a function that
can be used to mock out exec.Command.

The exact method of how it works is detailed in the godoc, and it's an
example implementation of something I blogged about a while back:
https://npf.io/2015/06/testing-exec-command/

Here's an example of how it can be used:

// production code (imports "os/exec")
var execCommand = exec.Command

func Run(command string, args ...string) ([]byte, error) {
    return execCommand(command, args...).CombinedOutput()
}

// test code (imports gc "gopkg.in/check.v1" and "github.com/juju/testing")
type RunSuite struct {
    testing.CleanupSuite

    // you *must* embed PatchExecHelper in your test suite.
    testing.PatchExecHelper
}

func (s *RunSuite) TestRun(c *gc.C) {
    outArgs := make(chan []string, 1)
    f := s.GetExecCommand(testing.PatchExecConfig{
        Stderr:   "this is stderr!",
        Stdout:   "this is stdout!",
        ExitCode: 5,
        Args:     outArgs,
    })
    // Patch the package-level execCommand with the mock for this test.
    s.PatchValue(&execCommand, f)

    output, err := Run("echo", "hello world!")
    args := <-outArgs
    // test output, args, etc.
}

Let me know if you have any questions or comments.
-Nate


Re: LXD polish for xenial

2016-04-19 Thread Nate Finch
Thanks for explaining, John, that makes sense and really helps me
understand the reasoning behind these changes.

On Tue, Apr 19, 2016 at 11:30 PM John Meinel <j...@arbash-meinel.com> wrote:

> On Tue, Apr 19, 2016 at 6:56 PM, Nate Finch <nate.fi...@canonical.com>
> wrote:
>
>> Then I guess I don't understand why it worked fine up until last week.
>>
>
> So up until last week LXD depended on the 'lxc1' package which was the old
> tools for creating containers. That did always set up an 'lxcbr0' bridge,
> which LXD then always used for its own containers.
>
> That was "ok" because "lxc1" was never installed by default so it didn't
> break people who just "install ubuntu". With LXD being
> always-available-always-installed-by-default with Ubuntu, they had to be
> more conservative in their defaults. Otherwise when you just "ec2
> start-instance ubuntu" it might not work as you expect because it has
> additional bridges installed-by-default on your cloud instance that didn't
> exist in Trusty.
>
> John
> =:->
>
>
>
>> > LXD is our code.  Juju is our code.  Surely we can code/configure the
>>> two
>>> > to work together such that it just works for the 95% case?
>>>
>>> I assure you we're aware of this. Unfortunately there's not a good
>>> answer.
>>>
>>> Tycho
>>>
>>


Re: LXD polish for xenial

2016-04-19 Thread Nate Finch
Then I guess I don't understand why it worked fine up until last week.

On Tue, Apr 19, 2016, 10:39 AM Tycho Andersen <tycho.ander...@canonical.com>
wrote:

> On Tue, Apr 19, 2016 at 01:42:08PM +0000, Nate Finch wrote:
> > On Tue, Apr 19, 2016 at 7:49 AM John Meinel <j...@arbash-meinel.com>
> wrote:
> >
> > > ...
> > >>>
> > >>
> > >
> > >>
> > >>> That's probably the cause of the other confusion in the updated docs
> -
> > >>> now we *do* want the bridge named lxdbr0 not lxcbr0. If the user
> > >>> already has lxc setup in some fashion there's a different error from
> > >>> juju telling them to run the dpkg-reconfigure command. That should
> > >>> presumably always appear whenever there's no usable bridge.
> > >>>
> > >> The flip flopping back and forth on the bridge name is going to cause
> > >> havoc even after release. Do we have clear direction now from LXD on
> how
> > >> they plan to handle the competing network bridge names? I understand
> we're
> > >> now saying a dpkg-reconfigure is required, but I'm not clear if /
> when we
> > >> plan to make this easier.
> > >>
> > >
> > > So, LXD is going to be installed-by-default on Xenial. Which means that
> > > everyone will always have it installed. However, that leads to a
> problem
> > > with having a "standard" subnet dedicated to LXD. While it works for
> lots
> > > of people, there are plenty of cases where that subnet would already
> be in
> > > use elsewhere (in say a company's internal LAN). Which means LXD chose
> to
> > > go with link-local addresses by default. AIUI that means that the
> > > containers can talk to the host (and probably the host can route them
> to
> > > the public internet?) but they aren't able to talk to other containers
> or
> > > have inbound traffic.
> > > My personal hope is that the dpkg-reconfigure steps can provide a
> 70%-case
> > > safe default set of addresses (possibly using autodetection), and give
> a
> > > nice wizard to help people, while still allowing them to override it
> in case
> > >
> >
> > I really fear for what this will do to adoption.  I feel like we're
> > throwing the common "it just works" case out the window to enable a very
> > small edge case to work slightly better.  I'd much rather have defaults
> > where juju/lxd just works with most common setups.  For the one little
> > exception - if the network on your corporate LAN conflicts - we give a
> nice
> > error message and give the user tools to reconfigure.
>
> The problem here is that there's no way to detect this case (e.g. what
> if the network in question is routed behind your default gateway
> somehow?). Things just break, and it can be difficult for people to
> figure out why.
>
> LXD's bridge setup defaults to ipv6 link local with an HTTP proxy that
> allows apt-get to work. The current dpkg-reconfigure prompts will
> suggest reasonable values if you decide you want to enable ipv4.
>
> > Making everyone
> > configure their network (not the easiest or friendliest task even for
> > people on our own team), is surely going to cause us to lose a *lot* of
> > users with a bad first impression.
> >
> > LXD is our code.  Juju is our code.  Surely we can code/configure the two
> > to work together such that it just works for the 95% case?
>
> I assure you we're aware of this. Unfortunately there's not a good
> answer.
>
> Tycho
>


Re: LXD polish for xenial

2016-04-19 Thread Nate Finch
On Tue, Apr 19, 2016 at 7:49 AM John Meinel  wrote:

> ...
>>>
>>
>
>>
>>> That's probably the cause of the other confusion in the updated docs -
>>> now we *do* want the bridge named lxdbr0 not lxcbr0. If the user
>>> already has lxc setup in some fashion there's a different error from
>>> juju telling them to run the dpkg-reconfigure command. That should
>>> presumably always appear whenever there's no usable bridge.
>>>
>> The flip flopping back and forth on the bridge name is going to cause
>> havoc even after release. Do we have clear direction now from LXD on how
>> they plan to handle the competing network bridge names? I understand we're
>> now saying a dpkg-reconfigure is required, but I'm not clear if / when we
>> plan to make this easier.
>>
>
> So, LXD is going to be installed-by-default on Xenial. Which means that
> everyone will always have it installed. However, that leads to a problem
> with having a "standard" subnet dedicated to LXD. While it works for lots
> of people, there are plenty of cases where that subnet would already be in
> use elsewhere (in say a company's internal LAN). Which means LXD chose to
> go with link-local addresses by default. AIUI that means that the
> containers can talk to the host (and probably the host can route them to
> the public internet?) but they aren't able to talk to other containers or
> have inbound traffic.
> My personal hope is that the dpkg-reconfigure steps can provide a 70%-case
> safe default set of addresses (possibly using autodetection), and give a
> nice wizard to help people, while still allowing them to override it in case
>

I really fear for what this will do to adoption.  I feel like we're
throwing the common "it just works" case out the window to enable a very
small edge case to work slightly better.  I'd much rather have defaults
where juju/lxd just works with most common setups.  For the one little
exception - if the network on your corporate LAN conflicts - we give a nice
error message and give the user tools to reconfigure.  Making everyone
configure their network (not the easiest or friendliest task even for
people on our own team), is surely going to cause us to lose a *lot* of
users with a bad first impression.

LXD is our code.  Juju is our code.  Surely we can code/configure the two
to work together such that it just works for the 95% case?


Re: Thunderbirds are Go 1.6

2016-04-10 Thread Nate Finch
Glorious. Thanks so much for all the work, Michael and Curtis and team!

On Sun, Apr 10, 2016, 1:16 PM Cheryl Jennings 
wrote:

> Thank you for all your work on this, Curtis!
>
> On Sun, Apr 10, 2016 at 11:55 AM, Curtis Hovey-Canonical <
> cur...@canonical.com> wrote:
>
>> Ladies and Gentlemen, We build with Go 1.6 everywhere.
>>
>> Take a moment to consider the implications, because that means we do
>> not use Go 1.2, 1.3, or 1.5. Juju 1.x and 2.x for Windows, OS X, and
>> Linux (precise, trusty, wily, xenial, and centos7) are built with Go 1.6.
>>
>> There was concern about how to support precise if the code grew 1.2
>> incompatibilities. Michael Hudson-Doyle's fabulous package was trivial
>> to backport to precise. We don't need to create a new process to support
>> precise. The golang-1.6 package was also easy to copy to wily. Since
>> the Go 1.5 unit tests are less reliable than the Go 1.6 unit tests, the
>> hour spent setting up wily to build with golang-1.6 will pay for itself
>> in a day.
>>
>> Some unit tests combinations are still using Go 1.2 because the tests
>> assume OS features because of the Golang version instead of checking
>> the OS:
>>
>> 1. The Makefile rules for installing deps are very stale. Precise unit
>> tests will fail when forced to use Go 1.6 because they cannot
>> start mongo:
>> error command line: unknown option sslOnNormalPorts
>> But we see that actual deployments of precise built with Go 1.6 work.
>>
>> 2. Centos and Windows unit test are still Go 1.2 because many lxd
>> related tests fail when run with Go 1.6.
>>
>> The git-merge-juju job is using the xenial daily image to gate merges.
>> The released image is too stale to do an unattended dist-upgrade.
>>
>> The unit test runs for xenial-amd64, xenial-arm64, xenial-s390x,
>> xenial-ppc64el, wily-amd64, trusty-amd64, trusty-i386, and
>> trusty-ppc64el will use the golang (1.6) or golang-1.6 (1.6) packages.
>> The go1.2 trusty unit tests amd64, arm64, i386, and ppc64el are gone.
>>
>> Ubuntu source packages prefer the golang-1.6 package, and fallback to
>> golang (>= 1.2). We needed a new builder because Go 1.6 needs more
>> memory. We saw errors like this with 2G of memory
>> golang-1.6/pkg/tool/linux_amd64/link: running gcc failed:
>> fork/exec /usr/bin/gcc: cannot allocate memory
>> Centos is created using the same Go package as trusty. Windows is
>> cross compiled as 64 bit for agent and 32 bit for clients. OS X is
>> natively compiled using Go 1.6 on CI's OS X host.
>>
>> We also got word it was safe to upgrade our arm64 host to xenial. This
>> is done. The ppc64el hosts were also upgraded this week. We are
>> expanding tests for arm64 and ppc64el.
>>
>> --
>> Curtis Hovey
>> Canonical Cloud Development and Operations
>> http://launchpad.net/~sinzui
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>


Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
I obviously wasn't clear. I was suggesting a --yes-i-really-mean-it flag on
kill-controller, but if you passed just -y we show the prompt anyway
(instead of erroring out on an unknown flag).

My point with the big prompt was in response to Rick saying it should never
be needed and should only be used in extreme circumstances... If that's how
we feel, we should make sure the user knows it.

Many people have said on this thread that the difference between
kill-controller and destroy-controller is not immediately obvious, and
we're the devs on the project. I'm just trying to make sure we make it
clear for users.

On Wed, Apr 6, 2016, 4:54 PM Tim Penhey <tim.pen...@canonical.com> wrote:

> On 06/04/16 23:13, Nick Veitch wrote:
> > Sure, I am just concerned about a proliferation of commands to do the
> > same (ultimately) task
> >
> > destroy-controller
>
> The most correct way to take down a controller.
>
> > kill-controller
>
> The OMG it is broken, please do as much as you can and I know I'm going
> to have to manually check any resources left around that it couldn't
> clean up.
>
> > forget/purge-controller
>
> Remove local references to the controller.
>
>
> Not really the same things at all.
>
> Tim
>
>
> >
> >
> >
> > On 6 April 2016 at 11:59, Horacio Duran <horacio.du...@canonical.com
> > <mailto:horacio.du...@canonical.com>> wrote:
> >
> > The issue I see with that approach is that in that case
> > kill-controller might be doing less than you expect instead of more,
> > suppose the controller is having transient issues and kill
> > controller cannot reach the cloud for deletion, this would forget
> > the controller and leave it in the cloud, forget-controller instead
> > tells us very clearly what is going to happen, the change is going
> > to be local and not affect the controller.
> > My 2c
> >
> >
> > On Wednesday, 6 April 2016, Nick Veitch <nick.vei...@canonical.com
> > <mailto:nick.vei...@canonical.com>> wrote:
> >
> > just my tuppence
> >
> > instead of having another command, can't we just add this as an
> > option to kill-controller?
> >
> > juju kill-controller --cleanup 
> >
> >
> >
> > On 6 April 2016 at 11:05, Horacio Duran
> > <horacio.du...@canonical.com> wrote:
> >
> >
> > I might be biased by years of apt-get but purge makes me
> > think that you are going to do what kill is supposed to do,
> > forget sound more aligned whit what you are really aiming to.
> >
> > On Wednesday, 6 April 2016, Andrew Wilkins
> > <andrew.wilk...@canonical.com> wrote:
> >
> > On Tue, Apr 5, 2016 at 2:29 AM Cheryl Jennings
> > <cheryl.jenni...@canonical.com> wrote:
> >
> > Relevant bug:
> >  https://bugs.launchpad.net/juju-core/+bug/1553059
> >
> > We should provide a way to clean up controllers
> > without making the user manually edit juju's files.
> >
> >
> > Unless anyone objects, or has a better spelling, I will
> > be adding a command to do this:
> >
> > juju purge-controller 
> >
> > The command will require a "-y" or prompt for
> > confirmation, like kill-controller. It will not attempt
> > to destroy the controller, it will just remove the
> > details of it from the client.
> >
> > (Alternative suggestion for spelling: "juju
> > forget-controller". Purge-controller may suggest that
> > we're purging a controller of its contents, rather than
> > purging the controller from the client?)
> >
> > Cheers,
> > Andrew
> >
> > On Mon, Apr 4, 2016 at 7:05 AM, Nate Finch
> > <nate.fi...@canonical.com> wrote:
> >
> > This just happened to me, too.  Kill-controller
> > needs to work if at all possible.  That's the
> > whole point.  And yes, users may not hit
> > specific problems, but devs do, and that wastes
> > our time trying to figure out how to manually
> >   

Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
So, I agree, we should never need to use kill controller... but sometimes
you might need it, like if someone SSH's into their controller and messes
it up, etc.

Right now, kill-controller is in all the help docs, i.e. juju help
commands... are we planning on changing that?  I'm not sure I'm on board
with having hidden commands, because they will invariably leak out, and
then we just have commands that everyone knows about, but that are
undocumented.

I'd much prefer giant warning text:

DANGER!!! USE THIS COMMAND ONLY AS A LAST RESORT!!
IT MAY LEAVE RESOURCES RUNNING IN YOUR CLOUD!  ALWAYS
TRY destroy-controller FIRST!

And make the flag to disable the "are you sure?" prompt something long and
annoying to type, not just -y, because that's too easy to just bang out
with muscle memory.

In fact, if I were feeling sneaky, I might accept -y and still make them
confirm, since they're obviously just banging out the command without
thinking.

-Nate



On Wed, Apr 6, 2016 at 11:21 AM Rick Harding <rick.hard...@canonical.com>
wrote:

> I strongly feel that we're avoiding and dancing around the issue.
>
> The user should NEVER EVER have to use kill-controller. If someone types
> that we've failed to build Juju to the standards to which I feel we all
> should expect us to live up to. There is only ONE way for a user to take
> down a controller, destroy-controller.
>
> If a user has models running on that controller we've given them a flag to
> safely say "I know I have stuff running, but I don't care" with the current
> stuff running on there.
>
> I completely understand that destroy-controller is not the reverse of
> bootstrap. We've filed a bug for that and should correct the bug rather
> than moving to other commands and processes to deal with that.
>
> https://bugs.launchpad.net/juju-core/+bug/1563615
>
> kill-controller is a top secret tool we keep in our back pockets for
> emergency debugging when crap that's beyond our planning/control has
> occurred and we've not yet updated destroy-controlller to be able to handle
> that. It should not be in the main docs, release notes, etc. It's
> dangerous, folks should be discouraged from ever using it. Even when we do
> use it, we should be saying things like "Well, destroy-controller seems to
> be broken, we've got this potentially messy/risky way of trying to work
> around it that we can try...but..."
>
> As for the user case where a user registers on someone else's controller
> and wants to no longer have that controller appear, then we should update
> destroy-controller to handle that. There's only ONE way to remove a
> controller from your list, destroy-controller. We should be able to note
> that the user requesting destroy-controller does not have sufficient access
> to remove it and tell them "You've got these models running on this
> controller". And if they want to destroy with the flag to remove all their
> models, then we should do that and remove it from their system. If they
> have no running models, we should note that we're removing that
> controller's information from their system. If there's user confusion we
> can look at a message "You are not the controller admin, removing the
> controller information from your cache" or the like.
>
> I don't feel that adding another command for another way to remove a
> controller from list-controllers is something we want to do and we
> especially don't want to do it this week as we try to freeze 2.0 for
> release.
>
> If folks think I'm missing a point please reach out to me and lets move
> this to a high-bandwidth discussion, but I wanted to share the original
> design/thought on the destroy vs kill controller commands. I want everyone
> to share the feeling that if our users ever type kill-controller into their
> terminals we've failed them and realize that.
>
> Thanks
>
> Rick
>
> On Wed, Apr 6, 2016 at 11:02 AM Cheryl Jennings <
> cheryl.jenni...@canonical.com> wrote:
>
>> Another relevant bug:  https://bugs.launchpad.net/juju-core/+bug/1566426
>>
>> Maybe kill-controller tries to destroy through the API, but has a time
>> out if things get "stuck" where it will fall back to the provider.  I was
>> joking when I suggested yesterday in IRC that we should bring back --force
>> for kill-controller, but maybe we need it (or at least a timeout).
>>
>> On Wed, Apr 6, 2016 at 7:53 AM, Nate Finch <nate.fi...@canonical.com>
>> wrote:
>>
>>> Oh I see.  Yes, I agree that we should always try the right way first
>>> and only use the provider if necessary (especially if using the provider
>>> leaves garbage around).
>>>
>>> It seems like there's no reason why we

Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
Oh I see.  Yes, I agree that we should always try the right way first and
only use the provider if necessary (especially if using the provider leaves
garbage around).

It seems like there's no reason why we couldn't make a --force flag do it
that way in 2.0 (aside from time constraints).

On Wed, Apr 6, 2016 at 10:48 AM Aaron Bentley <aaron.bent...@canonical.com>
wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> On 2016-04-06 10:45 AM, Nate Finch wrote:
> > Wait, didn't destroy-environment --force fall back to the provider?
> > I thought that was the whole point of --force
>
> No, it didn't fall back.  It uses the provider unconditionally,
> without trying a normal destroy-controller first.
>
> Aaron
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2
>
> iQEcBAEBCAAGBQJXBSG5AAoJEK84cMOcf+9hzSQIAJ/vNKIa1/TnDSyvC2U9ApzW
> TAEvSqaEUw0ZL2dl2tiNKTp3JPzcnCR4VKrBIsh1xi0hB1UNtJR+IW4O46gRI6ok
> ZvA1cAvoJvRdmqf1ntNzYwHRSn/Tm82DGzixTPt0TcTn3KYrk13XpRJuxMbbvHDM
> LfYG0zglGmVKUaWs4rBogh4H4OaiOIR8lORXSC8GRQjA1/C4c+FjIg+KeW5Yw2Ti
> XnG87BPyJ1TtPGWxxeKAk4tnkZwnZKtJOnHU/IfvTFOpECdBjojWnnc6VbQ1um0H
> WwjR6EcA4qxkkhND6ypIGkt9A4k3ZZvckCau52EgIn3pnwhk5OSw64MURJAEmn0=
> =vm/H
> -END PGP SIGNATURE-
>


Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
Wait, didn't destroy-environment --force fall back to the provider?  I
thought that was the whole point of --force

On Wed, Apr 6, 2016 at 10:24 AM Aaron Bentley 
wrote:

> On 2016-04-06 08:22 AM, Andrew Wilkins wrote:
> > What I would like to see is: * destroy-controller to take on a
> > --force flag, causing it to do what kill-controller does now, and
> > what destroy-environment --force used to do
>
> What kill-controller does now, where it attempts to use Juju, but
> falls back to the provider, is much more valuable to us in QA than
> what destroy-environment --force used to do.  We don't want to leak
> resources unnecessarily, but we do want things to die when we try to
> kill them.
>


Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
I agree with your assessment of the vocabulary, for what it's worth.

On Wed, Apr 6, 2016 at 9:41 AM Horacio Duran <horacio.du...@canonical.com>
wrote:

> As a non English native myself I find destroy to be much more final than
> kill, to me destroy sounds like kill and then burn just in case, but I
> don't know if this extends to other non English speakers too.
>
> On Wednesday, 6 April 2016, Nate Finch <nate.fi...@canonical.com> wrote:
>
>> Also +1 to Andrew's proposal.  In particular, the difference between kill
>> and destroy is pretty arbitrary from a vocabulary standpoint, so it's not
>> clear which one is the default and which one is the extreme measure.  A
>> flag on destroy is a lot more clear in that regard (and reduces the number
>> of commands needed).
>>
>> On Wed, Apr 6, 2016 at 8:41 AM Horacio Duran <horacio.du...@canonical.com>
>> wrote:
>>
>>> Strong +1 to what Andrew just proposed
>>>
>>> On Wednesday, 6 April 2016, Andrew Wilkins <andrew.wilk...@canonical.com>
>>> wrote:
>>>
>>>> On Wed, Apr 6, 2016 at 7:35 PM Rick Harding <rick.hard...@canonical.com>
>>>> wrote:
>>>>
>>>>> +1 to the -1 of a new command for this. I'd like to raise the
>>>>> discussion with the technical board as I'd like to understand why
>>>>> the change that the team had to make made it impossible for 
>>>>> kill-controller
>>>>> to function and what we could do to allow the team to remove legacy code,
>>>>> but still be able to kill off things.
>>>>>
>>>>
>>>> Sorry, I probably should have started a new thread; this is orthogonal.
>>>> Still worth talking about with the board, but orthogonal to removing
>>>> details of a controller. You might "juju register" someone else's
>>>> controller, and then want to remove it from your client without destroying
>>>> it.
>>>>
>>>> I think the UX could do with rethinking. There's a few issues:
>>>>  (1) It's too easy to lose resources via kill-controller. See:
>>>> https://bugs.launchpad.net/juju-core/+bug/1559701 and
>>>> https://bugs.launchpad.net/juju-core/+bug/1566011
>>>>  (2) It's not clear from the name what kill-controller vs.
>>>> destroy-controller is, and so it's easy to assume they either do the same
>>>> thing, or just randomly choose one and run it. That leads to (1) happening
>>>> more than we'd like or expect.
>>>>  (3) destroy-controller is harder to use than kill-controller for the
>>>> standard case of destroying the controller and all of its hosted models.
>>>> You have to pass "--destroy-all-models" to destroy-controller, which you
>>>> don't have to pass to kill-controller. I don't understand the point of
>>>> that, given that you're already asked whether you want to destroy the
>>>> controller or not.
>>>>
>>>> What I would like to see is:
>>>>  * kill-controller to be dropped
>>>>  * destroy-controller's --destroy-all-models flag to be dropped, and
>>>> implied by the accepted prompt (or -y)
>>>>  * destroy-controller to take on a --force flag, causing it to do what
>>>> kill-controller does now, and what destroy-environment --force used to do
>>>>  * a new command to remove a controller from the client
>>>>
>>>> Why a new command? Because removing/forgetting is orthogonal to
>>>> destroying. It's plain weird to say "kill-controller --forget" (or
>>>> --cleanup, or whatever) if you're removing details of a live controller
>>>> that you just don't want to use any more.
>>>>
>>>> Cheers,
>>>> Andrew
>>>>
>>>> On Tue, Apr 5, 2016 at 11:55 PM Andrew Wilkins <
>>>>> andrew.wilk...@canonical.com> wrote:
>>>>>
>>>>>> On Tue, Apr 5, 2016 at 2:29 AM Cheryl Jennings <
>>>>>> cheryl.jenni...@canonical.com> wrote:
>>>>>>
>>>>>>> Relevant bug:  https://bugs.launchpad.net/juju-core/+bug/1553059
>>>>>>>
>>>>>>> We should provide a way to clean up controllers without making the
>>>>>>> user manually edit juju's files.
>>>>>>>
>>>>>>
>>>>>> Unless anyone objects, or has a better spelling, I will be adding a
>>>>>> command to 

Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
Also +1 to Andrew's proposal.  In particular, the difference between kill
and destroy is pretty arbitrary from a vocabulary standpoint, so it's not
clear which one is the default and which one is the extreme measure.  A
flag on destroy is a lot more clear in that regard (and reduces the number
of commands needed).

On Wed, Apr 6, 2016 at 8:41 AM Horacio Duran <horacio.du...@canonical.com>
wrote:

> Strong +1 to what Andrew just proposed
>
> On Wednesday, 6 April 2016, Andrew Wilkins <andrew.wilk...@canonical.com>
> wrote:
>
>> On Wed, Apr 6, 2016 at 7:35 PM Rick Harding <rick.hard...@canonical.com>
>> wrote:
>>
>>> +1 to the -1 of a new command for this. I'd like to raise the discussion
>>> with the technical board as I'd like to understand why the change
>>> that the team had to make made it impossible for kill-controller to
>>> function and what we could do to allow the team to remove legacy code, but
>>> still be able to kill off things.
>>>
>>
>> Sorry, I probably should have started a new thread; this is orthogonal.
>> Still worth talking about with the board, but orthogonal to removing
>> details of a controller. You might "juju register" someone else's
>> controller, and then want to remove it from your client without destroying
>> it.
>>
>> I think the UX could do with rethinking. There's a few issues:
>>  (1) It's too easy to lose resources via kill-controller. See:
>> https://bugs.launchpad.net/juju-core/+bug/1559701 and
>> https://bugs.launchpad.net/juju-core/+bug/1566011
>>  (2) It's not clear from the name what kill-controller vs.
>> destroy-controller is, and so it's easy to assume they either do the same
>> thing, or just randomly choose one and run it. That leads to (1) happening
>> more than we'd like or expect.
>>  (3) destroy-controller is harder to use than kill-controller for the
>> standard case of destroying the controller and all of its hosted models.
>> You have to pass "--destroy-all-models" to destroy-controller, which you
>> don't have to pass to kill-controller. I don't understand the point of
>> that, given that you're already asked whether you want to destroy the
>> controller or not.
>>
>> What I would like to see is:
>>  * kill-controller to be dropped
>>  * destroy-controller's --destroy-all-models flag to be dropped, and
>> implied by the accepted prompt (or -y)
>>  * destroy-controller to take on a --force flag, causing it to do what
>> kill-controller does now, and what destroy-environment --force used to do
>>  * a new command to remove a controller from the client
>>
>> Why a new command? Because removing/forgetting is orthogonal to
>> destroying. It's plain weird to say "kill-controller --forget" (or
>> --cleanup, or whatever) if you're removing details of a live controller
>> that you just don't want to use any more.
>>
>> Cheers,
>> Andrew
>>
>> On Tue, Apr 5, 2016 at 11:55 PM Andrew Wilkins <
>>> andrew.wilk...@canonical.com> wrote:
>>>
>>>> On Tue, Apr 5, 2016 at 2:29 AM Cheryl Jennings <
>>>> cheryl.jenni...@canonical.com> wrote:
>>>>
>>>>> Relevant bug:  https://bugs.launchpad.net/juju-core/+bug/1553059
>>>>>
>>>>> We should provide a way to clean up controllers without making the
>>>>> user manually edit juju's files.
>>>>>
>>>>
>>>> Unless anyone objects, or has a better spelling, I will be adding a
>>>> command to do this:
>>>>
>>>> juju purge-controller 
>>>>
>>>> The command will require a "-y" or prompt for confirmation, like
>>>> kill-controller. It will not attempt to destroy the controller, it will
>>>> just remove the details of it from the client.
>>>>
>>>> (Alternative suggestion for spelling: "juju forget-controller".
>>>> Purge-controller may suggest that we're purging a controller of its
>>>> contents, rather than purging the controller from the client?)
>>>>
>>>> Cheers,
>>>> Andrew
>>>>
>>>> On Mon, Apr 4, 2016 at 7:05 AM, Nate Finch <nate.fi...@canonical.com>
>>>>> wrote:
>>>>>
>>>>>> This just happened to me, too.  Kill-controller needs to work if at
>>>>>> all possible.  That's the whole point.  And yes, users may not hit 
>>>>>> specific
>>>>>> problems, but devs do, and that w

Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
I agree with Horacio, forget-controller is good because it only does what
you expect, whereas failing to kill a controller but making it look like
success is bad.

Honestly, I think the problem is kill-controller; it should just go back
to being a flag on destroy-controller.  Then we'd just have destroy and
forget.

On Wed, Apr 6, 2016, 6:59 AM Horacio Duran <horacio.du...@canonical.com>
wrote:

> The issue I see with that approach is that in that case kill-controller
> might be doing less than you expect instead of more, suppose the controller
> is having transient issues and kill controller cannot reach the cloud for
> deletion, this would forget the controller and leave it in the cloud,
> forget-controller instead tells us very clearly what is going to happen,
> the change is going to be local and not affect the controller.
> My 2c
>
>
> On Wednesday, 6 April 2016, Nick Veitch <nick.vei...@canonical.com> wrote:
>
>> just my tuppence
>>
>> instead of having another command, can't we just add this as an option to
>> kill-controller?
>>
>> juju kill-controller --cleanup 
>>
>>
>>
>> On 6 April 2016 at 11:05, Horacio Duran <horacio.du...@canonical.com>
>> wrote:
>>
>>>
>>> I might be biased by years of apt-get but purge makes me think that you
>>> are going to do what kill is supposed to do, forget sound more aligned whit
>>> what you are really aiming to.
>>>
>>> On Wednesday, 6 April 2016, Andrew Wilkins <andrew.wilk...@canonical.com>
>>> wrote:
>>>
>>>> On Tue, Apr 5, 2016 at 2:29 AM Cheryl Jennings <
>>>> cheryl.jenni...@canonical.com> wrote:
>>>>
>>>>> Relevant bug:  https://bugs.launchpad.net/juju-core/+bug/1553059
>>>>>
>>>>> We should provide a way to clean up controllers without making the
>>>>> user manually edit juju's files.
>>>>>
>>>>
>>>> Unless anyone objects, or has a better spelling, I will be adding a
>>>> command to do this:
>>>>
>>>> juju purge-controller 
>>>>
>>>> The command will require a "-y" or prompt for confirmation, like
>>>> kill-controller. It will not attempt to destroy the controller, it will
>>>> just remove the details of it from the client.
>>>>
>>>> (Alternative suggestion for spelling: "juju forget-controller".
>>>> Purge-controller may suggest that we're purging a controller of its
>>>> contents, rather than purging the controller from the client?)
>>>>
>>>> Cheers,
>>>> Andrew
>>>>
>>>> On Mon, Apr 4, 2016 at 7:05 AM, Nate Finch <nate.fi...@canonical.com>
>>>>> wrote:
>>>>>
>>>>>> This just happened to me, too.  Kill-controller needs to work if at
>>>>>> all possible.  That's the whole point.  And yes, users may not hit 
>>>>>> specific
>>>>>> problems, but devs do, and that wastes our time trying to figure out how 
>>>>>> to
>>>>>> manually clean up the garbage.
>>>>>>
>>>>>> On Mon, Apr 4, 2016 at 8:33 AM Rick Harding <
>>>>>> rick.hard...@canonical.com> wrote:
>>>>>>
>>>>>>> On Sun, Apr 3, 2016 at 6:56 PM Andrew Wilkins <
>>>>>>> andrew.wilk...@canonical.com> wrote:
>>>>>>>
>>>>>>>> In a non-beta release we would make sure that the config changes
>>>>>>>> aren't backwards incompatible.
>>>>>>>>
>>>>>>>
>>>>>>> I think this is the key thing. I think that kill-controller is an
>>>>>>> exception to this rule. I think we should always at least give the user 
>>>>>>> the
>>>>>>> ability to remove their stuff and start over with the new alpha/beta/rc
>>>>>>> release. I'd like to ask us to explore making kill-controller an 
>>>>>>> exception
>>>>>>> to this policy and that if tests prove we can't bootstrap on one beta 
>>>>>>> and
>>>>>>> kill with trunk that it's a blocking bug for us.
>>>>>>> --
>>>>>>> Juju-dev mailing list
>>>>>>> Juju-dev@lists.ubuntu.com
>>>>>>> Modify settings or unsubscribe at:
>>>>>>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>>>>>>
>>>>>>
>>>>>> --
>>>>>> Juju-dev mailing list
>>>>>> Juju-dev@lists.ubuntu.com
>>>>>> Modify settings or unsubscribe at:
>>>>>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>>>>>
>>>>>>
>>>>>
>>> --
>>> Juju-dev mailing list
>>> Juju-dev@lists.ubuntu.com
>>> Modify settings or unsubscribe at:
>>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>>
>>>
>>
>>
>> --
>> Nick Veitch,
>> CDO Documentation
>> Canonical
>>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>


Re: Charm layers & Windows

2016-04-04 Thread Nate Finch
At least... according to the core code, but it occurs to me I may well be
mistaken about what's allowed by the charm authoring tools and/or
charmstore.

On Mon, Apr 4, 2016 at 10:16 AM Nate Finch <nate.fi...@canonical.com> wrote:

> BTW, I don't believe this is true.  I didn't work on series in metadata,
> but I skimmed the code and didn't see anything doing checks for OS
> consistency across the stated series.  I just tried it with a charm labeled
> as trusty & win10 in the metadata.yaml and it worked fine deploying to
> trusty.
>
> On Mon, Apr 4, 2016 at 9:48 AM Stuart Bishop <stuart.bis...@canonical.com>
> wrote:
>
>>
>> You will need two root layers for the two charms, one for Ubuntu and
>> one for Windows, since you can't list both Windows and Ubuntu releases
>> as supported series in the same metadata.yaml.
>>
>>


Re: Unable to kill-controller

2016-04-04 Thread Nate Finch
This just happened to me, too.  Kill-controller needs to work if at all
possible.  That's the whole point.  And yes, users may not hit specific
problems, but devs do, and that wastes our time trying to figure out how to
manually clean up the garbage.

On Mon, Apr 4, 2016 at 8:33 AM Rick Harding 
wrote:

> On Sun, Apr 3, 2016 at 6:56 PM Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
>> In a non-beta release we would make sure that the config changes aren't
>> backwards incompatible.
>>
>
> I think this is the key thing. I think that kill-controller is an
> exception to this rule. I think we should always at least give the user the
> ability to remove their stuff and start over with the new alpha/beta/rc
> release. I'd like to ask us to explore making kill-controller an exception
> to this policy and that if tests prove we can't bootstrap on one beta and
> kill with trunk that it's a blocking bug for us.
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>


Re: Charm layers & Windows

2016-04-04 Thread Nate Finch
From what I've read, there's no server-side story to Ubuntu on Windows.
It's purely desktop, and in fact, only installs on desktop versions of
Windows 10.  That could help with tooling for charm authors, but obviously
the charm code itself still needs to run on vanilla Windows.

The main tricky part I see is that the charm code for Windows expects hooks
to be .exe, .ps1, .cmd, or .bat.  You can't have a hook with no extension
on Windows, and on Linux we expect it to be a command with no extension.
That being said, there's nothing that says you can't have both an
'install.ps1' hook file for Windows and an 'install' hook file for Linux in
the same charm.  The code will do the right thing and use the right hook
for each OS.
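
As a rough sketch (hypothetical layout, but this matches my understanding of
the hook lookup rules), such a dual-OS charm's hooks directory could look like:

hooks/
  install             # extensionless executable, used on Ubuntu units
  install.ps1         # PowerShell script, used on Windows units
  config-changed
  config-changed.ps1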

On Mon, Apr 4, 2016 at 7:46 AM Rick Harding 
wrote:

> I know that Gabriel and some of the CloudBase folks seemed interested in
> layers and possibly some tooling with powershell. I'm not sure how far that
> went but I thought they were experimenting during the charmer's summit.
> That would help with a charm build on windows, but not for some common code
> between both operating systems.
>
> An interesting thing is how much setup Ubuntu on Windows needs and how
> out-of-the-box it is. If it's working out of the box, it might be an
> interesting move for us and our tools, so that Windows users could get a
> Linux experience. I guess it won't be ideal though, as I'm not sure what
> the server-side plans around that work are.
>
> On Mon, Apr 4, 2016 at 3:18 AM Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
>> Hi,
>>
>> I would like to write a charm that should be mostly identical on Windows
>> and Linux, so I think it would make sense to have common code in the form
>> of a layer.
>>
>> Is anyone working on getting "charm build", layers, and friends to work
>> with Windows workloads? If not, I may look into it myself.
>>
>> Cheers,
>> Andrew
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju
>>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Go 1.6 is now in trusty-proposed

2016-03-28 Thread Nate Finch
I'll just note that we've had flaky tests for as long as I've been working
on Juju, and there's never a "good" time to fix them. :)

On Mon, Mar 28, 2016 at 11:48 AM Aaron Bentley 
wrote:

> On 2016-03-28 09:03 AM, Katherine Cox-Buday wrote:
> > Generally +1 on this, but I'm also intrigued by Martin's
> > statistic... do we currently weight test failures by how likely
> > they are to fail (i.e. how likely they are flaky)? That seems like
> > it would be a great metric to use to decide which to fix first.
>
> We don't do it on the likelihood of failure, but we do it on the
> frequency of failure.
>
> http://reports.vapour.ws/releases/top-issues
>
> I report on these on the cross-team call, and once the 2.0 settles
> down, I'll be reporting them on the release call again.
>
> Aaron
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Go 1.6 is now in trusty-proposed

2016-03-28 Thread Nate Finch
+1, don't retry... devs need to feel the pain in order to get proper
motivation to fix this stuff...

On Mon, Mar 28, 2016 at 9:03 AM Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

> Just wanted to say thank you 100x to all involved!
>
> On 03/24/2016 01:03 AM, Michael Hudson-Doyle wrote:
> > Hi,
> >
> > As of a few minutes ago, there is now a golang-1.6 package in
> > trusty-proposed:
> > https://launchpad.net/ubuntu/trusty/+source/golang-1.6 (thanks for the
> > review and copy, Steve).
> >
> > One difference between this and the package I prepared earlier is that
> > it does not install /usr/bin/go but rather /usr/lib/go-1.6/bin/go so
> > Makefiles and such will need to be adjusted to invoke that directly or
> > put /usr/lib/go-1.6/bin on $PATH or whatever. (This also means it can
> > be installed alongside the golang packages that are already in
> > trusty).
> >
> > Cheers,
> > mwh
> > (Hoping that we can now really properly ignore gccgo-4.9 ppc64el bugs!)
> >
> > On 17 February 2016 at 07:58, Michael Hudson-Doyle
> >  wrote:
> >> I have approval for the idea but also decided to wait for 1.6 and upload
> >> that instead. I'm also on leave currently so hopefully this can all
> happen
> >> in early March.
> >>
> >> Cheers,
> >> mwh
> >>
> >> On 17/02/2016 1:17 am, "John Meinel"  wrote:
> >>> To start with, thanks for working on this. However, doesn't this also
> >>> require changing the CI builds to use your ppa?
> >>>
> >>> What is the current state of this? I was just looking around and
> noticed
> >>> golang1.5-go isn't in anything specific for Trusty that I can see. I
> realize
> >>> if it's going into an SRU it requires a fair amount of negotiation with
> other
> >>> teams, so I'm not  surprised to see it take a while. I just wanted to
> check
> >>> how it was going.
> >>>
> >>> Thanks,
> >>>
> >>> John
> >>> =:->
> >>>
> >>> On Mon, Jan 18, 2016 at 7:32 AM, Michael Hudson-Doyle
> >>>  wrote:
>  Hi all,
> 
>  As part of the plan for getting Go 1.5 into trusty (see here
>  https://wiki.ubuntu.com/MichaelHudsonDoyle/Go15InTrusty) I've built
>  packages (called golang1.5-go rather than golang-go) for trusty in my
>  ppa:
> 
>  https://launchpad.net/~mwhudson/+archive/ubuntu/go15-trusty/+packages
> 
>  (assuming 3:1.5.3-0ubuntu4 actually builds... I seem to be having a
>  "make stupid packaging mistakes" day)
> 
>  I'll write up a SRU bug to start the process of getting this into
>  trusty tomorrow but before it does end up in trusty it would seem like
>  a good idea to run the CI tests using juju-core packages built with
>  this version of the go compiler. Is that something that's feasible to
>  arrange
> 
>  The only packaging requirement should be to change the build-depends
>  to be on golang1.5-go rather than golang-go or gccgo.
> 
>  Cheers,
>  mwh
> 
>  --
>  Juju-dev mailing list
>  Juju-dev@lists.ubuntu.com
>  Modify settings or unsubscribe at:
>  https://lists.ubuntu.com/mailman/listinfo/juju-dev
> >>>
>
> --
> -
> Katherine
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Go 1.6 is now in trusty-proposed

2016-03-24 Thread Nate Finch
Does this mean we can assume 1.6 for everything from now on, or is there
some other step we're waiting on?  I have some code that only needs to
exist while we support 1.2, and I'd be happy to just delete it.
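
For illustration, one common shape for that kind of code is a file kept behind
a build constraint so it only compiles on the old toolchain (package and
function names below are made up):

// +build !go1.3

// This file only compiles on Go 1.2 and earlier; once we can assume a newer
// toolchain everywhere, it can simply be deleted.
package mypkg

// oldGoWorkaround stands in for whatever 1.2-only shim we carry.
func oldGoWorkaround() {}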

On Thu, Mar 24, 2016, 4:07 AM Tim Penhey  wrote:

> Awesome news Michael.
>
> Thank you for all your work on this.
>
> Tim
>
> On 24/03/16 19:03, Michael Hudson-Doyle wrote:
> > Hi,
> >
> > As of a few minutes ago, there is now a golang-1.6 package in
> > trusty-proposed:
> > https://launchpad.net/ubuntu/trusty/+source/golang-1.6 (thanks for the
> > review and copy, Steve).
> >
> > One difference between this and the package I prepared earlier is that
> > it does not install /usr/bin/go but rather /usr/lib/go-1.6/bin/go so
> > Makefiles and such will need to be adjusted to invoke that directly or
> > put /usr/lib/go-1.6/bin on $PATH or whatever. (This also means it can
> > be installed alongside the golang packages that are already in
> > trusty).
> >
> > Cheers,
> > mwh
> > (Hoping that we can now really properly ignore gccgo-4.9 ppc64el bugs!)
> >
> > On 17 February 2016 at 07:58, Michael Hudson-Doyle
> >  wrote:
> >> I have approval for the idea but also decided to wait for 1.6 and upload
> >> that instead. I'm also on leave currently so hopefully this can all
> happen
> >> in early March.
> >>
> >> Cheers,
> >> mwh
> >>
> >> On 17/02/2016 1:17 am, "John Meinel"  wrote:
> >>>
> >>> To start with, thanks for working on this. However, doesn't this also
> >>> require changing the CI builds to use your ppa?
> >>>
> >>> What is the current state of this? I was just looking around and
> noticed
> >>> golang1.5-go isn't in anything specific for Trusty that I can see. I
> realize
> >>> if it's going into an SRU it requires a fair amount of negotiation with
> other
> >>> teams, so I'm not  surprised to see it take a while. I just wanted to
> check
> >>> how it was going.
> >>>
> >>> Thanks,
> >>>
> >>> John
> >>> =:->
> >>>
> >>> On Mon, Jan 18, 2016 at 7:32 AM, Michael Hudson-Doyle
> >>>  wrote:
> 
>  Hi all,
> 
>  As part of the plan for getting Go 1.5 into trusty (see here
>  https://wiki.ubuntu.com/MichaelHudsonDoyle/Go15InTrusty) I've built
>  packages (called golang1.5-go rather than golang-go) for trusty in my
>  ppa:
> 
>  https://launchpad.net/~mwhudson/+archive/ubuntu/go15-trusty/+packages
> 
>  (assuming 3:1.5.3-0ubuntu4 actually builds... I seem to be having a
>  "make stupid packaging mistakes" day)
> 
>  I'll write up a SRU bug to start the process of getting this into
>  trusty tomorrow but before it does end up in trusty it would seem like
>  a good idea to run the CI tests using juju-core packages built with
>  this version of the go compiler. Is that something that's feasible to
>  arrange
> 
>  The only packaging requirement should be to change the build-depends
>  to be on golang1.5-go rather than golang-go or gccgo.
> 
>  Cheers,
>  mwh
> 
>  --
>  Juju-dev mailing list
>  Juju-dev@lists.ubuntu.com
>  Modify settings or unsubscribe at:
>  https://lists.ubuntu.com/mailman/listinfo/juju-dev
> >>>
> >>>
> >>
> >
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Master is blocked in preparation for 2.0-beta3

2016-03-22 Thread Nate Finch
May I ask why we're not making a branch for beta3, so work may continue on
master?

On Tue, Mar 22, 2016 at 1:07 PM Cheryl Jennings <
cheryl.jenni...@canonical.com> wrote:

> Hi Everyone,
>
> Master is currently blocked to allow us to get to a stable 2.0-beta3.
> While we are working towards that goal, please do not JFDI changes into
> master.
>
> Please let me know if you have questions or concerns.
>
> Thanks!
> -Cheryl
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: New feature for charmers - min-juju-version

2016-03-19 Thread Nate Finch
Yes, it'll be ignored, and the charm will be deployed normally.

On Thu, Mar 17, 2016 at 3:29 PM Ryan Beisner <ryan.beis...@canonical.com>
wrote:

> This is awesome.  What will happen if a charm possesses the flag in
> metadata.yaml and is deployed with 1.25.x?  Will it gracefully ignore it?
>
> On Thu, Mar 17, 2016 at 1:57 PM, Nate Finch <nate.fi...@canonical.com>
> wrote:
>
>> There is a new (optional) top level field in the metadata.yaml file
>> called min-juju-version. If supplied, this value specifies the minimum
>> version of a Juju server with which the charm is compatible. When a user
>> attempts to deploy a charm (whether from the charmstore or from local) that
>> has min-juju-version specified, if the targeted model's Juju version is
>> lower than that specified, then the user will be shown an error noting that
>> the charm requires a newer version of Juju (and told what version they
>> need). The format for min-juju-version is a string that follows the same
>> scheme as our release versions, so you can be as specific as you like. For
>> example, min-juju-version: "2.0.1-beta3" will deploy on 2.0.1 (release),
>> but will not deploy on 2.0.1-alpha1 (since alpha1 is older than beta3).
>>
>> Note that, at this time, Juju 1.25.x does *not* recognize this flag, so
>> it will, unfortunately, not be respected by 1.25 environments.
>>
>> This code just landed in master, so feel free to give it a spin.
>>
>> -Nate
>>
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju
>>
>>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Unfairly weighted charmstore results

2016-03-19 Thread Nate Finch
BTW, I reported a very similar problem in this bug:
https://github.com/CanonicalLtd/jujucharms.com/issues/192

On Thu, Mar 17, 2016 at 10:18 AM Uros Jovanovic <
uros.jovano...@canonical.com> wrote:

> Hi Tom,
>
> We currently bump the recommended charms over the community ones. The
> reason others show up is due to using N-grams (3-N) in search, and the
> ranking logic using that puts recommended charms over the non-recommended ones. And
> we're not only searching over names of charms but a bunch of content that a
> charm has.
>
> The system works relatively well for recommended charms if you know the
> name (or close to what the name is), but not in cases where a name is long
> and the charm is only in community space. That's why you get better results
> with a short query vs a longer one.
>
> We're working on providing better search results in the following weeks.
>
>
>
>
> On Thu, Mar 17, 2016 at 2:18 PM, Tom Barber 
> wrote:
>
>> Cross posted from IRC:
>>
>> Hello folks,
>>
>> I have a gripe about the charm store search. Mostly because it's really
>> badly weighted towards recommended charms, and finding what you (an end
>> user) want is really hard unless you know what you're doing.
>>
>> Take this example:
>>
>> https://jujucharms.com/q/pentaho
>>
>> Now I'm writing a charm called Pentaho Data Integration, so why do I have
>> to scroll past 55 recommended charms that have nothing to do with what I
>> have looked for?
>>
>> But
>>
>> https://jujucharms.com/q/etl
>>
>> Shows me exactly what I need at the top, with no recommended charms
>> blocking the view.
>>
>> So I guess its weighted towards tags, then names, sorta.
>>
>> Im not against recommended charms being dumped at the top, they are
>> recommended after all but it appears the ranking could be vastly improved.
>>
>> Off the top of my head, a ranking combo of something like keyword
>> relevance, recommended vs non-recommended, times deployed, age, tags and
>> last updated would give a half-decent weighting for the charms and would
>> hopefully stop 55 unrelated charms appearing at the top of the list.
>>
>> Now I guess, I could dump pentaho in as a tag to get me top of the SEO
>> rankings, but it seems like generally the method could be improved as the
>> amount of charms increases, quite plausibly using something like Apache
>> Nutch to crawl the available charms and build a proper search facility
>> would improve things.
>>
>> Cheers
>>
>> Tom
>>
>>
>> --
>>
>> Director Meteorite.bi - Saiku Analytics Founder
>> Tel: +44(0)5603641316
>>
>> (Thanks to the Saiku community we reached our Kickstart
>> goal, but you can always help by sponsoring the project)
>>
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju
>>
>>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


New feature for charmers - min-juju-version

2016-03-19 Thread Nate Finch
There is a new (optional) top level field in the metadata.yaml file called
min-juju-version. If supplied, this value specifies the minimum version of
a Juju server with which the charm is compatible. When a user attempts to
deploy a charm (whether from the charmstore or from local) that has
min-juju-version specified, if the targeted model's Juju version is lower
than that specified, then the user will be shown an error noting that the
charm requires a newer version of Juju (and told what version they need).
The format for min-juju-version is a string that follows the same scheme as
our release versions, so you can be as specific as you like. For example,
min-juju-version: "2.0.1-beta3" will deploy on 2.0.1 (release), but will
not deploy on 2.0.1-alpha1 (since alpha1 is older than beta3).
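
To make that concrete, a metadata.yaml using it might look roughly like this
(the charm name, summary, and series below are made up for illustration):

name: my-service
summary: An example service
description: A charm that needs a recent Juju to work.
series:
  - trusty
min-juju-version: "2.0.1-beta3"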

Note that, at this time, Juju 1.25.x does *not* recognize this flag, so it
will, unfortunately, not be respected by 1.25 environments.

This code just landed in master, so feel free to give it a spin.

-Nate
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


juju versions are now in github.com/juju/version

2016-03-19 Thread Nate Finch
For a recent change in 2.0, we needed to be able to reference the
version.Number struct from both juju-core and charm.v6.  In order to
support this, we moved most of the contents of github.com/juju/juju/version
to a top-level repo at github.com/juju/version.  Note that certain
juju-core-specific data still exists in the original folder, such as the
current Juju binary version.
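
In practice it's just an import path change; a rough sketch, assuming the
types kept their existing names after the move:

package main

import (
    "fmt"

    "github.com/juju/version" // was "github.com/juju/juju/version"
)

func main() {
    // version.Number and its helpers now live in the top-level repo.
    v := version.MustParse("2.0.1")
    fmt.Println(v.Major, v.Minor, v.Patch)
}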

Just wanted everyone to be aware of the change, so that if you're writing
code that interacts with version numbers, you know where that code now
lives.

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Diffing string checker

2016-02-25 Thread Nate Finch
I agree with Horacio, you're my hero, Andrew.

I was actually just looking at sergi's diffmatchpatch implementation for
exactly this purpose a couple days ago.  Thanks for doing that!

On Thu, Feb 25, 2016 at 6:09 AM Andrew Wilkins 
wrote:

> On Thu, Feb 25, 2016 at 5:23 PM roger peppe  wrote:
>
>> Nice. FWIW I have a related tool that tries to help find where a regexp
>> mismatch has happened. It parses the output of gocheck, so it's
>> probably not that useful to non-acme users but you might want
>> to consider including the approach (http://paste.ubuntu.com/15195795/)
>> for fancycheck.Matches and ErrorMatches, perhaps, to highlight where
>> the mismatch starts.
>>
>
> Thanks, that is a better idea. I'll look into doing that next time I'm
> staring at a wall of text :)
>
>
>> On 25 February 2016 at 06:26, Andrew Wilkins
>>  wrote:
>> > Howdy,
>> >
>> > Occasionally I'll change a test, and some string equality test will fail
>> > with a wall of text. Sometimes we shouldn't be checking the whole
>> string,
>> > but sometimes it's legitimate to do so, and it can be difficult/tedious
>> to
>> > spot the differences.
>> >
>> > I've just written a checker which diffs the two string args, and
>> colourises
>> > the output. You may find it useful. I'm using red/green background, but
>> I
>> > also added bold for insertions, strike-through for deletions, in case
>> you're
>> > red/green colour blind. My terminal doesn't do strike-through, and
>> your's
>> > probably doesn't either. Anyway, the important thing is you can see the
>> > difference between bits that are the same vs. insertions/deletions.
>> >
>> > Code is at github.com/axw/fancycheck. Just replace
>> > c.Assert("x", gc.Equals, "y")
>> > with
>> > c.Assert("x", fancycheck.StringEquals, "y")
>> >
>> > Cheers,
>> > Andrew
>> >
>> > --
>> > Juju-dev mailing list
>> > Juju-dev@lists.ubuntu.com
>> > Modify settings or unsubscribe at:
>> > https://lists.ubuntu.com/mailman/listinfo/juju-dev
>> >
>>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Please merge master into your feature branches

2016-02-18 Thread Nate Finch
So uh... how are we going to deliver patches to 1.25 if CI can't run it?

On Thu, Feb 18, 2016 at 9:48 PM Andrew Wilkins 
wrote:

> On Fri, Feb 19, 2016 at 10:03 AM Ian Booth 
> wrote:
>
>> FYI for folks developing feature branches for juju-core.
>>
>> juju-core master has been updated to include the first round of
>> functionality to
>> improve the bootstrap experience. The consequence of this is that CI
>> scripts
>> needed to be updated to match. This means that any feature branch which
>> has not
>> had master commit 294388 or later merged in will not work with CI and so
>> will
>> not be blessed for release.
>>
>
> Further to this, please see the draft 2.0 release notes for instructions:
> http://paste.ubuntu.com/15127859/
>
> If there's anything you can't figure out from that and/or command help,
> please let us know so we can fix the docs.
>
> Cheers,
> Andrew
>
>
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Introducing, juju resources!

2016-02-13 Thread Nate Finch
One idea is to have a resource type of "directory" and upload everything in
that directory.  The current code has only one resource type - file.   But
it's copied to a directory on the unit named by the resource. So it would
be pretty easy to instead copy a number of files to that directory.

Of course, this would be a resources v2 feature. I'm sure there's rework
that would be needed on the back end, but it certainly seems doable.  We
also had ideas for making a github repo as a resource, which might be an
even better UX, depending on your use case.

On Sat, Feb 13, 2016, 9:10 AM Rick Harding 
wrote:

> On Fri, Feb 12, 2016 at 11:06 PM Aaron Bentley <
> aaron.bent...@canonical.com> wrote:
>
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA256
>>
>> On 2016-02-12 04:26 PM, Katherine Cox-Buday wrote:
>> > Moonstone have been working hard on a new feature coming up in Juju
>> > 2.0 called "Juju Resources", and we're now at a point where we can
>> > share the goodness and call for bugs/feedback!
>>
>> I'm glad to see you folks are working on this. On the QA side, we've
>> been wanting Juju to support some kind of native blob storage,
>> basically since we started.
>>
>> Is it necessary to enumerate the names in the charm? I'd rather we
>> didn't. One of our use cases is plugins for Jenkins. The Jenkins charm
>> has a setting that tells it which plugins to install, but there's
>> no predetermined list. It would be nice if the plugins could be
>> retrieved in a Juju-native way, but it would be painful to have to
>> update the charm every time we added a new plugin.  And it would be
>> bad if people ended up forking the charm just to adjust the list of
>> plugins.
>>
>> We could bundle the plugins in a single tarfile, but then updating the
>> tarfile would be annoying. For plugins, it's best if each plugin is
>> its own file and there are no restrictions on the filenames.
>
>
> Your read is correct. You must declare the resources. It's helpful to
> users to know what to stick in there and for the charm to be able to handle
> different items. In your case, a single tar file of the plugins you'd like
> to enable could be sent up and the charm would be able to untar, loop
> through them, etc. Ideally the charm would be uploaded to the store with a
> standard set of plugins and folks could then customize which ones they
> wanted by creating their own tar of plugins.
>
> I'll make sure we keep this in mind though. We've started with one file
> per resource, and maybe there's an idea here of "one or more" where you
> might have a charm that takes a data set file. You can deploy with a data
> set, or multiple files of data that all need to be processed, rather than
> building into a single file. We'll have to think it through for a future
> iteration of resources.
>
> Rick
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju terminology change: controllers and models

2016-02-02 Thread Nate Finch
FYI, I noticed ServiceDeployWithNetworks still exists as a client and
facade method, but it's only called by tests. Maybe it should be removed?

On Tue, Feb 2, 2016, 8:34 PM Ian Booth  wrote:

> Hey all
>
> As has been mentioned previously in this list, for the Juju 2.0 release we
> have
> been working on fundamental terminology changes. In particular, we now talk
> about controllers and models instead of state servers and environments.
>
> To this end, a rather large change has landed in master and the upcoming
> 2.0-alpha2 release of Juju will reflect these changes. There are several
> things
> to be aware of. We have also taken the opportunity to remove a lot of code
> which
> existed to support older Juju clients. Needless to say, this Juju 2.0
> release
> will not support upgrading from 1.x - it works only as a clean install.
>
> Note: some of the changes will initially break the GUI and users of the
> Python
> Juju Client - work is underway to update these products for the next alpha3
> release scheduled for next week. For those wishing to continue to test
> Juju 2.0
> without the breaking changes, the alpha1 release is still available via
> ppa:juju/experimental. Separate communications to affected stakeholders
> has/will
> be made as part of the 2.0-alpha2 release.
>
> So, the changes are roughly broken down as follows:
>
> - CLI command name changes
> - facade name changes
> - api method and parameter name changes
> - facade method restructure
> - internal api name changes
> - external artifact/data changes (including on the wire changes)
> - deprecated and older version facades are removed
>
> 1. CLI command name changes
>
> As an obvious example, create-environment becomes create-model. We also
> have
> destroy-controller etc. This alpha2 release will also contain many of the
> other
> CLI changes targeted for 2.0, e.g. juju backup create becomes juju
> create-backup.
> Not all 2.0 CLI syntax is supported yet, but all the environment -> model
> changes are done.
>
> You will also use -m  instead of -e .
>
> The release notes will go into more detail.
>
> All user facing text now refers to model instead of environment.
>
> 2. Facade name changes
>
> If you are curious, see https://goo.gl/l4JqGd for a representative
> listing of
> all facade and method names and which ones have been changed.
>
> The main one is EnvironmentManager becomes ModelManager. These changes
> affect
> external API clients like the GUI and Python Juju client.
>
> 3. api method and parameter name changes
>
> By way of example:
> EnvironInfo() on the undertaker facade becomes ModelInfo().
> The param struct ModifyEnvironUsers becomes ModifyModelUsers etc.
> EnvironTag attributes become ModelTag.
>
> 4. Service facade method restructure
>
> As part of making our facades more manageable and maintainable when API
> changes
> are required, a whole bunch of service related methods are moved off the
> Client
> facade and onto the Service facade. This had already been started months
> ago,
> and there were shims in place to keep existing clients working, but now
> the job
> is finished.
> eg Client.AddRelation() becomes Service.AddRelation() etc.
>
> This change will break the GUI and Python Juju client.
>
> 5. Internal API name changes
>
> Things like state.AllEnvironments() becomes state.AllModels(), we now use
> names.ModelTag instead of names.EnvironTag, and many, many more.
>
> Note: the names package has not been forked into a .V2 yet (with EnvironTag
> removed) as there are dependencies to sort out. Please do not use
> EnvironTag
> anymore.
>
> 6. External artifact/data changes (including on the wire changes)
>
> There are several main examples here.
> On the wire, we transmit model-uuid tags rather than environment-uuid tags.
> In mongo, we store model-uuid doc fields rather than env-uuid.
> In agent.conf files we store Model info rather than Environment tags.
In the controller blob store, we store and manage blobs for buckets rather
than environments.
The controller HTTP endpoints are /model/
In backups we store model tags and details, not environment.
>
> With the blobstore, we've create a .V2 branch which core uses. The
> charmstore
> will continue to use V1 for now.
>
> 7. Deprecated and older version facades are removed
>
> All facade versions have been bumped up. Older facades are removed, and we
> don't
> support fallback to older servers. The main example for facade removal is
> uniter
> v0 and v1 are gone. With deprecated API removal, service deployment now
> expects
> placement parameters rather than catering for older clients that did not
> support
> placement, so there's only one ServiceDeployMethod() instead of 3. All in
> all,
> the code base has been simplified by removing all support for deprecated
> facades
> and API calls.
>
> There are still a couple of grey areas internally to be sorted out. But
> the user
> facing changes are done (pending more CLI work between now and 

Re: Build errors

2016-02-01 Thread Nate Finch
Juju is not structured to work with go get, per se.  Please see the
contributing document for details on building juju:

https://github.com/juju/juju/blob/master/CONTRIBUTING.md


On Mon, Feb 1, 2016 at 1:33 PM Neale Ferguson  wrote:

> I built juju a few weeks ago and attempted to do it today using the same
> procedure. During the stage:
>
> go get -d -v github.com/juju/juju/...
>
> I am now getting:
>
> github.com/Azure/azure-sdk-for-go (download)
> package github.com/juju/juju/cmd/juju
> imports
> github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/github.com/Azure/go
> -autorest/autorest
> :
> cannot find package
> "
> github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/github.com/Azure/g
> o-autorest/autorest
> "
> in any of:
>
> /home/neale/go/src/
> github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/
> github.com/Azure/go-autorest/autorest (from $GOROOT)
>
> /home/neale/rpmbuild/BUILD/juju/src/
> github.com/Azure/azure-sdk-for-go/Godep
> s/_workspace/src/github.com/Azure/go-autorest/autorest
> 
> (from $GOPATH)
>
>
> And the build fails.
>
> Is this some transient condition related to the Azure git repo or
> something else?
>
> Neale
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Tags and object IDs

2016-01-25 Thread Nate Finch
I was really trying not to give too much information about this exact case,
so we could avoid talking about a specific implementation, and focus on the
more general question of how we identify objects.  Yes, we get the bytes
using an HTTP request, but that is irrelevant to my question :)

On Mon, Jan 25, 2016 at 2:00 AM John Meinel <j...@arbash-meinel.com> wrote:

> On Sat, Jan 23, 2016 at 1:28 AM, William Reade <
> william.re...@canonical.com> wrote:
>
>> On Fri, Jan 22, 2016 at 9:53 PM, Nate Finch <nate.fi...@canonical.com>
>> wrote:
>>
>>> Working in the model layer on the server between the API and the DB.
>>> Specifically in my instance, an API call comes in from a unit, requesting
>>> the bytes for a resource.  We want to record that this unit is now using
>>> the bytes from that specific revision of the resource.  I have a pointer to
>>> a state.Unit, and a function that takes a Resource metadata object and some
>>> reference to the unit, and does the actual transaction to the DB to store
>>> the unit's ID and the resource information.
>>>
>>
>> I'm a bit surprised that we'd be transferring those bytes over an API
>> call in the first place (is json-over-websocket really a great way to send
>> potential gigabytes? shouldn't we be getting URL+SHA256 from the apiserver
>> as we do for charms, and downloading separately? and do we really want to
>> enforce charmstore == apiserver?); and I'd point out that merely having
>> agreed to deliver some bytes to a client is no indication that the client
>> will actually be using those bytes for anything; but we should probably
>> chat about those elsewhere, I'm evidently missing some context.
>>
>
> So I would have expected that we'd rather use a similar raw
> HTTP-to-get-content instead of a JSON request (given the intent of
> resources is that they may be GB in size), but regardless it is the intent
> that you download the bytes from the charm rather than from the store
> directly.
> Similar to how we currently fetch the charm archive content itself.
> As for "will you be using it", the specific request from the charm is when
> it calls "resource-get" which is very specifically the time when the charm
> wants to go do something with those bytes.
>
> John
> =:->
>
>
>> But whenever we do record the unit-X-uses-resource-Y info I assume we'll
>> have much the same stuff available in the apiserver, in which case I think
>> you just want to pass the *Unit back into state; without it, you just need
>> to read the doc from the DB all over again to make appropriate
>> liveness/existence checks [0], and why bother unless you've already hit an
>> assertion failure in your first txn attempt?
>>
>> Cheers
>> William
>>
>> [0] I imagine you're not just dumping (unit, resource) pairs into the DB
>> without checking that they're sane? that's really not safe
>>
>>
>>> On Fri, Jan 22, 2016 at 3:34 PM William Reade <
>>> william.re...@canonical.com> wrote:
>>>
>>>> Need a bit more context here. What layer are you working in?
>>>>
>>>> In general terms, entity references in the API *must* use tags; entity
>>>> references that leak out to users *must not* use tags; otherwise it's a
>>>> matter of judgment and convenience. In state code, it's annoying to use
>>>> tags because we've already got the globalKey convention; in worker code
>>>> it's often justifiable if not exactly awesome. See
>>>> https://github.com/juju/juju/wiki/Managing-complexity#workers
>>>>
>>>> Cheers
>>>> William
>>>>
>>>> On Fri, Jan 22, 2016 at 6:02 PM, Nate Finch <nate.fi...@canonical.com>
>>>> wrote:
>>>>
>>>>> I have a function that is recording which unit is using a specific
>>>>> resource.  I wrote the function to take a UnitTag, because that's the
>>>>> closest thing we have to an ID type. However, I and others seem to 
>>>>> remember
>>>>> hearing that Tags are really only supposed to be used for the API. That
>>>>> leaves me with a problem - what can I pass to this function to indicate
>>>>> which unit I'm talking about?  I'd be fine passing a pointer to the unit
>>>>> object itself, but we're trying to avoid direct dependencies on state.
>>>>> People have suggested just passing a string (presumably
>>>>> unit.Tag().String()), but then my API is too lenient - it appears to say
>>>>> "give me any s

Re: Defaulting tests internal to the package

2016-01-24 Thread Nate Finch
I think the idea that tests of an internal function are worthless because
you might just not be using that function is a fairly specious argument.
That could be applied to unit tests at any level.  What if you're not using
the whole package?  By that logic, the only reliable tests are full stack
tests.

I think that William's position is correct in an ideal world.  The public
interface is the only contract you really need to worry about.  In theory,
all your packages would be simple enough that this would not be difficult,
and the tests would be straightforward.  However, in practice, some
packages are fairly complex beasts, and it can literally be an order of
magnitude easier to test a bit of logic internally than to test it
externally.  And I don't just mean easier in terms of time it takes to
write (although that shouldn't be ignored - we do have deadlines), but also
in simplicity of the code in the tests... the more complicated the code in
the tests, the harder they are to get right, the more difficult it is for
other people to understand them, and the more difficult it is to maintain
them.

Even if you have perfect code coverage, using inversion of control and
dependency injection to be able to do every test externally, you can still
have bugs that slip through, either due to weaknesses in your tests, or
even just poorly coded mocks that don't actually behave like the production
things they're mocking.  Isolating a piece of code down to its bare bones
can make its intended behavior a lot easier to define and understand, and
therefore easier to test.

I think small scope internal unit tests deliver incredible bang for the
buck.  They're generally super easy to write, super easy to understand, and
give you confidence that you haven't screwed up a piece of logic. They're
certainly not ironclad proof that your application as a whole doesn't have
bugs in it... but they are often very strong proof that this bit of logic
does not have bugs.

While I have seen plenty of bugs that were caused by a developer not using
a piece of code he/she was supposed to use, it has almost always been
exported code that wasn't used - a utility function in another package, for
example.  Certainly it's possible you could accidentally stop using an
internal function and not have anyone notice... but I think that's fairly
unlikely compared to the concrete benefits you get from simpler,
faster-to-write internal tests.

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Tags and object IDs

2016-01-22 Thread Nate Finch
Working in the model layer on the server between the API and the DB.
Specifically in my instance, an API call comes in from a unit, requesting
the bytes for a resource.  We want to record that this unit is now using
the bytes from that specific revision of the resource.  I have a pointer to
a state.Unit, and a function that takes a Resource metadata object and some
reference to the unit, and does the actual transaction to the DB to store
the unit's ID and the resource information.

On Fri, Jan 22, 2016 at 3:34 PM William Reade <william.re...@canonical.com>
wrote:

> Need a bit more context here. What layer are you working in?
>
> In general terms, entity references in the API *must* use tags; entity
> references that leak out to users *must not* use tags; otherwise it's a
> matter of judgment and convenience. In state code, it's annoying to use
> tags because we've already got the globalKey convention; in worker code
> it's often justifiable if not exactly awesome. See
> https://github.com/juju/juju/wiki/Managing-complexity#workers
>
> Cheers
> William
>
> On Fri, Jan 22, 2016 at 6:02 PM, Nate Finch <nate.fi...@canonical.com>
> wrote:
>
>> I have a function that is recording which unit is using a specific
>> resource.  I wrote the function to take a UnitTag, because that's the
>> closest thing we have to an ID type. However, I and others seem to remember
>> hearing that Tags are really only supposed to be used for the API. That
>> leaves me with a problem - what can I pass to this function to indicate
>> which unit I'm talking about?  I'd be fine passing a pointer to the unit
>> object itself, but we're trying to avoid direct dependencies on state.
>> People have suggested just passing a string (presumably
>> unit.Tag().String()), but then my API is too lenient - it appears to say
>> "give me any string you want for an id", but what it really means is "give
>> me a serialized UnitTag".
>>
>> I think most places in the code just use a string for an ID, but this
>> opens up the code to abuses and developer errors.
>>
>> Can someone explain why tags should only be used in the API? It seems
>> like the perfect type to pass around to indicate the ID of a specific
>> object.
>>
>> -Nate
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
>>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Defaulting tests internal to the package

2016-01-22 Thread Nate Finch
I'm glad to hear Roger's opinion about testing internal code... that's
exactly how I feel.  True unit tests of small bits of code are easy to
write, easy to read and understand, and give you confidence that your code
is doing what you think it'll do in a wide variety of cases.  If the unit
test fails, it's generally incredibly clear where the fault in the code
lies, and it's easy to fix either the code or the test.  In 6 months, you
or someone else can go back to the tests, easily understand what they are
testing, and modify them trivially when making a bug fix or improvement.

While you certainly *can* write code that tests all the corner cases of
some small utility function through the external API... those tests will
almost always be incredibly opaque and hard to understand, and generally
much more complicated than they would be if you could just test the utility
function by itself.  This is often a problem I have with our current
tests... it's hard to see what is actually being tested, and even harder to
verify that the test itself is correct and complete.  When a test fails,
you often have no clue where the actual problem lies, because the test
traverses so much code.

Of course, I definitely think you *also* need tests of the exported API of
a package... but small unit tests are still incredibly valuable.


On Fri, Jan 22, 2016 at 7:06 AM roger peppe 
wrote:

> On 22 January 2016 at 09:40, William Reade 
> wrote:
> > I do not want anyone to add unit tests for non-exported code, because
> those
> > tests are almost completely worthless.
>
> I'd beg to disagree with this. I think it very much depends what you are
> testing. For me tests are about giving confidence that the code works.
> Sometimes as part of the implementation of a package I'll write a
> function with a well defined contract that doesn't justify being made
> public because it's intimately related to the way that the package is
> implemented. The function might be reasonably tricky, and it may be hard
> to reach all its corner cases at arm's length through the public API.
> It might not even be possible, but that doesn't mean that it's not
> useful to test the function, as it may provide a foundation for more
> varied future functionality.
>
> In this kind of scenario, I think it makes good sense to write a unit test
> for the unexported function. If you change the implementation, you might
> throw away the function and its tests, and that's fine, but this approach,
> in my experience, can give good confidence in some tricky situations with
> significantly less effort than doing everything through the public API.
>
> As always, there's a trade-off here - we want to maximise confidence
> while minimising time and effort spent writing and maintaining the
> code. It's always going to be a judgement call and flat assertions like
> "those tests are almost completely worthless" are not that helpful IMHO.
>
> I do agree that writing external tests *when reasonable* is the way,
> and I think that export_test.go is a reasonable escape hatch for the
> times when it's useful to write internal tests.
>
> Nate, you seem to be arguing that because we can't have total separation
> between external and internal APIs, then we should not have any separation
> at all. I don't agree - I think that keeping as much as possible to
> the external API, but having a few well-defined exported objects (in
> export_test.go) works well, and reflects the nuanced situation that we
> actually live in.
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Tags and object IDs

2016-01-22 Thread Nate Finch
I have a function that is recording which unit is using a specific
resource.  I wrote the function to take a UnitTag, because that's the
closest thing we have to an ID type. However, I and others seem to remember
hearing that Tags are really only supposed to be used for the API. That
leaves me with a problem - what can I pass to this function to indicate
which unit I'm talking about?  I'd be fine passing a pointer to the unit
object itself, but we're trying to avoid direct dependencies on state.
People have suggested just passing a string (presumably
unit.Tag().String()), but then my API is too lenient - it appears to say
"give me any string you want for an id", but what it really means is "give
me a serialized UnitTag".
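
Roughly, the choice is between signatures like these two (the package and
function names are made up, not the actual resources code):

package example

import "github.com/juju/names"

// Option A: the parameter type spells out the contract.
func recordResourceUser(unit names.UnitTag, resourceID string) error {
    _ = unit.Id() // the unit ID we would store alongside resourceID
    return nil
}

// Option B: compiles with any string; the "this must be a serialized
// UnitTag" contract is invisible to the caller.
func recordResourceUserByID(unitID, resourceID string) error {
    _ = unitID
    return nil
}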

I think most places in the code just use a string for an ID, but this opens
up the code to abuses and developer errors.

Can someone explain why tags should only be used in the API? It seems like
the perfect type to pass around to indicate the ID of a specific object.

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Sending binaries over relations

2016-01-21 Thread Nate Finch
I actually don't see why you would ever need to distribute a specific
library for connecting to an API on another charm.  The charm using the
dynamically determined client would still require a static (i.e. backward
compatible) API on the client library (otherwise the client charm would
have no way to know how to use the client library)... so why not just put
that static API on the server? and then you don't need a different client.
Obviously, you can distribute new clients with new functionality, but by
definition, you'd need new code in the client charm to understand how to
use that functionality.

On Thu, Jan 21, 2016 at 8:55 AM Merlijn Sebrechts <
merlijn.sebrec...@gmail.com> wrote:

> Hi Mark
>
>
> I understand your concern. Leaking operational details is something that
> should not be done in Juju. Apache proxy relationship cannot be implemented
> by Nginx because the relation leaks operational details (apache specific
> config files). This is not 'the Juju way'.
>
> The reason for sharing resources actually is interoperability. Say a
> client wants to connect to a server, then the client may need libraries to
> do that. What libraries will be used is dependent on both the client and
> the server. If you ship the libraries with the client, you are essentially
> hard-coding the server version. This is also not the Juju way; information
> like this should be passed over relations. The problem here is that both
> Charms have to make a joint decision about what libraries, if any, to use
> and where to get them. No Charm has full knowledge to make that decision.
>
> I've seen two common patterns in the wild for these kind of libraries:
>
> 1. The client uses its own libraries compiled for that specific version of
> the server.
> 2. The client uses libraries bundled with the server.
>
> The first case is easy: The client has full knowledge about where to get
> the libraries. It only needs to know the server version. The server tells
> the client his version and it's done. The second case is a little bit
> tougher. The client has knowledge about what libraries it needs, but the
> server has knowledge about where to get those libraries. In this case, the
> client should request the libraries from the server, and the server should
> have a way to send them to the client.
>
> If both cases are supported by the server, you can swap any client in and
> out. This also translates nicely to the idea that a Charm gets created by
> the person who is an expert in that service. The server experts know where
> to get the libraries if the server has to provide them. The client expert
> knows how to compile the libraries if the client has to compile them.
>
> What do you think?
>
>
>
>
> *PS: I don't think this should be done on stack level, since the shared
> libraries are only relevant to the two related Charms. The relevance of the
> libraries is on relation-level so they should be handled by the relations.*
>
>
>
>
> 2016-01-21 13:58 GMT+01:00 Mark Shuttleworth :
>
>>
>> Two thoughts.
>>
>> First, I think it's interesting that you see resources as such a dynamic
>> thing. I believe the current model accommodates this (we considered
>> user-provided resources, and what you are describing are essentially
>> charm-generated resources, but they would behave the same way).
>>
>> Second, I am sceptical about cross-charm resource coordination. We go to
>> great lengths to keep a service encapsulated. Each service is generally
>> prevented from knowing too much about what is going on in the service
>> next door. This is deliberate because it leads to the development of
>> services which can more naturally be swapped out, because the web of
>> expectations between related services is limited by what they can
>> possibly know about each other. What you are describing suggests too
>> intimate an awareness between two services of their exact operational
>> implementation.
>>
>> That said, we do have an idea in the "futures" department, which is a
>> higher-level charm that would manage a group of charms. While they would
>> remain encapsulated, the higher-level charm would have admin-like
>> visibility across all of the services it is supervising. Imagine an
>> "openstack" charm which can command-and-control events across all the
>> constituent services. In THAT case, yes, it would make sense for the
>> higher level charm, which we call a "stack", to be able to coordinate
>> resources across services under its supervision.
>>
>> How does that sound?
>>
>> Mark
>>
>> On 21/01/16 01:21, Merlijn Sebrechts wrote:
>> > Hi John, Mark
>> >
>> >
>> > If I understand you correctly, a Charm will be able to decide what
>> version
>> > of the resource it needs at runtime? This way, a Charm could tell
>> related
>> > Charms what version of the resource they should get? That would solve my
>> > use-case almost completely...
>> >
>> > The only exception being the case where a Charm compiles a library
>> during
>> > 

Defaulting tests internal to the package

2016-01-21 Thread Nate Finch
[reposting this to the wider juju-dev mailing list]

So, I hit an interesting problem a while back.  I have some unit tests that
need to be internal tests, thus they are in 'package foo'.  However, my
code needs to use some testhelper functions that someone else wrote...
which are in 'package foo_test'.  It's impossible to reference those, so I
had to move those helpers to package foo, which then either requires that
we make them exported (which is exactly like using export_test.go, which,
in the juju-core Oakland sprint, we all agreed was bad), or all tests that
use the helpers need to be in 'package foo'... which means I had to go
change a bunch of files to be in package foo, and change the calls in those
files from foo.SomeFunc() to just SomeFunc().

Given the assumption that some tests at some point will make sense to be
internal tests, and given it's likely that helper functions/types will want
to be shared across suites - should we not just always make our tests in
package foo, and avoid this whole mess in the first place?

(A note, this happened again today - I wanted to add a unit test of a
non-exported function to an existing test suite, and can't because the unit
tests are in the foo_test package)

There seems to only be two concrete benefits to putting tests in package
foo_test:
1. It avoids circular dependencies if your test needs to import something
that imports the package you're testing.
2. It makes your tests read like normal usages of your package, i.e.
calling foo.SomeFunc().
The first is obviously non-negotiable when it comes up... but I think it's
actually quite rare (and might indicate a mixture of concerns that warrants
investigation).  The second is nice to have, but not really essential (if
we want tests that are good examples, we can write example functions).
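
For anyone who hasn't bumped into this, here's a minimal sketch of the three
flavours involved (the package, paths, and functions are all made up):

// foo/foo.go
package foo

// sum is an unexported helper we might want to test directly.
func sum(a, b int) int { return a + b }

// SomeFunc is the exported API.
func SomeFunc(a, b int) int { return sum(a, b) }

// foo/sum_internal_test.go -- internal tests live in package foo and can
// see sum() directly.
package foo

import "testing"

func TestSum(t *testing.T) {
    if got := sum(1, 2); got != 3 {
        t.Fatalf("sum(1, 2) = %d, want 3", got)
    }
}

// foo/export_test.go -- the escape hatch mentioned above: re-export
// internals so that package foo_test can reach them.
package foo

var Sum = sum

// foo/foo_test.go -- external tests live in package foo_test and can only
// use the exported API (plus whatever export_test.go re-exports).
package foo_test

import (
    "testing"

    "example.com/foo" // hypothetical import path
)

func TestSomeFunc(t *testing.T) {
    if got := foo.SomeFunc(1, 2); got != 3 {
        t.Fatalf("SomeFunc(1, 2) = %d, want 3", got)
    }
}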

So, I propose we put all tests in package foo by default.  For those devs
that want to only test the exported API of a package, you can still do
that.  But this prevents problems where helper code can't be shared between
the two packages without ugliness and/or dumb code churn, and it means
anyone can add unit tests for non-exported code without having to create a
whole new file and testsuite.

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Copyright headers for auto-generated files?

2016-01-07 Thread Nate Finch
So, two things:
1.) We don't need to worry about this anymore, since I've removed the need
for the generated code.  But to clarify, I was using stringer (
https://github.com/golang/tools/blob/master/cmd/stringer/stringer.go) which
generates files with String() methods for enums.
2.) There's like 1000 go generate tools to embed files as resources in Go
code... we should probably just use one of those, rather than rolling our
own.  (for example, many of them gzip the contents to make them smaller,
etc).
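
For reference, the pattern in question looks roughly like this (the type name
is made up; stringer then writes the _string.go file that lacked the header):

package example

//go:generate stringer -type=Phase

// Phase is an example enum; running `go generate` invokes stringer, which
// writes phase_string.go containing the String() method -- that generated
// file is the one without a copyright header.
type Phase int

const (
    PhasePending Phase = iota
    PhaseRunning
    PhaseDone
)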

On Thu, Jan 7, 2016 at 3:12 AM Ian Booth  wrote:

> We're already using go generate in the cloud-credentials feature branch.
> And the
> code we generate has the correct header.
>
> We have a top level generate package into which it is intended various
> generators will live.
>
> Here's a generator which reads the contents of a file and assigns that to
> a Go
> constant.
>
> https://github.com/juju/juju/blob/cloud-credentials/generate/filetoconst.go
>
>
>
> On 07/01/16 17:47, Andrew McDermott wrote:
> > Can the "thing" that is generating the content not add the header?
> >
> > On 6 January 2016 at 22:18, Katherine Cox-Buday <
> > katherine.cox-bu...@canonical.com> wrote:
> >
> >> Nate ran into an interesting problem yesterday. If we begin to use go
> >> generate directives to auto-generate code, do we care that this code
> >> doesn't have a Copyright header? We check the generated file into source
> >> control, but as the file is meant to be regenerated in the event of a
> >> change, we don't want to edit it by hand to add the header.
> >>
> >> --
> >> -
> >> Katherine
> >>
> >>
> >> --
> >> Juju-dev mailing list
> >> Juju-dev@lists.ubuntu.com
> >> Modify settings or unsubscribe at:
> >> https://lists.ubuntu.com/mailman/listinfo/juju-dev
> >>
> >
> >
> >
> >
> >
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: 1st time builder

2015-12-17 Thread Nate Finch
Usually if there are compile time issues, it's because you haven't run
godeps yet.
Make sure you've done the steps under this section of the docs:
https://github.com/juju/juju/blob/master/CONTRIBUTING.md#godeps
go-get by default pulls from the head of every git repo, but to keep our
builds repeatable, we use godeps to pin the revision of each repo, which
can occasionally mean that HEAD of master may not compile if someone has
broken something in a dependent repo.
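
From memory, the full sequence is roughly the following; treat CONTRIBUTING.md
as authoritative if anything here is off:

go get -d -v github.com/juju/juju/...
go get github.com/rogpeppe/godeps
cd $GOPATH/src/github.com/juju/juju
godeps -u dependencies.tsv
go install github.com/juju/juju/...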

Also, we don't officially support go 1.5 yet, though I believe the build
works and almost all the tests work (there are a small handful of tests
that fail in go 1.5 IIRC) but that wouldn't cause the compile issues
you're getting here.

On Thu, Dec 17, 2015 at 2:10 PM Neale Ferguson  wrote:

> I am attempting to build juju for the 1st time using the instructions at
> https://github.com/juju/juju and appear to have all the dependencies
> installed. I am using golang 1.5. I have searched this month¹s mailing
> list archives (juju and juju-dev) and googled the symptoms and got
> nothing. I am getting the following when building:
>
> src/github.com/juju/juju/provider/gce/google/instance.go:212: cannot use
> value (type string) as type *string in field value
> src/github.com/juju/juju/provider/gce/google/instance.go:228: cannot use
> item.Value (type *string) as type string in assignment
>
> src/github.com/juju/juju/provider/lxd/lxdclient/client_raw.go:23: cannot
> use (*lxd.Client)(nil) (type *lxd.Client) as type rawClientWrapperFull in
> assignment:
> *lxd.Client does not implement rawClientWrapperFull (wrong type for
> MigrateFrom method)
> have MigrateFrom(string, string, map[string]string, int,
> map[string]string, shared.Devices, []string, string, bool) (*lxd.Response,
> error)
> want MigrateFrom(string, string, map[string]string,
> map[string]string,
> []string, string, bool) (*lxd.Response, error)
>
> src/github.com/juju/juju/provider/azure/environ.go:126: undefined:
> resources.Group
> src/github.com/juju/juju/provider/azure/instance.go:224: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network".SecurityRuleProtocolTCP
> src/github.com/juju/juju/provider/azure/instance.go:226: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network".SecurityRuleProtocolUDP
> src/github.com/juju/juju/provider/azure/instance.go:336: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network".SecurityRuleProtocolTCP
> src/github.com/juju/juju/provider/azure/instance.go:338: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network".SecurityRuleProtocolUDP
> src/github.com/juju/juju/provider/azure/networking.go:65: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network".SecurityRuleProtocolTCP
> src/github.com/juju/juju/provider/azure/networking.go:284: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network².SecurityRuleProtocolTCP
>
> src/github.com/juju/juju/provider/azure/utils.go:15: cannot use
>  (type **map[string]*string) as type *map[string]*string in
> return argument
>
> src/github.com/juju/juju/provider/azure/vmextension.go:68: cannot use
>  (type *map[string]*string) as type
> *map[string]interface {} in field value
>
>
> Before I go too far down the rabbit hole trying to determine the problem,
> I was wondering if either this is a known problem or something I am doing
> incorrectly for the build?
>
> I am building this on a CentOS 7.1 system.
>
> Neale
>
>
>


Re: Starting a new CI test

2015-12-17 Thread Nate Finch
That is amazing and awesome. Thanks, Martin and everyone else who helped
with this!  Lowering the barrier to entry for writing CI tests can only be a
good thing.

On Thu, Dec 17, 2015 at 1:00 PM Martin Packman 
wrote:

> I just added a cheaty way of getting started on a new CI test.
>
> $ make new-assess name=new_feature
>
> With 'new_feature' as the name you want for your test. This generates
> a new test script, and some unit tests to go along with it, which you
> can run as normal with:
>
> $ make test p=*new_feature*
>
> The QA team has done a fair bit of work of late, both simplifying code
> and updating older scripts. As we continue to refactor things, however,
> these templates should prove to be a better example than looking at
> other random tests. Input from our (many) python experts is welcome.
>
> Next steps, a test runner that handles both environment setup and
> annoying problems like ssh config neatly.
>
> Martin
>
>


Re: Comprehensive reviews & when to do them?

2015-12-14 Thread Nate Finch
I think that as long as we make it clear what kind of review we're asking
for, and make sure that we really do follow up on review comments, we can
have the best of both worlds.  I do think that reviewing once all the
code is done is kind of fruitless... there's a lot more impetus to just
leave the code as it is, since so much time has been invested in it, and
there's often just so much code that really reviewing it would take all day,
if not multiple days.

Reviewing in chunks is a lot more reasonable; it just takes some awareness
that you can't expect everything to be perfect and finished (if you want
perfect and finished, wait until it's a 16000-line code drop from the feature
branch).

But it definitely takes discipline on the developer's part to ensure that
they actually go back and address whatever review comments were left, even
if they can't get to everything before they merge to the feature branch.

On Mon, Dec 14, 2015 at 8:22 AM roger peppe 
wrote:

> On 14 December 2015 at 15:39, Katherine Cox-Buday
>  wrote:
> > Hey All,
> >
> > I think we have a mis-alignment in how we currently do reviews and
> feature
> > branches.
> >
> > We've switched over to feature-branches which is great and has allowed
> > Moonstone to land "good enough" code into our feature branch to support a
> > bi-weekly demo and solicit feedback. At the same time, I feel like we're
> > wasting people's time asking for a +1 on code that is not intended to be
> > landed into master. Often this code will take shortcuts or stub out some
> > code, and as the lead I'll make a judgment call to circle-back later.
> > Reviewers don't necessarily know this.
> >
> > Conversely, when we go to land the feature branch into master, these PRs
> are
> > generally rubber-stamped.
> >
> > I feel like maybe we have this backwards?
>
> I feel your pain.
>
> But the other side of this coin is that feature branches can be very big,
> so it's very difficult to give them a comprehensive review when merging
> them into master.
>
> For example, this feature branch (https://github.com/juju/juju/pull/3590)
> involved more than 16000 lines of changes ("diff" prints 30616 lines).
> I wouldn't have any trust in the results of a code review of all that
> code at once.
>
> I think that if you're going to implement features that you don't want
> comprehensively reviewed, it's best to treat them as "spikes" and keep
> them out of the juju repo entirely, then try to land them in manageable
> chunks on the feature branch (or master).
>
> In my view, an important attribute of the feature branches is that they
> are solidly reviewed and thus OK to land without full review of the entire
> diff.
>
> That said, I believe that it is *also* important that people do try
> to review feature branches when they merge, if at all reasonable. It's
> often the first time that people not directly involved with the feature
> branch actually look at the implications for master.
>
>   cheers,
> rog.
>
>


Re: Be very careful in assumptions in tests

2015-12-03 Thread Nate Finch
I'll definitely +1 the need for gc.HasLen ... I've seen a ton of panics in
tests if the code starts erroneously returning nil slices.  Obviously that's
less bad, since the tests still fail, but they're really ugly, annoying
failures.

And +1000 to making tests that fail before fixing the code (or at least,
purposely re-breaking the code if you forget and fix the bug first)... I've
caught so many invalid tests that way... it is the only way to prove to
yourself that your tests actually work correctly, both in proving correct
behavior and in signalling bad behavior.
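
To make the failure mode concrete, here's a rough sketch (gc is
gopkg.in/check.v1 and jc is github.com/juju/testing/checkers; the suite and
API names are invented):

    func (s *demoSuite) TestAssignments(c *gc.C) {
        results, err := s.api.Assignments()
        c.Assert(err, jc.ErrorIsNil)
        // Without this HasLen assert, an unexpectedly empty slice means the
        // loop below never runs, no asserts fire, and the test passes while
        // proving nothing.
        c.Assert(results, gc.HasLen, 1)
        for _, r := range results {
            c.Assert(r.Error, gc.IsNil)
        }
    }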

On Thu, Dec 3, 2015 at 5:24 PM Tim Penhey  wrote:

> Hi folks,
>
> I came across an interesting bug yesterday and during investigation
> found that there was a very comprehensive test that covered the situation.
>
> The problem is that the asserts were not actually running.
>
> The asserts were inside a loop where the expectation was that the loop
> would run exactly once as a previous call returned an array of items
> with one value in it.
>
> Except it didn't.
>
> It returned an empty list.
>
> So, if you see something like this, a simple line like:
>
>   c.Assert(someList, gc.HasLen, 1)
>
> Is sufficient to have caught this particular issue.
>
>
> Remember a key step when writing tests is to have the test fail first,
> then make it pass.
>
> One trick I use is to assert checks against data I know is wrong. If the
> test passes then I know I have other problems.
>
> Tim
>
>


Re: apiserver authorizers

2015-12-02 Thread Nate Finch
Auth on the api is a big mystery to me. Is there a document on the expected
behavior and the functions and types that back it up?

For example, you said "an environ provisioner may have wide-ranging powers,
but that's no reason to let it see or modify container machines that are
not its direct responsibility" ... I don't really know what that means.
This sounds like authorization, but how do you know who is calling your api
or what they're supposed to be allowed to do?
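
(William's reply below clears this up somewhat. For reference, the kind of
check he's describing lives in the facade constructor and looks roughly like
this -- the facade name is invented and the method/error names are from
memory, so treat it as a sketch against apiserver/common rather than real
Juju code:)

    func NewFrobnicatorAPI(st *state.State, resources *common.Resources,
        authorizer common.Authorizer) (*FrobnicatorAPI, error) {
        // Refuse to construct the facade at all unless the connection is the
        // kind of agent this API is meant for; everyone else gets ErrPerm.
        if !authorizer.AuthMachineAgent() {
            return nil, common.ErrPerm
        }
        return &FrobnicatorAPI{st: st, authorizer: authorizer}, nil
    }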

On Wed, Dec 2, 2015, 6:36 AM William Reade 
wrote:

> The case I spotted yesterday was here:
> http://reviews.vapour.ws/r/3243/diff/1/?file=167369#file167369line41
>
> and in general:
>
>   * seeing `_ common.Authorizer` in a parameter list is a sign you're DIW.
>   * failing to check some auth property when creating the facade is a sign
> you're DIW.
>   * failing to pass the auth on to the facade type itself *might* be OK,
> but should be viewed with some suspicion -- for example:
>   * an environ provisioner may have wide-ranging powers, but that's no
> reason to let it see or modify container machines that are not its direct
> responsibility
>   * right now, many user-facing facades don't need further
> authorization once they know it's an authenticated client, but that won't
> last forever
>
> helpful?
>
> Cheers
> William
>
> On Wed, Dec 2, 2015 at 12:26 PM, roger peppe 
> wrote:
>
>> Could you link to the offending change(s) please, so we
>> can see what doing it wrong looks like?
>>
>> On 2 December 2015 at 09:28, William Reade 
>> wrote:
>> > I just noticed that the unitassigner facade-constructor drops the
>> authorizer
>> > on the floor; and I caught a similar case in a review yesterday (that
>> had
>> > already been LGTMed by someone else).
>> >
>> > Doing that means that *any* api connection can use the thus-unprotected
>> > facade -- clients, agents, and malicious code running in a compromised
>> > machine and using the agent credentials. I don't think we have any APIs
>> > where this is actually a good idea; the best I could say about any such
>> case
>> > is that it's not *actively* harmful *right now*. But big exploits are
>> made
>> > of little holes, let's make an effort not to open them in the first
>> place.
>> >
>> > Moonstone, please fix the unitassigner facade ASAP; everyone else, be
>> told,
>> > and keep an extra eye out for this issue in reviews :).
>> >
>> > Cheers
>> > William
>> >
>> >
>>
>
>


Re: utils/fslock needs to DIAF

2015-12-01 Thread Nate Finch
Certainly using Windows' file system lock is appropriate for locking its
files.  I thought the case we were talking about was just abusing that
ability as a generic cross-process lock.

I wasn't aware of how configstore was using fslock.

On Tue, Dec 1, 2015 at 11:58 AM roger peppe <roger.pe...@canonical.com>
wrote:

> On 1 December 2015 at 16:43, Nate Finch <nate.fi...@canonical.com> wrote:
> > I think half the problem is that someone named the package fslock and not
> > oslock, so we're stuck asking the wrong question.
> >
> > If the question is "How do I acquire an OS-level lock for a process?"
> The
> > answer on Windows is "Use a named mutex".
>
> That's not the question we need to answer here, for the configstore case
> anyway. In that case, we really do want to protect access to a shared data
> store in the filesystem, so the term "fslock" is appropriate I think.
>
> >  Dave seems to be saying that the
> > only problem with unix domain sockets is if you try to think about them
> like
> > files... which may be a result of us thinking too much about the solution
> > and not the problem. (again, I'm not an expert, you guys can duke out
> what
> > the best solution is)  I think using hackery with the filesystem locking
> on
> > Windows is a mistake.
>
> Is there something wrong with using the Windows file-exclusive-open
> semantics to implement a lock associated with a entity in the file system?
> Isn't that what it's for?
>
>
> > On Tue, Dec 1, 2015 at 4:10 AM roger peppe <roger.pe...@canonical.com>
> > wrote:
> >>
> >> So I'm with axw on this one - flock seems like it is a reasonable tool
> for
> >> the job here. FWIW a Unix domain socket also suffers from the "won't
> >> work across NFS" problem. Abstract unix domain sockets also
> >> have the problem that they won't work with long file paths (what
> >> were they thinking?)
> >>
> >> We have been using github.com/camlistore/lock and although it's not
> >> totally ideal (see https://github.com/camlistore/lock/issues/10 )
> >> it does the job. Note that it's non-blocking, so a blocking
> >> layer above it is necessary, for example see the lockFile
> >> function in
> >> https://github.com/juju/persistent-cookiejar/blob/master/serialize.go
> >>
> >> The Windows named mutex thing does sound interesting because
> >> it's a blocking API, which is actually what we need. On the other
> >> hand, under Windows, files opened for writing are locked by default
> >> anyway, so it might be easier just to leverage that property.
> >> The camlistore lock code could use some improvement for the
> >> Windows platform - we could either fork it, or bradfitz would
> >> probably be amenable to a PR.
> >>
> >>   cheers,
> >> rog.
> >>
> >> On 1 December 2015 at 04:12, Nate Finch <nate.fi...@canonical.com>
> wrote:
> >> > I'm not a linux expert, but definitely a named mutex is exactly the
> >> > correct
> >> > thing to use for Windows.  Using something else for this purpose would
> >> > be
> >> > very surprising to a Windows dev and much more likely to be buggy.  I
> >> > don't
> >> > see any reason we need to use the exact same implementation on all
> OSes
> >> > if
> >> > there is something that does exactly the right thing without any
> >> > finagling.
> >> > FWIW, mutexes do die with the process:
> >> >
> >> >
> https://msdn.microsoft.com/en-us/library/windows/desktop/ms682411(v=vs.85).aspx
> >> >
> >> > On Mon, Nov 30, 2015 at 8:29 PM Andrew Wilkins
> >> > <andrew.wilk...@canonical.com> wrote:
> >> >>
> >> >> On Tue, Dec 1, 2015 at 9:07 AM David Cheney
> >> >> <david.che...@canonical.com>
> >> >> wrote:
> >> >>>
> >> >>> http://0pointer.de/blog/projects/locking.html
> >> >>>
> >> >>> In short, opening the same file twice and asserting a lock on it
> will
> >> >>> succeed.
> >> >>
> >> >>
> >> >> Thanks. The article is a bit exasperated. Yes, there are problems to
> be
> >> >> aware of, but it doesn't make them unusable in all cases.
> >> >>  - Juju agents don't get installed onto NFS file systems, so doesn't
> >> >> matter for the agents.
> >> >>  - We're in full control

Re: utils/fslock needs to DIAF

2015-12-01 Thread Nate Finch
I think half the problem is that someone named the package fslock and not
oslock, so we're stuck asking the wrong question.

If the question is "How do I acquire an OS-level lock for a process?"  The
answer on Windows is "Use a named mutex".  Dave seems to be saying that the
only problem with unix domain sockets is if you try to think about them
like files... which may be a result of us thinking too much about the
solution and not the problem. (again, I'm not an expert, you guys can duke
out what the best solution is)  I think using hackery with the filesystem
locking on Windows is a mistake.
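
(For concreteness, the flock route Roger describes below is roughly this much
code on *nix -- a sketch only, not what configstore actually does:)

    package lockdemo

    import (
        "os"
        "syscall"
    )

    // lockFile blocks until it holds an exclusive flock on path. The kernel
    // releases the lock automatically if the process dies.
    func lockFile(path string) (*os.File, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0644)
        if err != nil {
            return nil, err
        }
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            f.Close()
            return nil, err
        }
        return f, nil // closing f releases the lock
    }

The NFS caveats from the thread still apply, and Windows would need a
different mechanism, which is where the named mutex comes in.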

On Tue, Dec 1, 2015 at 4:10 AM roger peppe <roger.pe...@canonical.com>
wrote:

> So I'm with axw on this one - flock seems like it is a reasonable tool for
> the job here. FWIW a Unix domain socket also suffers from the "won't
> work across NFS" problem. Abstract unix domain sockets also
> have the problem that they won't work with long file paths (what
> were they thinking?)
>
> We have been using github.com/camlistore/lock and although it's not
> totally ideal (see https://github.com/camlistore/lock/issues/10 )
> it does the job. Note that it's non-blocking, so a blocking
> layer above it is necessary, for example see the lockFile
> function in
> https://github.com/juju/persistent-cookiejar/blob/master/serialize.go
>
> The Windows named mutex thing does sound interesting because
> it's a blocking API, which is actually what we need. On the other
> hand, under Windows, files opened for writing are locked by default
> anyway, so it might be easier just to leverage that property.
> The camlistore lock code could use some improvement for the
> Windows platform - we could either fork it, or bradfitz would
> probably be amenable to a PR.
>
>   cheers,
> rog.
>
> On 1 December 2015 at 04:12, Nate Finch <nate.fi...@canonical.com> wrote:
> > I'm not a linux expert, but definitely a named mutex is exactly the
> correct
> > thing to use for Windows.  Using something else for this purpose would be
> > very surprising to a Windows dev and much more likely to be buggy.  I
> don't
> > see any reason we need to use the exact same implementation on all OSes
> if
> > there is something that does exactly the right thing without any
> finagling.
> > FWIW, mutexes do die with the process:
> >
> https://msdn.microsoft.com/en-us/library/windows/desktop/ms682411(v=vs.85).aspx
> >
> > On Mon, Nov 30, 2015 at 8:29 PM Andrew Wilkins
> > <andrew.wilk...@canonical.com> wrote:
> >>
> >> On Tue, Dec 1, 2015 at 9:07 AM David Cheney <david.che...@canonical.com
> >
> >> wrote:
> >>>
> >>> http://0pointer.de/blog/projects/locking.html
> >>>
> >>> In short, opening the same file twice and asserting a lock on it will
> >>> succeed.
> >>
> >>
> >> Thanks. The article is a bit exasperated. Yes, there are problems to be
> >> aware of, but it doesn't make them unusable in all cases.
> >>  - Juju agents don't get installed onto NFS file systems, so doesn't
> >> matter for the agents.
> >>  - We're in full control of the files we're locking, we're not locking
> >> some file like /etc/passwd where some other random bit of code in the
> >> process is going to open/close it and release the lock by accident.
> >>  - We don't need byte-range locking.
> >>
> >> So only the original uncertainty remains: do we care about clients
> running
> >> their home directory on an NFS share, where the NFS *server* is too old
> to
> >> support flock?
> >>
> >> Maybe a named mutex on Windows and a domain socket on *NIX is the way to
> >> go. I'm not dead set on flock; I just want something that is simple,
> robust
> >> and portable.
> >>
> >> Cheers,
> >> Andrew
> >>
> >>> On Tue, Dec 1, 2015 at 12:04 PM, Andrew Wilkins
> >>> <andrew.wilk...@canonical.com> wrote:
> >>> > On Tue, Dec 1, 2015 at 8:57 AM David Cheney
> >>> > <david.che...@canonical.com>
> >>> > wrote:
> >>> >>
> >>> >> fcntl won't work in threaded go applications, it barely works in non
> >>> >> threaded applications.
> >>> >
> >>> >
> >>> > This is news to me. I've used it plenty in the past, in
> multi-threaded
> >>> > programs. Please point me at relevant literature that explains where
> it
> >>> > doesn't work.
> >>> >
> >>> >>
> >>> >> I'm 

Re: lxd provider in master

2015-11-23 Thread Nate Finch
I guess we don't error out when we see duplicate environment names?
Shouldn't we?

On Mon, Nov 23, 2015 at 6:47 PM Tim Penhey  wrote:

> >> In environments.yaml file I added a very simple clause:
> >>
> >>lxd:
> >>   type: lxd
> >
>
> found that further down the file I found this:
>
> lxd:
>type: local
>container: lxd
>
> That was causing problems :-)
>


Re: New feature for 1.26 (master), $(JUJU_HOME)/aliases

2015-10-27 Thread Nate Finch
You're not selling me on this ;)

On Tue, Oct 27, 2015, 4:01 PM Tim Penhey  wrote:

> I can't help but draw parallels to the plugin system.
>
> There was considerable resistance to adding the ideas of plugins to
> Juju. It was added complexity that no one needed and could be done
> simply using existing executable scripts, like juju-foo.
>
> Tim
>
> On 28/10/15 07:33, Cory Johns wrote:
> > You can accomplish most of this with the existing plugin system.  You
> > can't override existing commands, but you can easily create thin
> > wrappers around them with your desired default args, and given the
> > discussion around --no-aliases, it seems like this is actually a
> > benefit.  And plugins provide an easy way to add more intelligent
> > handling of args, and also come with zero additional dev burden on core.
>
>
>


Re: On Call Reviewer Duties

2015-10-21 Thread Nate Finch
Sounds good to me.

On Wed, Oct 21, 2015, 6:39 AM Matthew Williams <
matthew.willi...@canonical.com> wrote:

> Hey Folks,
>
> I propose that on call reviewers as part of their job should also review
> the issues list on https://github.com/juju/juju/issues and attempt to
> triage anything that's there. We moved to github to improve collaboration,
> but the issues list is often left ignored. If every OCR took a look at it
> we'd get it under control very quickly
>
> Thoughts
>
> Matty
>

