Re: lxd and constraints

2017-01-13 Thread Nate Finch
I just feel like we're entering a minefield that our application and CLI
aren't really built to handle.  I think we *should* handle it, but it needs
to be well planned out, instead of just doing a tiny piece at a time and
only figuring out later if we did the right thing.

There are a few problems I can see:

1.) you can have 10 lxd containers with memory limit of 2GB on a machine
with 4GB of RAM.  Deploying 10 applications to those containers that each
have a constraint of mem=2GB will not work as you expect.  We could add
extra bookkeeping for this, and warn you that you appear to be
oversubscribing the host, but that's more work.

2.) What happens if you try to deploy a container without a memory limit on
a host that already has a container on it?

For example:
4GB host
2GB lxd container
try to deploy a new service in a container on this machine.
Do we warn?  We have no clue how much RAM the service will use.  Maybe
it'll be fine, maybe it won't.

3.) Our CLI doesn't really work well with constraints on containers:

juju deploy mysql --constraints mem=2G --to lxd

Does this deploy a machine with 2GB of ram and a container with a 2GB ram
limit on it?  Or does it deploy a machine with 2GB of ram and a container
with no limit on it?  It has to be one or the other, and currently we have
no way of indicating which we want to do, and no way to do the other one
without using multiple commands.

This is a more likely use case, creating a bigger machine that can hold
multiple containers:
juju add-machine --constraints mem=4GB
// adds machine, let's say 5
// create a container on machine 5 with 2GB memory limit
juju deploy mysql --constraints mem=2GB --to lxd:5

At least in this case, the deploy command is clear; there's only one thing
it can possibly mean.  Usually, the placement directive would override
the constraint, but in this case, it does what you would want... though it is
a little weird that --to lxd:5 uses the constraint, but --to 5 ignores it.

Note that you can't just write a simple script to do the above, because the
machine number is variable, so you have to parse our output and then use
that for the next command.  It's still scriptable, obviously, but it's a more
complicated script than just two lines of bash.
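To make that concrete, the parsing step might look like this (a sketch only: the "created machine N" message format is an assumption about juju's output, and juju is not actually invoked here, just a captured sample string):

```python
import re

# Hypothetical captured output of "juju add-machine --constraints mem=4GB".
# The exact message format is an assumption -- check what your juju
# version actually prints.
add_machine_output = "created machine 5"

# Extract the machine number, then build the follow-up deploy command.
machine = re.search(r"machine (\d+)", add_machine_output).group(1)
deploy_cmd = f"juju deploy mysql --constraints mem=2GB --to lxd:{machine}"
print(deploy_cmd)
```

This is the "more complicated script" in miniature: the machine number has to flow from the first command's output into the second command's placement directive.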

Also note that using this second method, you can't deploy more than one
unit at a time, unless you want multiple units on containers on the same
machine (which I think would be pretty odd).



On Fri, Jan 13, 2017 at 3:48 AM Rick Harding <rick.hard...@canonical.com>
wrote:

In the end, you say you want an instance with 2GB of RAM, and if the cloud
has an instance type with that exact amount, it is in fact an exact limit. The key
thing here is that clouds don't have infinitely malleable instance types the way
containers do (this works for kvm and for lxd). So I'm not sure the mismatch
is as far apart as it seems. root-disk means give me a disk this big; if
you ask for 2 cores, then as long as an instance type with 2 cores can be
matched, that's exactly the max you get.

It seems part of this can be addressed by adjusting the language from "minimum"
to something closer to "requested X", where "requested" conveys "I want X"
without the min/max boundaries.



On Fri, Jan 13, 2017 at 3:14 AM John Meinel <j...@arbash-meinel.com> wrote:

So we could make it so that constraints are actually 'exactly' for LXD,
which would then conform to both minimum and maximum, and would still be
actually useful for people deploying to containers. We could certainly
probe the host machine and say "you asked for 48 cores, and the host
machine doesn't have it".

However, note that explicit placement also takes precedence over
constraints anyway. If you do:
  juju deploy mysql --constraints mem=4G
today, and then do:
 juju add-unit --to 2
We don't apply the constraint limitations to that specific unit. Arguably
we should at *least* be warning that the constraints for the overall
application don't appear to be valid for this instance.

I guess I'd rather see constraints still set limits for containers, because
people really want that functionality, and that we warn any time you do a
direct placement and the constraints aren't satisfied. (but warn isn't
failing the attempt)

John
=:->

On Fri, Jan 13, 2017 at 10:09 AM, Stuart Bishop <stuart.bis...@canonical.com
> wrote:

On 13 January 2017 at 02:20, Nate Finch <nate.fi...@canonical.com> wrote:

I'm implementing constraints for lxd containers and provider... and
stumbled on an impedance mismatch that I don't know how to handle.



I'm not really sure how to resolve this problem.  Maybe it's not a
problem.  Maybe constraints just have a different meaning for containers?
You have to specify the machine number you're deploying to for any
deployment past the first anyway, so you're already manually choosing the
machine, at which point, constraints don't really make sense anyway.


I don't think Juju can handle this. Either constraints h

Re: lxd and constraints

2017-01-12 Thread Nate Finch
Merlijn:
I definitely agree that having the same term mean different things on
different platforms is a really bad idea.  I don't think we can change the
concept of constraints as minimums at this point, but maybe a new concept
of limits (to match lxd terminology) could be added.  Limits really only
make sense for containers, so we would either have to error out, or warn
the user and ignore it if you specified a limit when deploying a base
machine.

*Mike:*
The problem with trying to figure out how much "unused" RAM a host has is
that it gets thrown out the window if you ever deploy any unit to the host
machine, or if you deploy a unit in a container without a RAM constraint.
Those units may then use as much RAM as they want.


On Thu, Jan 12, 2017 at 3:22 PM Mike Pontillo <mike.ponti...@canonical.com>
wrote:

On Thu, Jan 12, 2017 at 11:20 AM, Nate Finch <nate.fi...@canonical.com>
wrote:

I'm implementing constraints for lxd containers and provider... and
stumbled on an impedance mismatch that I don't know how to handle.

It seems as though lxd limits (what in juju we would call constraints) are
implemented as maximums, not minimums.  For containers sharing a host, this
makes sense.  If you say "don't use more than 2 gigs of ram" then you know
the max that container will use, and thus how much leftover you can expect
the host to have for other containers.


Is the second part (how much leftover you can expect the host to have for
other containers) captured somewhere? Because it seems to me that the
important question Juju needs to be asking is, "how over-provisioned is the
host I'm about to deploy on?", so that containers can be intelligently
load-balanced across the infrastructure.

Assuming Juju has full control over the hosts it is deploying containers
onto[2], I think one thing to do might be to allow the admin to specify
ratios (maybe separate ones for CPU, RAM, and disk) to indicate how
over-provisioned hosts and containers are allowed to be.

Let's take as an example a host with 16 GB of RAM, where you want to deploy
16 containers with a constraint of "at least 1G of RAM". There could be two
relevant over-provisioning ratios: one to specify how over-provisioned a
container hypervisor can be, and the other to specify how much more RAM
than the constraint specifies the container can be allowed to use. This
idea is perhaps a little naive; I'm not sure where one would specify these
values.

If that sounds confusing, maybe it's easier to look at an example[1]:

Host RAM  Min RAM          Host over-     Container over-  Allowed # of  RAM limit per
(GB)      constraint (GB)  prov. ratio    prov. ratio      containers    container (GB)
16        2                1.00           1.00             8             2
16        1                1.00           1.00             16            1
16        1                2.00           1.00             32            1
16        1                1.00           2.00             16            2
16        1                2.00           2.00             32            2
16        1                2.00           4.00             32            4
16        1                16.00          4.00             256           4
16        1                16.00          1.00             256           1
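The arithmetic behind the table can be sketched as follows (my reading of the proposal, not anything Juju implements: the host ratio scales how many constraint-sized containers fit in the host's RAM, and the container ratio scales each container's limit above its constraint):

```python
def overprovision(host_ram_gb, min_constraint_gb, host_ratio, container_ratio):
    """Return (allowed container count, per-container RAM limit in GB).

    Illustrative only -- this is one plausible reading of the two
    over-provisioning ratios described in the email above.
    """
    # Host ratio: how many constraint-sized containers the host may hold.
    allowed = int(host_ram_gb * host_ratio / min_constraint_gb)
    # Container ratio: how far above its constraint each container may go.
    limit = min_constraint_gb * container_ratio
    return allowed, limit

# Reproduce a few rows of the table above.
print(overprovision(16, 2, 1.00, 1.00))   # (8, 2.0)
print(overprovision(16, 1, 2.00, 4.00))   # (32, 4.0)
print(overprovision(16, 1, 16.00, 1.00))  # (256, 1.0)
```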


Regards,
Mike

[1]:
https://docs.google.com/spreadsheets/d/1j6-98nB5AA_viHK9nF42MPpbZf35wv2N2gDi8DrhepU/view


[2]: (so that the numbers aren't thrown off by other juju deployments, or
non-juju deployments to the same hypervisor)
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


lxd and constraints

2017-01-12 Thread Nate Finch
I'm implementing constraints for lxd containers and provider... and
stumbled on an impedance mismatch that I don't know how to handle.

It seems as though lxd limits (what in juju we would call constraints) are
implemented as maximums, not minimums.  For containers sharing a host, this
makes sense.  If you say "don't use more than 2 gigs of ram" then you know
the max that container will use, and thus how much leftover you can expect
the host to have for other containers.

However, in Juju's case, we expect constraints to be minimums, so that we
know the unit on that machine will have enough RAM to function (e.g. "give
me a machine with at least 2GB of RAM, since I know my application won't
work well with less").

This impedance mismatch is tricky to untangle.  With a naive implementation
of Juju constraints for lxd as a straight up setting of lxd limits, then
you can add a lxd container and specify a memory constraint that is higher
than the host machine's memory, and lxd will happily let you do that
because it knows that container won't exceed that amount of memory (by
definition it cannot).  But it means that juju will then let you deploy a
unit that needs more memory than the container has access to.

Note that this is also the case for setting cores.  On my local lxd
environment I can juju add-machine --constraints cores=48 and the container
will be created just fine.
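One possible response (probe the host and warn, rather than fail, when a requested limit exceeds what the host can provide) could be sketched like this. Purely illustrative: the dict keys and the warn-don't-fail policy are assumptions drawn from this thread, not Juju's actual behavior.

```python
import warnings

def check_container_constraints(requested, host):
    """Warn (rather than fail) when requested container limits exceed
    what the host machine can actually provide.

    `requested` and `host` are plain dicts with hypothetical keys
    ("cores", "mem_mb") -- not Juju's real constraint representation.
    """
    problems = []
    for key in ("cores", "mem_mb"):
        if key in requested and requested[key] > host.get(key, 0):
            problems.append(
                f"{key}: requested {requested[key]}, host has {host[key]}")
    for p in problems:
        # Surface the mismatch without aborting the deployment.
        warnings.warn(p)
    return problems

# e.g. asking for 48 cores on an 8-core host yields a warning, not an error.
issues = check_container_constraints({"cores": 48},
                                     {"cores": 8, "mem_mb": 16384})
```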

I'm not really sure how to resolve this problem.  Maybe it's not a
problem.  Maybe constraints just have a different meaning for containers?
You have to specify the machine number you're deploying to for any
deployment past the first anyway, so you're already manually choosing the
machine, at which point, constraints don't really make sense anyway.

-Nate


Representative tests failing

2016-12-14 Thread Nate Finch
Seems like there's likely an out-of-disk-space problem with the LXD and
Windows machines that run representative tests.  For example:

http://juju-ci.vapour.ws/job/github-check-merge-juju/441/
http://juju-ci.vapour.ws/job/github-check-merge-juju/442/

Sounds like this happens often enough to be a problem.

Is this something we can automate by resetting to a snapshot or something
along those lines?


Re: One instance manual provider

2016-12-05 Thread Nate Finch
Except you can't expose anything deployed to lxd to the network, right?

On Mon, Dec 5, 2016, 5:49 PM Marco Ceppi <marco.ce...@canonical.com> wrote:

> A one machine manual provider? Might as well just `ssh ; juju
> bootstrap lxd`.
>
> On Mon, Dec 5, 2016 at 10:09 AM Nate Finch <nate.fi...@canonical.com>
> wrote:
>
> There should be no reason you can't deploy to the controller machine using
> manual just like any other cloud.
>
> juju bootstrap manual/x.x.x.x  mycloud
> juju switch controller
> juju deploy  --to 0
>
> Switching to the controller model is probably what you were missing, since
> the default model comes with no machines.
>
> On Mon, Dec 5, 2016 at 9:27 AM Rick Harding <rick.hard...@canonical.com>
> wrote:
>
> I'll have to test it out but I would think that you could
>
> 1) bring up a machine, create a container on it, bootstrap to that
> container as the controller, create another container, and then add-machine
> it as a second machine and things should work ok.
>
> 2) I wonder if you can bootstrap to a machine, manually add container on
> that machine, and then add that container with add-machine.
>
> I'm guessing there's some bits about making sure the added containers have
> the ssh key you want to use for the ssh connection for add-machine.
>
> On Mon, Dec 5, 2016 at 3:18 PM Matthew Williams <
> matthew.willi...@canonical.com> wrote:
>
> Hey Folks,
>
> I notice the docs state that at least two instances are needed for the
> manual provider: https://jujucharms.com/docs/stable/clouds-manual. Some
> quick playing around suggests that this is indeed the case.
>
> Is there a technical reason why? I'd love to spin up a charm on [insert
> vps provider here] and only spend money for one instance
>
> Matty


Re: Application names starting with digits

2016-12-02 Thread Nate Finch
I generally assume that "hard to type" doesn't apply when you're talking
about someone's native language.  Yes, you or I would have trouble
typing 數據庫 (database), but to someone in China, that's probably a word they
type all the time.  Forcing people to use an English translation for the
name of their software is not very welcoming in this day and age.

On Fri, Dec 2, 2016 at 10:46 AM Marco Ceppi <marco.ce...@canonical.com>
wrote:

> On Fri, Dec 2, 2016 at 10:31 AM Nate Finch <nate.fi...@canonical.com>
> wrote:
>
> One thing we *could* do to support non-english names that would not
> entirely open the door to emoji etc is to simply constrain the names to
> unicode letters and numbers.  Thus you could name something 數據庫 but
> not .
>
>
> I bothered Rick about this a while ago (half jokingly) since I own  http://
> ☁.ws <http://xn--l3h.ws> (poo cloud!) and was going to make a charm
> accompanying that name. Localized unicode characters - emoji or otherwise -
> are still a difficult UX compared to alphanumerics. It takes me 10 mins to
> find the emojis to type the damn domain in if I'm not on a phone.
>
> The only path for unicode names I could see happening, and it's a stretch,
> is if the application name can be set to a larger range of characters.
> Where you may want to name horizon deployed in your environment to
> something localized (or emoji) but the charm name should be flat and simple.
>
> On Fri, Dec 2, 2016 at 9:29 AM Mark Shuttleworth <m...@ubuntu.com> wrote:
>
> On 02/12/16 09:23, Adam Collard wrote:
>
> True, but we could do normalisation in the charm store to prevent
> malicious names. I think it's an important aspect of software in the modern
> world that it can support the wide array of languages that we humans use.
>
>
> This just transfers the definition of 'OK' to a different codebase.
>
> It's much better to have a simple rule that can be well documented,
> enforced the same way in store and client and snapd, and typed on any
> laptop without having to refer to the internet for assistance.
>
>
> Mark
>


Re: Application names starting with digits

2016-12-02 Thread Nate Finch
One thing we *could* do to support non-english names that would not
entirely open the door to emoji etc is to simply constrain the names to
unicode letters and numbers.  Thus you could name something 數據庫 but
not .
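The letters-and-numbers constraint can be sketched with Unicode general categories (an illustration only: allowing the hyphen separator is my own addition, and this is not Juju's actual validation code):

```python
import unicodedata

def is_allowed_name(name):
    # Allow only Unicode letters and digits, plus "-" as a separator.
    # General-category initials: "L" = letters, "N" = numbers; emoji and
    # other symbols fall under "S" and are rejected.
    return bool(name) and all(
        unicodedata.category(ch)[0] in ("L", "N") or ch == "-"
        for ch in name
    )

print(is_allowed_name("數據庫"))   # True  (CJK letters)
print(is_allowed_name("mysql-2"))  # True
print(is_allowed_name("☁"))        # False (symbol, not a letter or number)
```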

On Fri, Dec 2, 2016 at 9:29 AM Mark Shuttleworth  wrote:

> On 02/12/16 09:23, Adam Collard wrote:
>
> True, but we could do normalisation in the charm store to prevent
> malicious names. I think it's an important aspect of software in the modern
> world that it can support the wide array of languages that we humans use.
>
>
> This just transfers the definition of 'OK' to a different codebase.
>
> It's much better to have a simple rule that can be well documented,
> enforced the same way in store and client and snapd, and typed on any
> laptop without having to refer to the internet for assistance.
>
>
> Mark
>


Re: Application names starting with digits

2016-12-02 Thread Nate Finch
There's no technical reason for the restriction, AFAIK.  I believe it's
just aesthetic.

On Fri, Dec 2, 2016, 5:50 AM James Page  wrote:

> Hi All
>
> Is there a specific rationale for application names being limited to not
> starting with a digit?  I get why they can't end with one but I don't see
> why juju should place the same restriction on the start of an app name?
> Example (where we tripped over this):
>
>juju deploy ubuntu 6wind-virtual-accelerator
>
> (and ubuntu just to test the name out of a proposed charm where we had
> testing problems due to this limitation)
>
> Thoughts?
>
> James


Re: A (Very) Minimal Charm

2016-12-01 Thread Nate Finch
So, the whole point of my original email was to say - if you want a minimal
charm, just make one, it's easy.
I just published my above mentioned minimal charm as cs:~natefinch/nop

It's not showing up on jujucharms.com, maybe because charm proof is failing
because it's missing everything except the metadata?  I don't know, but it
still works with juju deploy.

On Thu, Dec 1, 2016 at 10:07 PM José Antonio Rey <j...@ubuntu.com> wrote:

> Wouldn't it be possible for us to implement a configuration flag, and
> have it as default? Going back to the general point, the idea behind the
> ubuntu charm is to have a vanilla Ubuntu where you can work on
> anything. I understand we're mostly using it for testing, and reactive
> is now a big part of the ecosystem. I find two simple approaches for this:
>
>   * Create a ubuntu-vanilla charm which doesn't install any of the
> required packages OR
>   * Implement a 'vanilla' boolean configuration flag, where you can
> specify True for the vanilla Ubuntu install, False to install all of
> the reactive/other dependencies.
>
> If we get to work around the pyyaml issue and implement a better
> solution in the long term, I think that would be amazing. However, we
> can't let one dependency drag us down, and in the meanwhile, we have to
> implement a workaround.
>
> On 12/01/2016 01:56 PM, Cory Johns wrote:
> > Marco,
> >
> > What is the issue you mentioned with using snaps where you mentioned
> > needing an "unconfined classic snap"?
> >
> > On Thu, Dec 1, 2016 at 1:13 PM, Marco Ceppi <marco.ce...@canonical.com
> > <mailto:marco.ce...@canonical.com>> wrote:
> >
> > On Thu, Dec 1, 2016 at 12:56 PM Casey Marshall
> > <casey.marsh...@canonical.com <mailto:casey.marsh...@canonical.com>>
> > wrote:
> >
> > On Thu, Dec 1, 2016 at 6:53 AM, Marco Ceppi
> > <marco.ce...@canonical.com <mailto:marco.ce...@canonical.com>>
> > wrote:
> >
> >     On Thu, Dec 1, 2016 at 5:00 AM Adam Collard
> > <adam.coll...@canonical.com
> > <mailto:adam.coll...@canonical.com>> wrote:
> >
> > On Thu, 1 Dec 2016 at 04:02 Nate Finch
> > <nate.fi...@canonical.com
> > <mailto:nate.fi...@canonical.com>> wrote:
> >
> > On IRC, someone was lamenting the fact that the
> > Ubuntu charm takes longer to deploy now, because it
> > has been updated to exercise more of Juju's
> > features.  My response was - just make a minimal
> > charm, it's easy.  And then of course, I had to
> > figure out how minimal you can get.  Here it is:
> >
> > It's just a directory with a metadata.yaml in it
> > with these contents:
> >
> > name: min
> > summary: nope
> > description: nope
> > series:
> >   - xenial
> >
> > (obviously you can set the series to whatever you
> want)
> > No other files or directories are needed.
> >
> >
> > This is neat, but doesn't detract from the bloat in the
> > ubuntu charm.
> >
> >
> > I'm happy to work though changes to the Ubuntu charm to
> > decrease "bloat".
> >
> >
> > IMHO the bloat in the ubuntu charm isn't from support
> > for Juju features, but the switch to reactive plus
> > conflicts in layer-base wanting to a) support lots of
> > toolchains to allow layers above it to be slimmer and b)
> > be a suitable base for "just deploy me" ubuntu.
> >
> >
> > But it is to support the reactive framework, where we
> > utilize newer Juju features, like status and
> > application-version to make the charm rich despite its
> > minimal goal set. Honestly, a handful of cached wheelhouses
> > and some apt packages don't strike me as bloat, but I do
> > want to make sure the Ubuntu charm works for those using it.
> So,
> >
> >
> > I think a minimal wheelhouse to provide a consistent charm hook
> > runtime is very reasonable and definitely not the problem here.
> >
> > There are too many packages that ge

A (Very) Minimal Charm

2016-11-30 Thread Nate Finch
On IRC, someone was lamenting the fact that the Ubuntu charm takes longer
to deploy now, because it has been updated to exercise more of Juju's
features.  My response was - just make a minimal charm, it's easy.  And
then of course, I had to figure out how minimal you can get.  Here it is:

It's just a directory with a metadata.yaml in it with these contents:

name: min
summary: nope
description: nope
series:
  - xenial

(obviously you can set the series to whatever you want)
No other files or directories are needed.

Figured this might be useful for others.
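For convenience, the minimal charm above can be generated with a few lines (the temp-directory location is arbitrary; the metadata contents are copied verbatim from the email):

```python
import os
import tempfile

# metadata.yaml contents, exactly as in the email above.
metadata = """name: min
summary: nope
description: nope
series:
  - xenial
"""

# Create the charm directory with its single required file.
charm_dir = os.path.join(tempfile.mkdtemp(), "min")
os.makedirs(charm_dir)
with open(os.path.join(charm_dir, "metadata.yaml"), "w") as f:
    f.write(metadata)

# The charm can now be deployed with: juju deploy ./min
print(charm_dir)
```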

-Nate


Re: "unfairness" of juju/mutex

2016-11-16 Thread Nate Finch
Just for historical reference.  The original implementation of the new OS
mutex used flock until Dave mentioned that it presented problems with file
management (files getting renamed, deleted, etc).

In general, I'm definitely on the side of using flock, though I don't think
that necessarily solves our starvation problem, it depends on how flock is
implemented and the specific behavior of our units.
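For reference, an flock-based lock looks roughly like this (a POSIX-only sketch; the lock file path is hypothetical). The point in flock's favor is that the kernel releases the lock automatically when the holder dies; the file-management hazards Dave raised come from the lock living in the filesystem namespace, where the file can be renamed or deleted out from under the holder.

```python
import fcntl
import os
import tempfile

# Hypothetical lock file path -- not what juju/mutex actually uses.
lock_path = os.path.join(tempfile.mkdtemp(), "demo.lock")
f = open(lock_path, "w")

# Acquire an exclusive advisory lock; a second process making the same
# call would block here (or fail immediately with LOCK_EX | LOCK_NB)
# until the lock is released.
fcntl.flock(f, fcntl.LOCK_EX)
held = True

# Explicit release; the kernel would also drop it on process exit.
fcntl.flock(f, fcntl.LOCK_UN)
f.close()
```

Note that flock itself makes no fairness guarantee about which blocked waiter wins, which is why it doesn't automatically solve the starvation problem discussed above.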


Re: Dependencies added for new jsonschema support

2016-11-01 Thread Nate Finch
Thanks, Katherine, I had intended to mention in the email that this was a
request for review by the tech board :)

On Tue, Nov 1, 2016, 5:14 PM Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

> Nate Finch <nate.fi...@canonical.com> writes:
>
> > In support of my PR that adds an interactive mode for add-cloud, we
> needed to add
> > jsonschema for the cloud configurations that can be inspected at runtime
> to generate the
> > interactive queries.
>
> Thanks for the great write-up, Nate. I've added it to the tech board
> agenda[1] and we should review tonight.
>
> --
> Katherine
>
> [1] -
> https://docs.google.com/document/d/13nmOm6ojX5UUNtwfrkqr1cR6eC5XDPtnhN5H6pFLfxo/edit
>


Dependencies added for new jsonschema support

2016-11-01 Thread Nate Finch
In support of my PR  that adds an
interactive mode for add-cloud, we needed to add jsonschema for the cloud
configurations that can be inspected at runtime to generate the interactive
queries.

Unfortunately, the jsonschema package we're currently using (gojsonschema)
does not expose the structure of the schema, so it's not able to be
inspected by internal code (
https://godoc.org/github.com/xeipuuv/gojsonschema#Schema).

In looking for a new jsonschema package, github.com/lestrrat/go-jsschema
was recommended to me; it is actively maintained, and does
expose the schema.

I have made a wrapper package for go-jsschema at github.com/juju/jsonschema
which adds some extra fields to the schema to support our particular use
case, but delegates most of the heavy lifting to the underlying package.

Currently, importing go-jsschema imports 5 other repos from the same author
in support of jsonschema, and one package by a third party (dave's
github.com/pkg/errors).  I am asking about making a PR to remove one of
those dependencies, but it still leaves us with 6 new dependencies (5 of
which are from the same author).

Here's the additions:
github.com/lestrrat/go-jsschema
github.com/lestrrat/go-jspointer
github.com/lestrrat/go-jsref
github.com/lestrrat/go-jsval
github.com/lestrrat/go-pdebug
github.com/lestrrat/go-structinfo
github.com/pkg/errors

The pdebug one is the one I can easily remove with a PR.  I'd like to
remove the structinfo one, but that might be less likely (would require
copying a few dozen lines of code, and I doubt the guy would be amenable...
but I can ask).

However, this should be usable as a more or less drop-in
replacement for the existing forks of the gojsonschema package which we're
maintaining, which would remove 3 dependencies (albeit ones more or less
maintained by us).  In theory, we could also remove environschema at some
point and replace it with jsonschema, but that's a bigger job.

-Nate


Long tests

2016-10-27 Thread Nate Finch
I ran the tests serially today to try to get a picture of which tests
actually take a long time, without worrying about contention.  The full run
took almost 45 minutes (normally it takes about 15, testing packages in
parallel).

Here's the full output (sorry for the google drive link, pastebin got mad
at the length):
https://drive.google.com/file/d/0B-r4AW1RoHJNR3plVjhwSVM1Z0E/view

Unfortunately, the state tests panicked after a 10 minute timeout, but
there's still some good info in there.

Here's some especially bad low hanging fruit (note these are the times for
single tests, not full suites):

PASS: showoutput_test.go:59: ShowOutputSuite.TestRun 18.007s
PASS: upgradejuju_test.go:307: UpgradeJujuSuite.TestUpgradeJuju 10.722s
PASS: status_test.go:3292: StatusSuite.TestStatusAllFormats 12.255s
PASS: unit_test.go:217: UnitSuite.TestUpgradeFailsWithoutTools 10.200s
PASS: syslog_test.go:220: syslogSuite.TestConfigChange 31.715s
PASS: syslog_test.go:193: syslogSuite.TestLogRecordForwarded 31.731s
PASS: bakerystorage_test.go:72: BakeryStorageSuite.TestExpiryTime 58.037s
PASS: local_test.go:547:
localServerSuite.TestStopInstanceSecurityGroupNotDeleted 29.046s
PASS: kvm-broker_test.go:329:
kvmProvisionerSuite.TestContainerStartedAndStopped 10.098s
PASS: worker_test.go:69:
statusHistoryPrunerSuite.TestWorkerWontCallPruneBeforeFiringTimer 10.000s
PASS: uniter_test.go:1190: UniterSuite.TestActionEvents 39.242s
PASS: uniter_test.go:977: UniterSuite.TestUniterRelations 14.132s

There's about 112 tests that are faster than these, but still take more
than a second to run, shown here: http://pastebin.ubuntu.com/23391234/

-Nate


Re: Big memory usage improvements

2016-10-17 Thread Nate Finch
FYI, this tool looks promising.  Probably worth checking out to see if
it'll help us track down those memory leaks.
https://stackimpact.com/blog/memory-leak-detection-in-production-go-applications/

On Thu, Oct 13, 2016 at 8:02 AM Christian Muirhead <
christian.muirh...@canonical.com> wrote:

> Oops, meant to reply-all.
>
> -- Forwarded message -
> From: Christian Muirhead 
> Date: Thu, 13 Oct 2016, 09:26
> Subject: Re: Big memory usage improvements
> To: Katherine Cox-Buday 
>
>
>
>
> On Wed, 12 Oct 2016, 22:36 Katherine Cox-Buday, <
> katherine.cox-bu...@canonical.com> wrote:
>
> Awesome, good work, Christian!
>
>
> :)
> Thanks for your help with the StatePool!
>
>
> Not to detract from this fantastic news, but it's still worrisome that an
> idle Juju seems to have a memory footprint which is growing linearly (the
> before picture looked like the beginning of an exponential curve?). Do we
> feel this is memory which will at some point be GCed?
>
>
> No, I think there's still a leak there. The other ones I fixed were
> goroutine leaks which were a bit easier to track down. There aren't any
> goroutines escaping now so I'll need to switch back to looking at heap
> dumps to find more.
>
> Cheers,
> Christian
>


Re: List plugins installed?

2016-09-29 Thread Nate Finch
Seems like the easiest thing to do is have a designated plugin directory
and have juju install  copy the binary/script there.  Then
we're only running plugins the user has specifically asked to install.
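The designated-directory scheme could be sketched like this (the directory location and the install-plugin/list-plugins command names are hypothetical, chosen just to illustrate the idea):

```python
import os
import shutil
import stat
import tempfile

# Stand-in for a real per-user plugin directory (e.g. under the juju
# config dir) -- the location here is hypothetical.
PLUGIN_DIR = os.path.join(tempfile.mkdtemp(), "plugins")
os.makedirs(PLUGIN_DIR)

def install_plugin(path):
    """'juju install-plugin': copy an executable the user explicitly chose."""
    dest = os.path.join(PLUGIN_DIR, os.path.basename(path))
    shutil.copy(path, dest)
    os.chmod(dest, os.stat(dest).st_mode | stat.S_IXUSR)
    return dest

def list_plugins():
    """'juju list-plugins': only what was deliberately installed,
    rather than anything on $PATH named juju-*."""
    return sorted(os.listdir(PLUGIN_DIR))

# Simulate installing a plugin script.
src = os.path.join(tempfile.mkdtemp(), "juju-frobnicate")
with open(src, "w") as f:
    f.write("#!/bin/sh\necho frobnicated\n")

print(list_plugins())  # [] before install
install_plugin(src)
print(list_plugins())  # ['juju-frobnicate']
```

The key property is the one Nate describes: only binaries the user explicitly installed are ever considered, instead of executing anything on $PATH with a matching prefix.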

On Thu, Sep 29, 2016, 4:33 AM Stuart Bishop 
wrote:

> On 28 September 2016 at 22:45, roger peppe 
> wrote:
>
>> On 28 September 2016 at 14:55, Rick Harding 
>> wrote:
>> > This is just a miss. The original ability to see the plugins was a
>> subset of
>> > the help command and didn't make our CLI spreadsheet for things to
>> rework. I
>> > agree that list-plugins is the right idea here and that means that
>> plugins
>> > becomes a noun in our language.
>> >
>> > What's interesting is that add/remove fall out because that
>> > installing/uninstalling. I think that show-plugin might be interesting
>> to
>> > auto run the --description flag to bring it into CLI alignment with the
>> new
>> > world order.
>>
>> I've voiced discomfort with this before - I don't think that we should
>> arbitrarily run all executables that happen to have a "juju-" prefix.
>> It's potentially dangerous (for example, note that although git relies
>> heavily
>> on plugins, it doesn't execute a plugin until you explicitly name it).
>>
>> Perhaps there could be a standard way for a plugin to provide
>> metadata about itself as a data file.
>>
>>
> It also might be time to work out how a Juju snap is going to call or
> install plugins. I don't think the existing design is going to work, and
> there is still time to flag it as deprecated in the changelogs for 2.0 and
> work out the way forward for 2.1.
>
>
> --
> Stuart Bishop 


Re: Reviews on Github

2016-09-20 Thread Nate Finch
Personally, I really enjoy having everything in the same place, more than I
expected. It's also one less barrier to entry for newcomers and outsiders.

On Tue, Sep 20, 2016, 5:54 PM Menno Smits <menno.sm...@canonical.com> wrote:

> (gah, hit send too early)
>
> ... If we decide to stay with RB, that will need to be fixed.
>
> On 21 September 2016 at 09:53, Menno Smits <menno.sm...@canonical.com>
> wrote:
>
>> Some of us probably got a little excited (me included). There should be
>> discussion and a clear announcement before we make a significant change to
>> our process. The tech board meeting is today/tonight so we'll discuss it
>> there as per Rick's email. Please contribute to this thread if you haven't
>> already and have strong opinions either way on the topic.
>>
>> Interestingly our Github/RB integration seems to have broken a little
>> since Github made these changes. The links to Reviewboard on pull requests
>> aren't getting inserted any more. If we decide to stay with RB
>>
>> On 21 September 2016 at 05:54, Rick Harding <rick.hard...@canonical.com>
>> wrote:
>>
>>> I spoke with Alexis today about this and it's on her list to check with
>>> her folks on this. The tech board has been tasked with the decision, so
>>> please feel free to shoot a copy of your opinions their way. As you say, on
>>> the one hand it's a big impact on the team, but it's also a standard
>>> developer practice that not everyone will agree with so I'm sure the tech
>>> board is a good solution to limiting the amount of bike-shedding and to
>>> have some multi-mind consensus.
>>>
>>> On Tue, Sep 20, 2016 at 1:52 PM Katherine Cox-Buday <
>>> katherine.cox-bu...@canonical.com> wrote:
>>>
>>>> Seems like a good thing to do would be to ensure the tech board doesn't
>>>> have any objections and then put it to a vote since it's more a property of
>>>> the team and not the codebase.
>>>>
>>>> I just want some consistency until a decision is made. E.g. "we will be
>>>> trying out GitHub reviews for the next two weeks; all reviews should be
>>>> done on there".
>>>>
>>>> --
>>>> Katherine
>>>>
>>>> Nate Finch <nate.fi...@canonical.com> writes:
>>>>
>>>> > Can we try reviews on github for a couple weeks? Seems like we'll
>>>> > never know if it's sufficient if we don't try it. And there's no setup
>>>> > cost, which is nice.
>>>> >
>>>> > On Tue, Sep 20, 2016 at 12:44 PM Katherine Cox-Buday
>>>> > <katherine.cox-bu...@canonical.com> wrote:
>>>> >
>>>> > I see quite a few PRs that are being reviewed in GitHub and not
>>>> > ReviewBoard. I really don't care where we do them, but can we
>>>> > please pick a direction and move forward? And until then, can we
>>>> > stick to our previous decision and use RB? With people using both
>>>> > it's much more difficult to tell what's been reviewed and what
>>>> > hasn't.
>>>> >
>>>> > --
>>>> > Katherine
>>>> >
>>>> > Nate Finch <nate.fi...@canonical.com> writes:
>>>> >
>>>> > > In case you missed it, Github rolled out a new review process.
>>>> > It
>>>> > > basically works just like reviewboard does, where you start a
>>>> > review,
>>>> > > batch up comments, then post the review as a whole, so you don't
>>>> > just
>>>> > > write a bunch of disconnected comments (and get one email per
>>>> > review,
>>>> > > not per comment). The only features reviewboard has is the edge
>>>> > case
>>>> > > stuff that we rarely use: like using rbt to post a review from a
>>>> > > random diff that is not connected directly to a github PR. I
>>>> > think
>>>> > > that is easy enough to give up in order to get the benefit of
>>>> > not
>>>> > > needing an entirely separate system to handle reviews.
>>>> > >
>>>> > > I made a little test review on one PR here, and the UX was
>>>> > almost
>>>> > > exactly like working in reviewboard:
>>>> > > https://github.com/juju/juju/pull/6234
>>>> > >
>>>> > > There may be important edge cases I'm missing, but I think it's
>>>> > worth
>>>> > > looking into.
>>>> > >
>>>> > > -Nate
>>>>
>>>>
>>>
>>>
>>>
>>
>


Re: Reviews on Github

2016-09-20 Thread Nate Finch
Can we try reviews on github for a couple weeks?  Seems like we'll never
know if it's sufficient if we don't try it.  And there's no setup cost,
which is nice.

On Tue, Sep 20, 2016 at 12:44 PM Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

> I see quite a few PRs that are being reviewed in GitHub and not
> ReviewBoard. I really don't care where we do them, but can we please pick a
> direction and move forward? And until then, can we stick to our previous
> decision and use RB? With people using both it's much more difficult to
> tell what's been reviewed and what hasn't.
>
> --
> Katherine
>
> Nate Finch <nate.fi...@canonical.com> writes:
>
> > In case you missed it, Github rolled out a new review process. It
> > basically works just like reviewboard does, where you start a review,
> > batch up comments, then post the review as a whole, so you don't just
> > write a bunch of disconnected comments (and get one email per review,
> > not per comment). The only features reviewboard has is the edge case
> > stuff that we rarely use: like using rbt to post a review from a
> > random diff that is not connected directly to a github PR. I think
> > that is easy enough to give up in order to get the benefit of not
> > needing an entirely separate system to handle reviews.
> >
> > I made a little test review on one PR here, and the UX was almost
> > exactly like working in reviewboard:
> > https://github.com/juju/juju/pull/6234
> >
> > There may be important edge cases I'm missing, but I think it's worth
> > looking into.
> >
> > -Nate
>


Re: Reviews on Github

2016-09-15 Thread Nate Finch
Reviewboard goes down a couple times a month, usually from lack of disk
space or some other BS.  According to a source knowledgeable with these
matters, the charm was rushed out, and the agent for that machine is down
anyway, so we're kinda just waiting for the other shoe to drop.

As for the process things that Ian mentioned, most of those can be
addressed with a sprinkling of convention.  Marking things as issues could
just be adding :x: to the first line (github even pops up suggestions and
auto-completes), thusly:

[image: :x:] This will cause a race condition

And if you want to indicate you're dropping a suggestion, you can use :-1:
 which gives you a thumbs down:

[image: :-1:] I ran the race detector and it's fine.

It won't give you the cumulative "what's left to fix" at the top of the
page, like reviewboard... but for me, I never directly read that, anyway,
just used it to see if there were zero or non-zero comments left.

As for the inline comments in the code - there's a checkbox to hide them
all.  It's not quite as convenient as the gutter indicators per-comment,
but it's sufficient, I think.

On Wed, Sep 14, 2016 at 6:43 PM Ian Booth <ian.bo...@canonical.com> wrote:

>
>
> On 15/09/16 08:22, Rick Harding wrote:
> > I think that the issue is that someone has to maintain the RB and the
> > cost/time spent on that does not seem commensurate with the bonus
> features
> > in my experience.
> >
>
> The maintenance is not that great. We have SSO using github credentials so
> there's really no day-to-day work IIANM. As a team, we do many, many
> reviews
> per day, and the features I outlined are significant and things I (and I
> assume
> others) rely on. Don't underestimate the value in knowing why a comment
> was
> rejected and being able to properly track that. Or having review comments
> collapsed as a gutter indicator so you can browse the code and still know
> that
> there's a comment there. With github, you can hide the comments but
> there's no
> gutter indicator. All these things add up to a lot.
>
>
> > On Wed, Sep 14, 2016 at 6:13 PM Ian Booth <ian.bo...@canonical.com>
> wrote:
> >
> >> One thing review board does better is use gutter indicators so as not to
> >> interrupt the flow of reading the code with huge comment blocks. It also
> >> seems
> >> much better at allowing previous commits with comments to be viewed in
> >> their
> >> entirety. And it allows the reviewer to differentiate between issues and
> >> comments (ie fix this vs take note of this), plus it allows the notion
> of
> >> marking stuff as fixed vs dropped, with a reason for dropping if needed.
> >> So the
> >> github improvements are nice but there's still a large and significant
> gap
> >> that
> >> is yet to be filled. I for one would miss all the features reviewboard
> >> offers.
> >> Unless there's a way of doing the same thing in github that I'm not
> aware
> >> of.
> >>
> >> On 15/09/16 07:22, Tim Penhey wrote:
> >>> I'm +1 if we can remove the extra tools and we don't get email per
> >> comment.
> >>>
> >>> On 15/09/16 08:03, Nate Finch wrote:
> >>>> In case you missed it, Github rolled out a new review process.  It
> >>>> basically works just like reviewboard does, where you start a review,
> >>>> batch up comments, then post the review as a whole, so you don't just
> >>>> write a bunch of disconnected comments (and get one email per review,
> >>>> not per comment).  The only features reviewboard has is the edge case
> >>>> stuff that we rarely use:  like using rbt to post a review from a
> random
> >>>> diff that is not connected directly to a github PR. I think that is
> easy
> >>>> enough to give up in order to get the benefit of not needing an
> entirely
> >>>> separate system to handle reviews.
> >>>>
> >>>> I made a little test review on one PR here, and the UX was almost
> >>>> exactly like working in reviewboard:
> >> https://github.com/juju/juju/pull/6234
> >>>>
> >>>> There may be important edge cases I'm missing, but I think it's worth
> >>>> looking into.
> >>>>
> >>>> -Nate
> >>>>
> >>>>
> >>>
> >>
> >>
> >
>
>


Reviews on Github

2016-09-14 Thread Nate Finch
In case you missed it, Github rolled out a new review process.  It
basically works just like reviewboard does, where you start a review, batch
up comments, then post the review as a whole, so you don't just write a
bunch of disconnected comments (and get one email per review, not per
comment).  The only features reviewboard has is the edge case stuff that we
rarely use:  like using rbt to post a review from a random diff that is not
connected directly to a github PR. I think that is easy enough to give up
in order to get the benefit of not needing an entirely separate system to
handle reviews.

I made a little test review on one PR here, and the UX was almost exactly
like working in reviewboard: https://github.com/juju/juju/pull/6234

There may be important edge cases I'm missing, but I think it's worth
looking into.

-Nate


Re: Latest new about Juju master branch - upload-tools obsoleted

2016-09-07 Thread Nate Finch
Just a note, because it wasn't clear to me - there are a couple cases where
the automatic upload tools won't do what you want, if you use a version of
juju you built locally.

If you're not a developer or someone who builds juju from source, it's safe
to ignore this email.

*1. If the version of juju you're building is available in streams and you
*want* to upload, you have to use --build-agent, because by default we
won't upload.  *
This happens if you're purposely building an old release, or if QA just
released a new build and you haven't pulled yet and/or master hasn't been
updated with a new version number.  --build-agent works like the old
upload-tools, except it always rebuilds jujud, even if there's an existing
jujud binary.

*2. If you're building a version of the code that is not available in
streams, and you *don't* want to upload, you must use
--agent-version=.*
This can happen if you want to deploy a known-good server version, but only
have a dev client around.  I use this to make sure I can reproduce bugs
before testing my fixes.  --agent-version works basically like the old
default (non-upload) behavior, except you have to explicitly specify a juju
version that exists in streams to deploy (e.g. --agent-version=2.0-beta17)


Note that if you want to be *sure* that juju bootstrap always does what you
expect it to, IMO you should always use either --build-agent (to upload) or
--agent-version (to not upload), depending on your intent.  The behavior of
the bare juju bootstrap can change without warning (from uploading to not
uploading) if a new version of jujud is pushed to streams that matches what
you're building locally (which happens every time a new build is pushed to
streams, until master is updated and you git pull and rebuild), and that
can be really confusing if you are expecting your changes to be uploaded,
and they're not.  It also changes behavior if you switch from master (which
always uploads) to a release tag (which never uploads), which while
predictable, can be easy to forget.

Related note, beta18 (which is in current master) and later versions of the
client can't bootstrap with --agent-version 2.0-beta17 or earlier, due to a
breaking change (you'll get an error about mismatching UUIDs).  This type
of breakage is rare, and generally should only happen during betas (or
pre-beta), but it impacts us right now, so... yeah.

-Nate


Re: 1.25.6-xenial-amd64 and lxc containers

2016-08-30 Thread Nate Finch
You don't need anything in the environment.yaml.  You specify that you want
to deploy something to a lxc container in the deploy command.  e.g.

juju deploy mysql --to lxc   // deploys mysql to a new lxc container on a
new machine
juju deploy wordpress --to lxc:25  // deploys wordpress to a new lxc
container on the already-existing machine #25
juju deploy haproxy --to 1/lxc/3  // deploys haproxy to lxc container #3 on
machine #1

you can also use add-machine to create empty containers:

juju add-machine lxc // adds a new machine with an empty lxc container on it
juju add-machine lxc:5 // adds a new empty lxc container on machine 5

Hope that helps.
-Nate

On Tue, Aug 30, 2016 at 4:16 PM Daniel Bidwell  wrote:

> What do I have to put in my .juju/environment.yaml to get juju-1.25.6-
> xenial-amd64 to use lxc containers?
> --
> Daniel Bidwell 
>
>
>


Re: Latest new about Juju master branch - upload-tools obsoleted

2016-08-15 Thread Nate Finch
Ian, can you describe how Juju decides if it's running for a developer or
an end user?  I'm worried this could trip people up who are both end users
and happen to have a juju development environment.

On Mon, Aug 15, 2016 at 11:29 AM Mark Shuttleworth  wrote:

> On 15/08/16 08:27, Ian Booth wrote:
> > So if you pull master you'll no longer need to use upload-tools.
> > Juju will Do the Right Thing*, when you type:
>
> Thanks Ian, this is a lovely simplification.
>
> Mark
>
>
>


Re: So I wanted to update a dependency . .

2016-08-12 Thread Nate Finch
The main problem with putting code into a different directory is that if
there are files in that repo that reference the package itself, they'd need
to be updated.

So like for example, if flag_test.go imports "launchpad.net/gnuflag" to do
external tests, we'd need to edit that code to import "gnuflag" instead

Also, if there are any third party packages that use one of the same
dependencies we do, we'd need to edit those files, too...

On Fri, Aug 12, 2016 at 2:31 PM Nicholas Skaggs <
nicholas.ska...@canonical.com> wrote:

> On Fri, Aug 12, 2016 at 9:11 AM, Nate Finch <nate.fi...@canonical.com>
> wrote:
>
>> The problem really stems from two go-isms -
>>
>> 1.) The standard for Go projects is to have the path on disk
>> representative of the URL to the project.
>> 2.) Every file that uses a package has an import statement that
>> references the path on disk.
>>
>> If either one of these were not true, we could have avoided your problem.
>>
>> There's nothing we can do about #2.  Building with the go tool requires
>> that if you use a package in a file, you must have an import statement for
>> it, and if you import "foo/bar/gnuflag", that code must live in
>> $GOPATH/src/foo/bar/gnuflag".
>>
>> For #1, *in theory* we could change things.  Only "go get" knows that
>> $GOPATH/src/launchpad.net/gnuflag corresponds to
>> http://launchpad.net/gnuflag.  When you compile and build, all the go
>> tool cares about is the path on disk and what code is in that folder.  We
>> *could* detach the idea of an import path from the URL it corresponds
>> to.  We could git clone github.com/juju/gnuflag into
>> $GOPATH/src/gnuflag, and the go tool would build it just fine.  the import
>> statement would then be import "gnuflag" and this would work just fine.
>> We'd need to maintain a mapping file of repo to folder on disk, similar to
>> what we do now with dependencies.tsv, but it wouldn't be the end of the
>> world.
>>
>> I'm not entirely convinced this is a good idea, but it would make
>> updating dependencies a lot easier.
>>
> This is interesting, and more or less what I was after in my mail. I
> wasn't sure if there were any out of the box ideas that would make life
> easier, but I hoped to surface some for discussion. Thanks!
>
> I suppose my follow-up question is, why or why not is this a good idea?
> Having to change all of the imports is painful, but as you say, it is an
> intentional design decision by the Go Community. I don't have any real
> feeling or inkling on what other Go projects do, but I suspect at least
> some of them follow this idea of importing all dependencies and updating
> them only if needed. However, by nature of them simply cloning the source
> into the repo, I would also guess they are not able to easily update a
> dependency.
>
>>
>> The source code hack to mgo is due to this problem as well. In theory we
>> can just fork mgo and make our fix, but that would again require us to
>> change a lot of import statements.
>>
>> Part of the problem was that you were changing two very common
>> dependencies that were imported 4 levels deep (i.e. a imports b imports c
>> imports d, and they all import the dependency).  This wouldn't change no
>> matter what we do - you'd still need to start the changes at d and roll
>> them up the stack.  That would be true in any language.
>>
> Yes, I told of my pain, but as painful as it was, I got lots of help from
> many of you. Thank you again for helping land this.
>
>>
>> The "circular" dependency (in quotes because you can't have a true
>> circular dependency in go, but you can have two repos that both depend on
>> each other) shouldn't have landed that way.  Two repos should definitely
>> not depend on each other.  No other repo should ever depend on
>> github.com/juju/juju.  We explicitly make no claims for backwards
>> compatibility, so the thing you're depending on could just disappear one
>> day.  If you need something in there, factor it out into its own repo with
>> a backwards compatibility guarantee first.
>>
>> It *is* a problem that dependencies under juju can be imported in broken
>> states because of our use of dependencies.tsv.  There's nothing that says
>> we couldn't run all the unit tests for all packages juju uses in the
>> landing job for github.com/juju/juju (and any other repo we desire)...
>> just run go test ./... from $GOPATH/src and it'll test everything we have
>> downloaded to build juju with.  That's an easy win, and something we should
>> start doing ASAP in my opinion.

Re: So I wanted to update a dependency . .

2016-08-12 Thread Nate Finch
This is why we want to make a juju library, so we have a package that we
know we need to keep backwards compatibility with.  Sure, you can vendor or
pin the revision, but that doesn't help you when a new feature or fix lands
and you update, and everything breaks.  If we have a library we try to keep
backwards compatible, then these problems would be minimized.

On Fri, Aug 12, 2016 at 1:56 PM roger peppe <roger.pe...@canonical.com>
wrote:

> On 12 August 2016 at 14:11, Nate Finch <nate.fi...@canonical.com> wrote:
> > No other repo should ever depend on github.com/juju/juju.
>
> I have to call this out. There is no decent way to use Juju from Go without
> depending on github.com/juju/juju.
>
> And backward compatibility isn't insurmountable - dependencies can be
> locked or
> vendored. Juju itself uses many non-backwardly compatible
> dependencies, after all.
>
> I think we should be aiming to provide a nice Go API to Juju, whether that
> ends
> up in github.com/juju/juju or not.
>
>   cheers,
> rog.
>


Re: Let's talk retries

2016-08-09 Thread Nate Finch
So, in my opinion, juju/retry is pretty good.  I think with some tweaking
and some intelligent use, it can feel every bit as lightweight as we'd like.

If we have common use cases, we just need to write up an API to encapsulate
them.  If we want a "retry N times or until this channel is closed" then
write an api for that.  Having preconstructed call args with prepopulated
best practices for delays, backoffs, etc. can make it a lot easier.

There's no reason running a complex retry can't look like this:

// closure to encapsulate local variables
f := func() error { return foo.bar(a, b, c) }
err := retry.WithCancel(f, juju.StdRetry, doneCh)

Where StdRetry is the juju standard retry scheme (standard timeout, backoff
strategy, etc), and doneCh is the channel you expect to get closed to
cancel this action.

That's doable with the current API if we add cancellation (though it would
be nice if we extracted the function-to-run from the args struct, since it
makes it more obvious that the retry args can be reused for different
functions).

I think goroutines should be controlled by the caller.  If the caller wants
to put retry.Foo() in a goroutine, it can... but I don't think it's useful
to have that be retry's responsibility, it complicates the package too much.

On Tue, Aug 9, 2016 at 11:26 AM Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

> Andrew's queue is certainly nice, but I agree with the points Roger is
> raising:
>
> 1. Go's idiom for coordinating concurrent operations are goroutines and
> channels.
> 2. Sticking to these idioms makes it much easier to compose parts into a
> whole (if only because the language strongly bends code that way).
>
> As a matter of opinion, I also dislike having to instantiate structs to
> encapsulate logic; I would rather write the logic inline, or in a closure
> -- possibly in a goroutine.
>
> William Reade  writes:
>
> > On Tue, Aug 9, 2016 at 10:17 AM, roger peppe
> >  wrote:
> >
> > BTW I'm not saying that a timer queue is never the correct answer.
> > In some
> > circumstances, it can be the exactly the right thing to use.
> >
> >
> > Yeah -- the context here is that katco has been looking at the
> > provisioner, and I think a timer queue is sensible there. Firing off
> > goroutines for every machine is not necessarily bad by any means, but
> > it *is* somewhat harder to get right in every circumstance (e.g. what
> > happens when a machine is removed while that other goroutine is
> > working on provisioning it?).
> >
> > There are certainly many places where a single retryable op *is* the
> > only important thing and there's no call for a control loop. I'm not
> > suggesting we should always use a queue; but I'd tend to encourage it
> > whenever we have a component dealing with many similar retryable
> > tasks. (I definitely should be using it in dependency.Engine, for
> > example.)
> >
> > Cheers
> > William
> >
>
> --
> Katherine
>
>


No help Topics in 2.0

2016-07-26 Thread Nate Finch
Juju 1.x had a ton of help in the CLI, right at your fingertips:

$ juju1 help topics
azure-provider  How to configure a Windows Azure provider
basics  Basic commands
commandsBasic help for all commands
constraints How to use commands with constraints
ec2-providerHow to configure an Amazon EC2 provider
global-options  Options common to all commands
glossaryGlossary of terms
hpcloud-providerHow to configure an HP Cloud provider
jujuWhat is Juju?
juju-systemsAbout Juju Environment Systems (JES)
local-provider  How to configure a local (LXC) provider
logging How Juju handles logging
maas-provider   How to configure a MAAS provider
openstack-provider  How to configure an OpenStack provider
placement   How to use placement directives
plugins Show Juju plugins
spaces  How to configure more complex networks using spaces
topics  Topic list
users   About users in Juju

Almost all of this has been removed in 2.0:

$ juju help topics
basics  Basic Help Summary
commandsBasic help for all commands
global-options  Options common to all commands
topics  Topic list

There are a lot of parts of the CLI that *need* that extra level of
documentation.  I'm all for dropping a link to the website at the bottom of
some help docs for further info, but dropping the help entirely seems like
a mistake.  Some things like constraints and placement are used in multiple
commands, but there's no way to know from the CLI how to enter valid
values for them.  Things like the providers are not explained anywhere in
the CLI now, as far as I can tell.

The plaintext docs are the easiest thing for us to update and maintain...
they live right in our own repo and let us update them whenever the code
updates, and they're the easiest thing for users to get to... let's not
drop that on the floor.

-Nate


Re: Cleansing Mongo data

2016-06-24 Thread Nate Finch
It seems as though we should be cleansing all the keys since we never
know what queries we might want to make in the future.

On Fri, Jun 24, 2016 at 12:04 PM Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

>
> As I have only just discovered the need to cleanse mongo data, I can't say
> for sure, but it looks like we may have been cleansing things in the parts
> of Juju that need it. William may know more.
>
> If not, I imagine a small upgrade step would make short work of any
> problems.
>
> roger peppe  writes:
>
> > This is useful, thanks.
> >
> > Note that's it's not necessary to cleanse *all* keys that go into Mongo,
> > just the ones that might be used in queries.
> >
> > But one thought... what about keys that already contain full-width
> > dollar and dot?
> >
> >   cheers,
> > rog.
> >
> > On 23 June 2016 at 21:09, Katherine Cox-Buday
> >  wrote:
> >> Hey all,
> >>
> >> William gave me a good review and it came up that I wasn't cleansing
> >> some of
> >> the data being placed in Mongo. I wasn't aware this had to be done,
> >> and
> >> after talking to a few other folks it became apparent that maybe not
> >> many
> >> people know we should be doing this.
> >>
> >> At any rate, William also pointed me to some existing code which did
> >> this.
> >> I've pulled it out into the mongo/utils package for general
> >> consumption. The
> >> comments do a pretty good job of elucidating why this is necessary.
> >>
> >> https://github.com/juju/juju/blob/master/mongo/utils/data_cleansing.go
> >>
> >> -
> >> Katherine
> >>
> >>
>
> --
> Katherine
>
>


Re: fslock is dead, long live mutex

2016-06-21 Thread Nate Finch
Thanks for this, Tim!  Really happy to see that old fslock code go away,
and using much more reliable OS-level functionality.
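As I understand the Linux implementation Tim describes below, the exclusivity comes from the kernel refusing a second bind on the same abstract unix socket name, and the kernel drops the binding when the holder dies. A stripped-down, Linux-only sketch (names hypothetical, not the juju/mutex API):

```go
package main

import (
	"fmt"
	"net"
)

// acquire tries to take a machine-wide lock by binding an abstract
// domain socket; the leading "@" keeps the name out of the filesystem.
// Holding the returned listener holds the lock; the kernel releases it
// automatically if the process exits.
func acquire(name string) (net.Listener, error) {
	return net.Listen("unix", "@demo-mutex-"+name)
}

func main() {
	l, err := acquire("uniter-hook")
	fmt.Println(err == nil) // true: we hold the lock
	_, err = acquire("uniter-hook")
	fmt.Println(err != nil) // true: a second holder is refused
	l.Close()               // releasing lets the next acquirer in
}
```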

On Tue, Jun 21, 2016 at 11:53 AM Alexis Bruemmer <
alexis.bruem...@canonical.com> wrote:

> Tim this is really awesome and will address a few long standing and
> painful field bugs.  Thank you for taking the extra effort to do this work!
>
> Also thank you to Nate Finch and David Cheney for their contributions on
> this effort.
>
> On Tue, Jun 21, 2016 at 2:24 AM, Tim Penhey <tim.pen...@canonical.com>
> wrote:
>
>> Hi folks,
>>
>> We have finally managed to exorcise the fslock from the codebase. Both
>> 1.25 and master no longer refer to it at all. We need to remove it from the
>> juju/utils package to make sure that people don't accidentally try and use
>> it again.
>>
>> There is a new replacement, the juju/mutex package. To acquire a mutex,
>> you create a mutex.Spec structure. You must at least provide a name, delay
>> and clock. The delay is how long the code waits between tries to acquire
>> the mutex. Named mutexes are shared between processes on the same machine.
>> If a process holds the mutex and the process dies, the mutex is
>> automatically released.  A spec can also have a timeout value, and/or an
>> abort channel.
>>
>> On linux this is implemented with abstract domain sockets, on windows it
>> uses a named semaphore, and on other platforms a flock is used on a named
>> temp file.
>>
>> https://godoc.org/github.com/juju/mutex
>>
>> Cheers,
>> Tim
>>
>>
>
>
>
> --
> Alexis Bruemmer
> Juju Core Manager, Canonical Ltd.
> (503) 686-5018
> alexis.bruem...@canonical.com
>


Tests do not pass with go1.7

2016-06-08 Thread Nate Finch
Just FYI, in case anyone was like me and decided they wanted to jump on the
faster compile times in 1.7... some of our tests do not pass in go 1.7:

https://bugs.launchpad.net/juju-core/+bug/1589350


version parsing

2016-06-03 Thread Nate Finch
a bug was discovered in our version parsing:
https://bugs.launchpad.net/juju-core/+bug/1588911

basically, 2.0.alpha10 was being parsed as the tag being alpha1 and the
patch being 0, instead of alpha and 10.  While correcting this problem, I
noticed that we were matching [a-zA-Z0-9_]+ for the tag... which I'm pretty
sure was a mistake.  After talking to Curtis and Martin, I changed it so
the tag is only [a-z]+.  I'm pretty sure this will not affect anyone else,
but posting it to the list just in case.

-Nate


Re: list, show and get (was Re: Fixing "juju help commands")

2016-05-24 Thread Nate Finch
I can understand list-clouds vs show-cloud, for plural vs singular (even
though I don't think we actually have commands like that, with a plural and
a singular).  But show-config vs get-config seems really pedantic. You're
still showing one thing. One is a foo, one is a foo-config. They have very
easily differentiated names, unlike plural vs singular, which actually
might be confusing if they only differed by an s.

On Tue, May 24, 2016, 6:45 PM Tim Penhey <tim.pen...@canonical.com> wrote:

> On 25/05/16 01:49, Nate Finch wrote:
> > While we're on the subject... we have list-foos, show-foos, and
> > get-foo... and I honestly cannot remember when to use which. Can we just
> > standardize on one, so no one has to remember if it's list-models or
> > show-models or show-model-config or get-model-config?  They all mean the
> > same thing, let's just pick one.
>
> 'list-foos' which is to be an alias for 'foos' returns a list of foos.
> 'show-foo' shows information about a single 'foo'
> 'get-foo' gets configuration about a particular 'foo'
>
> This is consistent across all our foos now.
>
> Tim
>


Re: Fixing "juju help commands"

2016-05-24 Thread Nate Finch
If there are things we really don't want users to run, they should probably
just not exist in production builds. If they're just less-used commands,
then I don't think it's a big deal to have them in juju help commands.

If we want to remove them from production builds, they could either be
compiled out by default, or just made into plugins.  But I don't think
hiding is really a good idea.

I am +1 on removing aliases from the default output of juju help commands,
though.  It's really long and spammy right now, and half of the length is
just due to aliases.

I do like the idea of every juju list-foos having an alias of juju foos...
but then why even have list-foos at all?

While we're on the subject... we have list-foos, show-foos, and get-foo...
and I honestly cannot remember when to use which. Can we just standardize
on one, so no one has to remember if it's list-models or show-models or
show-model-config or get-model-config?  They all mean the same thing, let's
just pick one.
On Tue, May 24, 2016, 8:12 AM Marco Ceppi  wrote:

> On Tue, May 24, 2016, 12:19 AM Tim Penhey 
> wrote:
>
>> Hi folks,
>>
>> More from the "let's fix the output" school of thought.
>>
>> One thing that has bugged me about 'juju help commands' was the repeated
>> mentions of "alias for ".
>>
>> I propose that we don't show aliases by default, and allow the user to
>> ask for them.
>>
>> Also, the supercommand describe commands method was written before I
>> knew about the tabular output, so some code could be cleaned up there.
>>
>> Proposal:
>>
>> juju help commands
>>- shows the commands that are neither aliases, nor hidden
>> juju help commands --alias
>>- shows either just the aliases, or everything including aliases
>> juju help commands --all
>>- shows basic commands, aliases and hidden commands.
>>
>> I'd like to add the ability to say a command is hidden so it doesn't
>> show up in the commands list. The purpose for these could be debugging
>> assisting type functions, like "dump-model" or "debug-controller" (two
>> commands that don't yet exist).
>>
>
> I don't see a need for this, I and other users often use juju help
> commands to find or remind me of these commands. If you expect users to run
> these debug/dump commands ever, show them.
>
> Even if you don't expect people to run them, hiding them seems awkward.
> Better to simply educate with good help output about what the command does
> and when/why use the command.
>
>
>> Thoughts?
>>
>> Tim
>>
>>
>


Re: natefinch/npipe/v2 does not support the POSIX read/close semantics juju needs

2016-05-23 Thread Nate Finch
Ugh... I'll look at this tomorrow (12:30 is a bad time for me to be diving
into Windows API BS)

On Mon, May 23, 2016 at 11:40 PM David Cheney 
wrote:

> TL;DR read issue https://bugs.launchpad.net/bugs/1581337
>
> Thumper asked me to forward this to juju-dev for discussion
>
>
> http://reports.vapour.ws/releases/3964/job/run-unit-tests-win2012-amd64/attempt/2384
>
> Fails because, npipe.Accept is blocking on accept
>
> goroutine 41 [syscall, locked to thread]:
> syscall.Syscall(0x7f8139f102c, 0x2, 0x1a8, 0x, 0x0,
> 0xc082183de8, 0x3e5, 0x1c0050)
>  C:/Go/src/runtime/syscall_windows.go:163 +0x5c
> syscall.WaitForSingleObject(0x1a8, 0xc0, 0xc082183e10, 0x0, 0x0)
>  C:/Go/src/syscall/zsyscall_windows.go:693 +0x6f
> gopkg.in/natefinch/npipe%2ev2.waitForCompletion(0x15c, 0xc0822a42a0,
> 0xc08202b880, 0x0, 0x0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> gopkg.in/natefinch/npipe.v2/npipe_windows.go:181
> +0x45 
> gopkg.in/natefinch/npipe%2ev2.(*PipeListener).AcceptPipe(0xc08202b880,
> 0x0, 0x0, 0x0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> gopkg.in/natefinch/npipe.v2/npipe_windows.go:307
> +0x397 
> gopkg.in/natefinch/npipe%2ev2.(*PipeListener).Accept(0xc08202b880,
> 0x0, 0x0, 0x0, 0x0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> gopkg.in/natefinch/npipe.v2/npipe_windows.go:261
> +0x44 
>
> github.com/juju/juju/worker/metrics/spool.(*socketListener).loop(0xc08227e780
> ,
> 0x0, 0x0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/juju/worker/metrics/spool/listener.go:71
> +0xaa
> 
>
> github.com/juju/juju/worker/metrics/spool.NewSocketListener.func1(0xc08227e780)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/juju/worker/metrics/spool/listener.go:43
> +0x65
> 
> created by github.com/juju/juju/worker/metrics/spool.NewSocketListener
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/juju/worker/metrics/spool/listener.go:44
> +0x140
> 
>
> and
>
> socketListener is waiting on the tomb to hit done.
>
> goroutine 43 [chan receive]:
> launchpad.net/tomb.(*Tomb).Wait(0xc08227e790, 0x0, 0x0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> launchpad.net/tomb/tomb.go:108
> +0x5f 
>
> github.com/juju/juju/worker/metrics/spool.(*socketListener).Stop(0xc08227e780)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/juju/worker/metrics/spool/listener.go:56
> +0x18c
> 
> github.com/juju/juju/worker/metrics/sender.(*sender).stop(0xc082269770)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/juju/worker/metrics/sender/sender.go:104
> +0x45
> 
>
> github.com/juju/juju/worker/metrics/sender.(*sender).(github.com/juju/juju/worker/metrics/sender.stop)-fm()
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/juju/worker/metrics/sender/manifold.go:77
> +0x27
> 
>
> github.com/juju/juju/worker/metrics/spool.(*periodicWorker).Kill(0xc0822a40e0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/juju/worker/metrics/spool/listener.go:103
> +0x28
> 
>
> github.com/juju/juju/worker/metrics/sender_test.(*ManifoldSuite).setupWorkerTest.func1(0xc0822791d0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/juju/worker/metrics/sender/manifold_test.go:101
> +0x3d
> 
> github.com/juju/testing.(*CleanupSuite).callStack(0xc0821a9a28,
> 0xc0822791d0, 0xc08226bdd0, 0x2, 0x2)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/testing/cleanup.go:52
> +0x51 
> github.com/juju/testing.(*CleanupSuite).TearDownTest(0xc0821a9a28,
> 0xc0822791d0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/testing/cleanup.go:46
> +0x4d 
> github.com/juju/testing.(*IsolationSuite).TearDownTest(0xc0821a9a20,
> 0xc0822791d0)
>  c:/users/admini~1/appdata/local/temp/tmpyrijbo/gogo/src/
> github.com/juju/testing/isolation.go:38
> +0x44 
> 

Re: Specifying bootstrap machine agent details using ip address

2016-05-23 Thread Nate Finch
Hostname or IP address will both work.
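For reference, a manual-provider stanza along these lines accepts either
form (the environment name and address below are made up; see the linked
docs for the authoritative field list):

```yaml
environments:
  my-manual:
    type: manual
    # Either a resolvable hostname or a plain IP address is accepted here.
    bootstrap-host: 10.0.0.5
```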

On Mon, May 23, 2016 at 1:11 AM phani shankar 
wrote:

> Hi,
>
>  I am new to juju. I was following
> https://jujucharms.com/docs/stable/config-manual document to do manual
> provisioning. While configuring environment.yaml, instead of specifying
> hostname for bootstrap-host, is there a mechanism to specify IP address of
> the machine ?. Please let me know your thoughts.
>
>
> Regards,
> PHANI SHANKAR
>


Re: Three CI builds in the last 48 hours have failed because launchpad had a lie down

2016-05-18 Thread Nate Finch
I agree.  We should snapshot a known-good starting point and start from
there.  It's a huge waste of time to do apt update / upgrade / install git,
bzr, mercurial, etc. every single time we run CI.  That's like a third of
the time it takes to run the unit test CI job.

On Wed, May 18, 2016 at 10:35 PM David Cheney 
wrote:

> This is a huge waste of wall time and bandwidth, and increases the
> chance of failure significantly. This is something I would like to see
> changed.
>
> On Thu, May 19, 2016 at 12:06 PM, John Meinel 
> wrote:
> > CI does a build from scratch in a pristine VM. There is some discussion
> > about changing that in a variety of ways (in a container in a long
> running
> > VM, etc), but it doesn't have a local copy to work from so it has to
> pull it
> > from upstream each time.
> >
> > John
> > =:->
> >
> >
> > On Wed, May 18, 2016 at 8:05 PM, David Cheney <
> david.che...@canonical.com>
> > wrote:
> >>
> >> We already have godeps which can take a set of vcs repos and flick
> >> them to the right revisions.
> >>
> >> Why does CI check out every single dependency from upstream every
> >> single time we do a build?  That introduces `wc -l dependencies.tsv`
> >> points of failure to every single CI run -- not to mention the several
> >> minutes it spends laboriously checking out the code.
> >>
> >> Thanks
> >>
> >> Dave
> >>
> >
> >
>
>


Re: Static Analysis tests

2016-05-02 Thread Nate Finch
I think it's a good point that we may only want to run some tests once, not
once per OS/Arch.  However, I believe these particular tests are still
limited in scope by the OS/arch of the host machine.  The go/build package
respects the build tags in files, so in theory, one run of this on Ubuntu
could miss code that is +build windows.  (we could give go/build different
os/arch constraints, but then we're back to running this N times for N
os/arch variants)

I'm certainly happy to have these tests run in verify.sh, but it sounds
like they may need to be run in each per-OS test run as well.

On Sun, May 1, 2016 at 10:27 PM Andrew Wilkins <andrew.wilk...@canonical.com>
wrote:

> On Thu, Apr 28, 2016 at 11:48 AM Nate Finch <nate.fi...@canonical.com>
> wrote:
>
>> Maybe we're not as far apart as I thought at first.
>>
>
>
>> My thought was that they'd live under github.com/juju/juju/devrules (or
>> some other name) and therefore only get run during a full test run or if
>> you run them there specifically.  What is a full test run if not a test of
>> all our code?  These tests just happen to test all the code at once, rather
>> than piece by piece.  Combining with the other thread, if we also marked
>> them as skipped under -short, you could easily still run go test ./...
>> -short from the root of the juju repo and not incur the extra 16.5 seconds
>> (gocheck has a nice feature where if you call c.Skip() in the SetUpSuite,
>> it skips all the tests in the suite, which is particularly appropriate to
>> these tests, since it's the SetUpSuite that takes all the time).
>>
>
> I'm not opposed to using the Go testing framework in this instance,
> because it makes most sense to write the analysis code in Go. That may not
> always be the case, though, and I don't want to have a rule of "everything
> as Go tests" that means we end up shoe-horning things. This is just
> academic until we need something that doesn't live in the Go ecosystem,
> though.
>
> Most importantly, I don't want to lose the ability to distinguish the
> types of tests. As an example: where we run static analysis should never
> matter, so we can cut a merge job short by performing all of the static
> analysis checks up front. That doesn't matter much if we only gate merges
> on running the tests on one Ubuntu series/arch; but what if we want to
> start gating on Windows, CentOS, or additional architectures? It would not
> make sense to run them all in parallel if they're all going to fail on the
> static analysis tests. And then if we've run them up front, it would be
> ideal to not have to run them on the individual test machines.
>
> So I think it would be OK to have a separate "devrules" package, or
> whatever we want to call it. I would still like these tests to be run by
> verify.sh, so we have one place to go to check that the source code is
> healthy, without also running the unit tests or feature tests. If we have a
> separate package like this, test tags are not really necessary in the short
> term -- the distinction is made by separating the tests into their own
> package. We could still mark them as short/long, but that's orthogonal to
> separation-by-purpose.
>
> Cheers,
> Andrew
>
> Mostly, I just didn't want them to live off in a separate repo or run with
>> a separate tool.
>>
>> On Wed, Apr 27, 2016 at 11:39 PM Andrew Wilkins <
>> andrew.wilk...@canonical.com> wrote:
>>
>>> On Thu, Apr 28, 2016 at 11:14 AM Nate Finch <nate.fi...@canonical.com>
>>> wrote:
>>>
>>>> From the other thread:
>>>>
>>>> I wrote a test that parses the entire codebase under
>>>> github.com/juju/juju to look for places where we're creating a new
>>>> value of crypto/tls.Config instead of using the new helper function that I
>>>> wrote that creates one with more secure defaults.  It takes 16.5 seconds to
>>>> run on my machine.  There's not really any getting around the fact that
>>>> parsing the whole tree takes a long time.
>>>>
>>>> What I *don't* want is to put these tests somewhere else which requires
>>>> more thought/setup to run.  So, no separate long-tests directory or
>>>> anything.  Keep the tests close to the code and run in the same way we run
>>>> unit tests.
>>>>
>>>
>>> The general answer to this belongs back in the other thread, but I agree
>>> that long-running *unit* tests (if there should ever be such a thing)
>>> should not be shunted off to another location. Keep the unit tests with the
>>> unit. Integration tests are a different matter, because t

Re: adding unit tests that take a long time

2016-05-02 Thread Nate Finch
On Thu, Apr 28, 2016 at 11:44 PM Anastasia Macmood <
anastasia.macm...@canonical.com> wrote:

> I understand your desire for a quick turn around.
> But I question the value that you would get from running "fast" (short)
> tests - would this set include some fast running unit tests, integration
> tests and functional tests?
>

Yes.


> Simply because they have been identified as running quickly on some
> machines?
>

Yes.  While absolute speed obviously varies based on hardware/load,
relative speed should be pretty constant across environments.


> How would you know if that "fast" run is comprehensive enough?
>

I know for sure that it would *not* be comprehensive.  That's why we
wouldn't use it to gate landing code.  It is only for developers as a quick
smoke test.

The alternative to this is running NO tests while I'm developing.  Which is
what I do now (and probably what most of us do).  I run tests in the one
package I'm currently editing, and usually even just a subset of those if
they're all full-stack tests.  It's just not practical to do a full 15+
minute test run more often than once every few hours at most, unless
absolutely necessary.


> It sounds to me like you might as well say  "let's run couple of tests
> randomly" and rely on these result until you commit...
>

If they're fast, sure.  It's better than no tests.  I run go vet
constantly.  It doesn't find even 1% of the bugs in my code, but it finds
more than zero, and those ones I can fix before I even compile.  go test
-short is not a test that will tell me if everything is definitely
correct.  It's a test that'll tell me if something is definitely *incorrect*.



> I do not know what you will end up doing with your current dilemma. I
> second Andrew's suggestion as well \o/
> Developing short/long test distinctions and special processing for the
> tests that we maintain seems like a waste of our effort.
>

It's 3 very simple lines of code per slow test (or per suite if the suite
itself is slow).  If that's too much, I could make it a single line of code
by writing a helper function.  You'd just see something like this:

func (s *MySuite) TestFullStackSomething(c *gc.C) {
    testing.SkipIfShort(c)
    // ... rest of test
}

The functionality for the -short flag and for skipping tests is already
built into the test infrastructure we use (both go test and gocheck).
There's no other effort required.  It can be done incrementally as we're
doing other things.  And if someone doesn't like it, doesn't trust it,
thinks it's a bad idea... it won't impact them at all.  They can just
ignore this thread and everything will be exactly as it was (except for one
line of code they can ignore in some tests).

-Nate


Re: adding unit tests that take a long time

2016-04-28 Thread Nate Finch
I don't really understand what you mean by stages of development.  At the
end of the day, they all test the exact same thing - is our code correct?
The form of the test seems like it should be unrelated to when they are
run.  Can you explain why you think running tests of different sorts at the
same time would be a bad thing?

Note that I only want to "divide up tests" temporally... not necessarily
spatially.  If we want to put all our static analysis tests in one
directory, our integration tests in another directory, unit tests in the
directory of the unit... that's totally fine.  I just want an easy way to
run all the fast tests (regardless of what or how they test) to get a
general idea of how badly I've broken juju during development.

On Thu, Apr 28, 2016 at 5:24 PM Anastasia Macmood <
anastasia.macm...@canonical.com> wrote:

> For what it's worth, to distinguish between tests based on the times they
> take to run is borderline naive. Meaningful distinction is what the test
> tests :D
> Unit test checks that unit of work under testing is doing what is
> expected;
> integration tests tests that we play well together;
> functional tests tests behaviour;
> static analysis analyses codebase to ensure conformity to agreed policies.
>
> They all have meaning at different stages of development and to bundle
> them based on the running time is to compromise these stages in long-term.
>
>
> On 29/04/16 05:03, Nate Finch wrote:
>
> Our full set of tests in github.com/juju/juju takes 10-15 minutes to run,
> depending on the speed of your computer.  It's no coincidence that our test
> pyramid looks more like this ▽ than this △.   Also, we have a lot of tests:
>
> /home/nate/src/github.com/juju/juju/$ grep -r ") Test" .
> --include="*_test.go" | wc -l
> 9464
>
> About small, medium, and large tests... I think that's a good
> designation.  Certainly 17 seconds is not a small test.  But I *think* it
> qualifies as medium (hopefully most would be faster).   Here's my
> suggestion, tying this back into what I was talking about originally:
>
> Small tests would be those that run with go test -short.  That gives you
> something you can run frequently during development to give you an idea of
> whether or not you really screwed up.  Ideally each one should be less than
> 100ms to run.  (Note that even if all our tests ran this fast, it would
> still take 15 minutes to run them, not including compilation time).
>
> Medium tests would also be run if you don't use -short.  Medium tests
> would still be something that an average developer could run locally, and
> while she may want to get up to grab a drink while they're running, she
> probably wouldn't have time to run to the coffee shop to get said drink.
> Medium tests would be anything more than 100ms, but probably less than
> 15-20 seconds (and hopefully not many of the latter).  Medium tests would
> be run before making a PR, and as a gating job.
>
> Long tests should be relegated to CI, such as bringing up instances in
> real clouds.
>
> I don't think it's terribly useful to divide tests up by type of test. Who
> cares if it's a bug found with static analysis or by executing the code?
> Either way, it's a bug.  The only thing that really matters is how long the
> tests take, so we can avoid running slow tests over and over.  I run go
> vet, go lint, and go fmt on save in my editor.  That's static analysis, but
> they run far more often than I actually run tests and that's because
> they're always super fast.
>
> I think we all agree that all of these tests (except for CI tests) should
> be used to gate landings.  The question then is, how do you run the tests,
> and how do you divide up the tests?  To me, the only useful metric for
> dividing them up is how long they take to run.  I'll run any kind of test
> you give me so long as it's fast enough.
>
> On Thu, Apr 28, 2016 at 12:39 PM Nicholas Skaggs <
> nicholas.ska...@canonical.com> wrote:
>
>> On 04/28/2016 10:12 AM, Katherine Cox-Buday wrote:
>> > On 04/27/2016 09:51 PM, Nate Finch wrote:
>> >> So, this is exactly why I didn't want to mention the nature of the
>> >> test, because we'd get sidetracked. I'll make another thread to talk
>> >> about that specific test.
>> Sorry I forced you into it, but it was important to this discussion. I
>> was wanting to understand your feelings towards a test you should be
>> running regularly as you develop, aka a unit test, that took more than a
>> trivial amount of time to actually execute.
>> >>
>> >> I do still want to talk about what we can do for unit tests that take
>> >> a long time.  I think giving developers the option to skip lo

Re: adding unit tests that take a long time

2016-04-28 Thread Nate Finch
Our full set of tests in github.com/juju/juju takes 10-15 minutes to run,
depending on the speed of your computer.  It's no coincidence that our test
pyramid looks more like this ▽ than this △.   Also, we have a lot of tests:

/home/nate/src/github.com/juju/juju/$ grep -r ") Test" .
--include="*_test.go" | wc -l
9464

About small, medium, and large tests... I think that's a good designation.
Certainly 17 seconds is not a small test.  But I *think* it qualifies as
medium (hopefully most would be faster).   Here's my suggestion, tying this
back into what I was talking about originally:

Small tests would be those that run with go test -short.  That gives you
something you can run frequently during development to give you an idea of
whether or not you really screwed up.  Ideally each one should be less than
100ms to run.  (Note that even if all our tests ran this fast, it would
still take 15 minutes to run them, not including compilation time).

Medium tests would also be run if you don't use -short.  Medium tests would
still be something that an average developer could run locally, and while
she may want to get up to grab a drink while they're running, she probably
wouldn't have time to run to the coffee shop to get said drink.  Medium
tests would be anything more than 100ms, but probably less than 15-20
seconds (and hopefully not many of the latter).  Medium tests would be run
before making a PR, and as a gating job.

Long tests should be relegated to CI, such as bringing up instances in real
clouds.

I don't think it's terribly useful to divide tests up by type of test. Who
cares if it's a bug found with static analysis or by executing the code?
Either way, it's a bug.  The only thing that really matters is how long the
tests take, so we can avoid running slow tests over and over.  I run go
vet, go lint, and go fmt on save in my editor.  That's static analysis, but
they run far more often than I actually run tests and that's because
they're always super fast.

I think we all agree that all of these tests (except for CI tests) should
be used to gate landings.  The question then is, how do you run the tests,
and how do you divide up the tests?  To me, the only useful metric for
dividing them up is how long they take to run.  I'll run any kind of test
you give me so long as it's fast enough.

On Thu, Apr 28, 2016 at 12:39 PM Nicholas Skaggs <
nicholas.ska...@canonical.com> wrote:

> On 04/28/2016 10:12 AM, Katherine Cox-Buday wrote:
> > On 04/27/2016 09:51 PM, Nate Finch wrote:
> >> So, this is exactly why I didn't want to mention the nature of the
> >> test, because we'd get sidetracked. I'll make another thread to talk
> >> about that specific test.
> Sorry I forced you into it, but it was important to this discussion. I
> was wanting to understand your feelings towards a test you should be
> running regularly as you develop, aka a unit test, that took more than a
> trivial amount of time to actually execute.
> >>
> >> I do still want to talk about what we can do for unit tests that take
> >> a long time.  I think giving developers the option to skip long tests
> >> is handy - getting a reasonable amount of coverage when you're in the
> >> middle of the develop/test/fix cycle.  It would be really useful for
> >> when you're making changes that affect a lot of packages and so you
> >> end up having to run full tests over and over.  Of course, running
> >> just the short tests would not give you 100% confidence, but once
> >> you've fixed everything so the short tests pass, *then* you could do
> >> a long run for thorough coverage.
> >
> > I believe Cheryl has something like this in the works and will be
> > sending a note out on it soon.
> >
> Yes. It is imperative that developers can quickly (and I mean quickly or
> it won't happen!) run unit tests. We absolutely want testruns to be a
> part of the code, build, run iteration loop.
> >> This is a very low friction way to increase developer productivity,
> >> and something we can implement incrementally.  It can also lead to
> >> better test coverage over all.  If you write 10 unit tests that
> >> complete in milliseconds, but were thinking about writing a couple
> >> longer-running unit tests that make sure things are working
> >> end-to-end, you don't have the disincentive of "well, this will make
> >> everyone's full test runs 30 seconds longer", since you can always
> >> skip them with -short.
> >>
> >> The only real negative I see is that it makes it less painful to
> >> write long tests for no reason, which would still affect landing
> >> times but hopefully everyone is still aware of the impact of
> >> long-running test

Re: Static Analysis tests

2016-04-27 Thread Nate Finch
Maybe we're not as far apart as I thought at first.

My thought was that they'd live under github.com/juju/juju/devrules (or
some other name) and therefore only get run during a full test run or if
you run them there specifically.  What is a full test run if not a test of
all our code?  These tests just happen to test all the code at once, rather
than piece by piece.  Combining with the other thread, if we also marked
them as skipped under -short, you could easily still run go test ./...
-short from the root of the juju repo and not incur the extra 16.5 seconds
(gocheck has a nice feature where if you call c.Skip() in the SetUpSuite,
it skips all the tests in the suite, which is particularly appropriate to
these tests, since it's the SetUpSuite that takes all the time).

Mostly, I just didn't want them to live off in a separate repo or run with
a separate tool.

On Wed, Apr 27, 2016 at 11:39 PM Andrew Wilkins <
andrew.wilk...@canonical.com> wrote:

> On Thu, Apr 28, 2016 at 11:14 AM Nate Finch <nate.fi...@canonical.com>
> wrote:
>
>> From the other thread:
>>
>> I wrote a test that parses the entire codebase under github.com/juju/juju to
>> look for places where we're creating a new value of crypto/tls.Config
>> instead of using the new helper function that I wrote that creates one with
>> more secure defaults.  It takes 16.5 seconds to run on my machine.  There's
>> not really any getting around the fact that parsing the whole tree takes a
>> long time.
>>
>> What I *don't* want is to put these tests somewhere else which requires
>> more thought/setup to run.  So, no separate long-tests directory or
>> anything.  Keep the tests close to the code and run in the same way we run
>> unit tests.
>>
>
> The general answer to this belongs back in the other thread, but I agree
> that long-running *unit* tests (if there should ever be such a thing)
> should not be shunted off to another location. Keep the unit tests with the
> unit. Integration tests are a different matter, because they cross multiple
> units. Likewise, tests for project policies.
>
> Andrew's response:
>>
>>
>> *The nature of the test is important here: it's not a test of Juju
>> functionality, but a test to ensure that we don't accidentally use a TLS
>> configuration that doesn't match our project-wide constraints. It's static
>> analysis, using the test framework; and FWIW, the sort of thing that Lingo
>> would be a good fit for.*
>>
>> *I'd suggest that we do organise things like this separately, and run
>> them as part of the "scripts/verify.sh" script. This is the sort of test
>> that you shouldn't need to run often, but I'd like us to gate merges on.*
>>
>> So, I don't really think the method of testing should determine where a
>> test lives or how it is run.  I could test the exact same things with a
>> more common unit test - check the tls config we use when dialing the API is
>> using tls 1.2, that it only uses these specific ciphersuites, etc.  In
>> fact, we have some unit tests that do just that, to verify that SSL is
>> disabled.  However, then we'd need to remember to write those same tests
>> for every place we make a tls.Config.
>>
>
> The method of testing is not particularly relevant; it's the *purpose*
> that matters. You could probably use static analysis for a lot of our
> units; it would be inappropriate, but they'd still be testing units, and so
> should live with them.
>
> The point I was trying to make is that this is not a test of one unit, but
> a policy that covers the entire codebase. You say that you don't want to it
> put them "somewhere else", but it's not at all clear to me where you think
> we *should* have them.
>
>> The thing I like about having this as part of the unit tests is that it's
>> zero friction.  They already gate landings.  We can write them and run them
>> just like we write and run go tests 1000 times a day.  They're not
>> special.  There's no other commands I need to remember to run, scripts I
>> need to remember to set up.  It's go test, end of story.
>>
>
> Using the Go testing framework is fine. I only want to make sure we're not
> slowing down the edit/test cycle by frequently testing things that are
> infrequently going to change. It's the same deal as with integration tests;
> there's a trade-off between the time spent and confidence level.
>
>> The comment about Lingo is valid, though I think we have room for both in
>> our processes.  Lingo, in my mind, is more appropriate at review-time,
>> which allows us to write lingo rules that may not have 100% confidence.
>> They can be strong suggestions rather than gating rules.

Static Analysis tests

2016-04-27 Thread Nate Finch
>From the other thread:

I wrote a test that parses the entire codebase under github.com/juju/juju to
look for places where we're creating a new value of crypto/tls.Config
instead of using the new helper function that I wrote that creates one with
more secure defaults.  It takes 16.5 seconds to run on my machine.  There's
not really any getting around the fact that parsing the whole tree takes a
long time.
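
For illustration, the core of such a check can be sketched with go/parser and go/ast: walk each file's AST and flag composite literals of crypto/tls.Config. This is a simplified stand-in, not Juju's actual test (the real one parses the whole tree); the function and file names here are made up.

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// findTLSConfigLiterals reports the positions of tls.Config{...} (or
// &tls.Config{...}) composite literals in the given source -- the kind of
// construct the gating test would flag outside the approved helper.
func findTLSConfigLiterals(filename, src string) ([]string, error) {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, filename, src, 0)
	if err != nil {
		return nil, err
	}
	var hits []string
	ast.Inspect(f, func(n ast.Node) bool {
		lit, ok := n.(*ast.CompositeLit)
		if !ok {
			return true
		}
		sel, ok := lit.Type.(*ast.SelectorExpr)
		if !ok {
			return true
		}
		if pkg, ok := sel.X.(*ast.Ident); ok && pkg.Name == "tls" && sel.Sel.Name == "Config" {
			hits = append(hits, fset.Position(lit.Pos()).String())
		}
		return true
	})
	return hits, nil
}

func main() {
	src := `package demo
import "crypto/tls"
var cfg = &tls.Config{InsecureSkipVerify: true}
`
	hits, err := findTLSConfigLiterals("demo.go", src)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(hits), "raw tls.Config literal(s) found")
}
```

Feeding it a source string keeps the sketch self-contained; the real test would run this over every package under the repo, which is where the 16.5 seconds goes.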

What I *don't* want is to put these tests somewhere else which requires
more thought/setup to run.  So, no separate long-tests directory or
anything.  Keep the tests close to the code and run in the same way we run
unit tests.

Andrew's response:


*The nature of the test is important here: it's not a test of Juju
functionality, but a test to ensure that we don't accidentally use a TLS
configuration that doesn't match our project-wide constraints. It's static
analysis, using the test framework; and FWIW, the sort of thing that Lingo
would be a good fit for.*

*I'd suggest that we do organise things like this separately, and run them
as part of the "scripts/verify.sh" script. This is the sort of test that
you shouldn't need to run often, but I'd like us to gate merges on.*

So, I don't really think the method of testing should determine where a
test lives or how it is run.  I could test the exact same things with a
more common unit test - check the tls config we use when dialing the API is
using tls 1.2, that it only uses these specific ciphersuites, etc.  In
fact, we have some unit tests that do just that, to verify that SSL is
disabled.  However, then we'd need to remember to write those same tests
for every place we make a tls.Config.

The thing I like about having this as part of the unit tests is that it's
zero friction.  They already gate landings.  We can write them and run them
just like we write and run go tests 1000 times a day.  They're not
special.  There's no other commands I need to remember to run, scripts I
need to remember to set up.  It's go test, end of story.

The comment about Lingo is valid, though I think we have room for both in
our processes.  Lingo, in my mind, is more appropriate at review-time,
which allows us to write lingo rules that may not have 100% confidence.
They can be strong suggestions rather than gating rules.  The type of test
I wrote should be a gating rule - there are no false positives.

To give a little more context, I wrote the test as a suite, where you can
add tests to hook into the code parsing, so we can trivially add more tests
that use the full parsed code, while only incurring the 16.5 second parsing
hit once for the entire suite.  That doesn't really affect this discussion
at all, but I figured people might appreciate that this could be extended
for more than my one specific test.  I certainly wouldn't advocate people
writing new 17-second tests all over the place.

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: adding unit tests that take a long time

2016-04-27 Thread Nate Finch
I was actually trying to avoid talking about the test itself to keep things
shorter ;)

The test is parsing the entire codebase under github.com/juju/juju to look
for places where we're creating a new value of crypto/tls.Config instead of
using the new helper function that I wrote that creates one with more
secure defaults.  There's not really any getting around the fact that
parsing the whole tree takes a long time.

On Wed, Apr 27, 2016 at 1:25 PM Nicholas Skaggs <
nicholas.ska...@canonical.com> wrote:

> This is a timely discussion Nate. I'll avoid saying too much off the
> top, but I do have a question.
>
> On 04/27/2016 12:24 PM, Nate Finch wrote:
> > I just wrote a test that takes ~16.5 seconds on my machine.
> Why does the test take so long? Are you intending it to be a short /
> small scoped test?
>
> Nicholas
>


adding unit tests that take a long time

2016-04-27 Thread Nate Finch
I just wrote a test that takes ~16.5 seconds on my machine. Since we are
trying to *reduce* the time taken by tests, I think it's prudent to give
developers a way to avoid running longer tests (obviously things like the
landing bot etc should always run all the tests).

What I *don't* want is to put long tests somewhere else which requires more
thought/setup to run.  So, no separate long-tests directory or anything.
Keep the tests close to the code and run in the same way we run unit
tests.  This also lowers the barrier to changing older tests that also take
a long time.

I suggest we allow developers to opt out of longer running tests.  This is
already supported by passing the -short flag to go test.

Then, long tests could use

if testing.Short() {
    c.Skip("skipping long running test.")
}

go test will run this test, but go test -short will skip it.  Sweet, easy,
devs can run go test -short over and over, and when they need to run for
landing, they can drop the -short.  CI and the landing bot don't need to
change at all.

We could also do some sort of opt-in scheme with build tags, but then CI
and the landing bot would have to change to opt in, and you'd need separate
files for long tests, which is a pain, etc.

Thoughts?
-Nate


two new helpers in github.com/juju/testing

2016-04-22 Thread Nate Finch
I added a couple new helpers to the testing repo that I think will help us
avoid some of the contortions we have had to do in the past when testing
executables and output on stderr/stdout.

The first is a fairly simple function, CaptureOutput, which takes in a
func() and returns anything written to stderr or stdout while running that
function.  It temporarily replaces os.Stderr and os.Stdout with files on
disk while running the function (necessary because those two values are
*os.File values, so, rather hard to mock out with anything in-memory).

The second is slightly more complicated - PatchExecHelper.

We had PatchExecutable, which creates gnarly bash/batch scripts on disk to
replace expected executables, but it has a few weaknesses - it doesn't
allow you to test the arguments passed to it *and* test stderr/stdout at
the same time, it assumed the thing you were patching out was on the PATH,
and I'm pretty sure it wouldn't work to patch out .exe files on Windows.

PatchExecHelper fixes all of those problems by approaching the problem in a
different way, it creates a function that is intended to mock out a call to
exec.Command.  It's very easy to use, just embed PatchExecHelper in your
test suite, and then use its GetExecCommand() method to get a function that
can be used to mock out exec.Command.

The exact method of how it works is detailed in the godoc, and it's an
example implementation of something I blogged about a while back:
https://npf.io/2015/06/testing-exec-command/

Here's an example of how it can be used:

// production code
var execCommand = exec.Command

func Run(command string, args ...string) ([]byte, error) {
    return execCommand(command, args...).CombinedOutput()
}

// test code
type RunSuite struct {
    testing.CleanupSuite

    // you *must* embed PatchExecHelper in your test suite.
    testing.PatchExecHelper
}

func (s RunSuite) TestRun(c *gc.C) {
    outArgs := make(chan []string, 1)
    f := s.GetExecCommand(testing.PatchExecConfig{
        Stderr:   "this is stderr!",
        Stdout:   "this is stdout!",
        ExitCode: 5,
        Args:     outArgs,
    })
    s.PatchValue(&execCommand, f)

    output, err := Run("echo", "hello world!")
    args := <-outArgs
    // assert on output, err, and args, e.g.:
    c.Check(err, gc.NotNil) // the non-zero ExitCode surfaces as an error
    c.Check(output, gc.NotNil)
    c.Check(args, gc.NotNil)
}

Let me know if you have any questions or comments.
-Nate


Re: LXD polish for xenial

2016-04-19 Thread Nate Finch
Thanks for explaining, John, that makes sense and really helps me
understand the reasoning behind these changes.

On Tue, Apr 19, 2016 at 11:30 PM John Meinel <j...@arbash-meinel.com> wrote:

> On Tue, Apr 19, 2016 at 6:56 PM, Nate Finch <nate.fi...@canonical.com>
> wrote:
>
>> Then I guess I don't understand why it worked fine up until last week.
>>
>
> So up until last week LXD depended on the 'lxc1' package which was the old
> tools for creating containers. That did always set up an 'lxcbr0' bridge,
> which LXD then always used for its own containers.
>
> That was "ok" because "lxc1" was never installed by default so it didn't
> break people who just "install ubuntu". With LXD being
> always-available-always-installed-by-default with Ubuntu, they had to be
> more conservative in their defaults. Otherwise when you just "ec2
> start-instance ubuntu" it might not work as you expect because it has
> additional bridges installed-by-default on your cloud instance that didn't
> exist in Trusty.
>
> John
> =:->
>
>
>
>> > LXD is our code.  Juju is our code.  Surely we can code/configure the
>>> two
>>> > to work together such that it just works for the 95% case?
>>>
>>> I assure you we're aware of this. Unfortunately there's not a good
>>> answer.
>>>
>>> Tycho
>>>
>>


Re: LXD polish for xenial

2016-04-19 Thread Nate Finch
Then I guess I don't understand why it worked fine up until last week.

On Tue, Apr 19, 2016, 10:39 AM Tycho Andersen <tycho.ander...@canonical.com>
wrote:

> On Tue, Apr 19, 2016 at 01:42:08PM +0000, Nate Finch wrote:
> > On Tue, Apr 19, 2016 at 7:49 AM John Meinel <j...@arbash-meinel.com>
> wrote:
> >
> > > ...
> > >>>
> > >>
> > >
> > >>
> > >>> That's probably the cause of the other confusion in the updated docs
> -
> > >>> now we *do* want the bridge named lxdbr0 not lxcbr0. If the user
> > >>> already has lxc setup in some fashion there's a different error from
> > >>> juju telling them to run the dpkg-reconfigure command. That should
> > >>> presumably always appear whenever there's no usable bridge.
> > >>>
> > >> The flip flopping back and forth on the bridge name is going to cause
> > >> havoc even after release. Do we have clear direction now from LXD on
> how
> > >> they plan to handle the competing network bridge names? I understand
> we're
> > >> now saying a dpkg-reconfigure is required, but I'm not clear if /
> when we
> > >> plan to make this easier.
> > >>
> > >
> > > So, LXD is going to be installed-by-default on Xenial. Which means that
> > > everyone will always have it installed. However, that leads to a
> problem
> > > with having a "standard" subnet dedicated to LXD. While it works for
> lots
> > > of people, there are plenty of cases where that subnet would already
> be in
> > > use elsewhere (in say a company's internal LAN). Which means LXD chose
> to
> > > go with link-local addresses by default. AIUI that means that the
> > > containers can talk to the host (and probably the host can route them
> to
> > > the public internet?) but they aren't able to talk to other containers
> or
> > > have inbound traffic.
> > > My personal hope is that the dpkg-reconfigure steps can provide a
> 70%-case
> > > safe default set of addresses (possibly using autodetection), and give
> a
> > > nice wizard to help people, while still allowing them to override it
> in case
> > >
> >
> > I really fear for what this will do to adoption.  I feel like we're
> > throwing the common "it just works" case out the window to enable a very
> > small edge case to work slightly better.  I'd much rather have defaults
> > where juju/lxd just works with most common setups.  For the one little
> > exception - if the network on your corporate LAN conflicts - we give a
> nice
> > error message and give the user tools to reconfigure.
>
> The problem here is that there's no way to detect this case (e.g. what
> if the network in question is routed behind your default gateway
> somehow?). Things just break, and it can be difficult for people to
> figure out why.
>
> LXD's bridge setup defaults to ipv6 link local with an HTTP proxy that
> allows apt-get to work. The current dpkg-reconfigure prompts will
> suggest reasonable values if you decide you want to enable ipv4.
>
> > Making everyone
> > configure their network (not the easiest or friendliest task even for
> > people on our own team), is surely going to cause us to lose a *lot* of
> > users with a bad first impression.
> >
> > LXD is our code.  Juju is our code.  Surely we can code/configure the two
> > to work together such that it just works for the 95% case?
>
> I assure you we're aware of this. Unfortunately there's not a good
> answer.
>
> Tycho
>


Re: LXD polish for xenial

2016-04-19 Thread Nate Finch
On Tue, Apr 19, 2016 at 7:49 AM John Meinel  wrote:

> ...
>>>
>>
>
>>
>>> That's probably the cause of the other confusion in the updated docs -
>>> now we *do* want the bridge named lxdbr0 not lxcbr0. If the user
>>> already has lxc setup in some fashion there's a different error from
>>> juju telling them to run the dpkg-reconfigure command. That should
>>> presumably always appear whenever there's no usable bridge.
>>>
>> The flip flopping back and forth on the bridge name is going to cause
>> havoc even after release. Do we have clear direction now from LXD on how
>> they plan to handle the competing network bridge names? I understand we're
>> now saying a dpkg-reconfigure is required, but I'm not clear if / when we
>> plan to make this easier.
>>
>
> So, LXD is going to be installed-by-default on Xenial. Which means that
> everyone will always have it installed. However, that leads to a problem
> with having a "standard" subnet dedicated to LXD. While it works for lots
> of people, there are plenty of cases where that subnet would already be in
> use elsewhere (in say a company's internal LAN). Which means LXD chose to
> go with link-local addresses by default. AIUI that means that the
> containers can talk to the host (and probably the host can route them to
> the public internet?) but they aren't able to talk to other containers or
> have inbound traffic.
> My personal hope is that the dpkg-reconfigure steps can provide a 70%-case
> safe default set of addresses (possibly using autodetection), and give a
> nice wizard to help people, while still allowing them to override it in case
>

I really fear for what this will do to adoption.  I feel like we're
throwing the common "it just works" case out the window to enable a very
small edge case to work slightly better.  I'd much rather have defaults
where juju/lxd just works with most common setups.  For the one little
exception - if the network on your corporate LAN conflicts - we give a nice
error message and give the user tools to reconfigure.  Making everyone
configure their network (not the easiest or friendliest task even for
people on our own team), is surely going to cause us to lose a *lot* of
users with a bad first impression.

LXD is our code.  Juju is our code.  Surely we can code/configure the two
to work together such that it just works for the 95% case?


Re: Thunderbirds are Go 1.6

2016-04-10 Thread Nate Finch
Glorious. Thanks so much for all the work, Michael and Curtis and team!

On Sun, Apr 10, 2016, 1:16 PM Cheryl Jennings 
wrote:

> Thank you for all your work on this, Curtis!
>
> On Sun, Apr 10, 2016 at 11:55 AM, Curtis Hovey-Canonical <
> cur...@canonical.com> wrote:
>
>> Ladies and Gentlemen, We build with Go 1.6 everywhere.
>>
>> Take a moment to consider the implications, because that means we do
>> not use Go 1.2, 1.3, or 1.5. Juju 1.x and 2.x for Windows, OS X, and
>> Linux (precise, trusty, wily, xenial, and centos7) are built with Go 1.6.
>>
>> There was concern about how to support precise if the code grew 1.2
>> incompatibilities. Michael Hudson-Doyle's fabulous package was trivial
>> to backport to precise. We don't need to create a new process to support
>> precise. The golang-1.6 package was also easy to copy to wily. Since
>> the Go 1.5 unit tests are less reliable than the Go 1.6 unit tests, the
>> hour spent setting up wily to build with golang-1.6 will pay for itself
>> in a day.
>>
>> Some unit test combinations are still using Go 1.2 because the tests
>> assume OS features based on the Go version instead of checking
>> the OS:
>>
>> 1. The Makefile rules for installing deps are very stale. Precise unit
>> tests will fail when forced to use Go 1.6 because they cannot
>> start mongo:
>> error command line: unknown option sslOnNormalPorts
>> But we see that actual deployments of precise built with Go 1.6 work.
>>
>> 2. Centos and Windows unit tests are still Go 1.2 because many lxd
>> related tests fail when run with Go 1.6.
>>
>> The git-merge-juju job is using the xenial daily image to gate merges.
>> The released image is too stale to do an unattended dist-upgrade.
>>
>> The unit test runs for xenial-amd64, xenial-arm64, xenial-s390x,
>> xenial-ppc64el, wily-amd64, trusty-amd64, trusty-i386, and
>> trusty-ppc64el will use the golang (1.6) or golang-1.6 (1.6) packages.
>> The go1.2 trusty unit tests amd64, arm64, i386, and ppc64el are gone.
>>
>> Ubuntu source packages prefer the golang-1.6 package, and fallback to
>> golang (>= 1.2). We needed a new builder because Go 1.6 needs more
>> memory. We saw errors like this with 2G of memory:
>> golang-1.6/pkg/tool/linux_amd64/link: running gcc failed:
>> fork/exec /usr/bin/gcc: cannot allocate memory
>> Centos is created using the same Go package as trusty. Windows is
>> cross compiled as 64 bit for agent and 32 bit for clients. OS X is
>> natively compiled using Go 1.6 on CI's OS X host.
>>
>> We also got word it was safe to upgrade our arm64 host to xenial. This
>> is done. The ppc64el hosts were also upgraded this week. We are
>> expanding tests for arm64 and ppc64el.
>>
>> --
>> Curtis Hovey
>> Canonical Cloud Development and Operations
>> http://launchpad.net/~sinzui
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>


Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
I obviously wasn't clear. I was suggesting a --yes-i-really-mean-it flag on
kill-controller, but if you passed just -y we show the prompt anyway
(instead of erroring out on an unknown flag).

My point with the big prompt was in response to Rick saying it should never
be needed and should only be used in extreme circumstances... If that's how
we feel, we should make sure the user knows it.

Many people have said on this thread that the difference between
kill-controller and destroy-controller is not immediately obvious, and
we're the devs on the project. I'm just trying to make sure we make it
clear for users.

On Wed, Apr 6, 2016, 4:54 PM Tim Penhey <tim.pen...@canonical.com> wrote:

> On 06/04/16 23:13, Nick Veitch wrote:
> > Sure, I am just concerned about a proliferation of commands to do the
> > same (ultimately) task
> >
> > destroy-controller
>
> The most correct way to take down a controller.
>
> > kill-controller
>
> The OMG it is broken, please do as much as you can and I know I'm going
> to have to manually check any resources left around that it couldn't
> clean up.
>
> > forget/purge-controller
>
> Remove local references to the controller.
>
>
> Not really the same things at all.
>
> Tim
>
>
> >
> >
> >
> > On 6 April 2016 at 11:59, Horacio Duran <horacio.du...@canonical.com
> > <mailto:horacio.du...@canonical.com>> wrote:
> >
> > The issue I see with that approach is that in that case
> > kill-controller might be doing less than you expect instead of more,
> > suppose the controller is having transient issues and kill
> > controller cannot reach the cloud for deletion, this would forget
> > the controller and leave it in the cloud, forget-controller instead
> > tells us very clearly what is going to happen, the change is going
> > to be local and not affect the controller.
> > My 2c
> >
> >
> > On Wednesday, 6 April 2016, Nick Veitch <nick.vei...@canonical.com
> > <mailto:nick.vei...@canonical.com>> wrote:
> >
> > just my tuppence
> >
> > instead of having another command, can't we just add this as an
> > option to kill-controller?
> >
> > juju kill-controller --cleanup 
> >
> >
> >
> > On 6 April 2016 at 11:05, Horacio Duran
> > <horacio.du...@canonical.com> wrote:
> >
> >
> > I might be biased by years of apt-get but purge makes me
> > think that you are going to do what kill is supposed to do,
> > forget sounds more aligned with what you are really aiming for.
> >
> > On Wednesday, 6 April 2016, Andrew Wilkins
> > <andrew.wilk...@canonical.com> wrote:
> >
> > On Tue, Apr 5, 2016 at 2:29 AM Cheryl Jennings
> > <cheryl.jenni...@canonical.com> wrote:
> >
> > Relevant bug:
> >  https://bugs.launchpad.net/juju-core/+bug/1553059
> >
> > We should provide a way to clean up controllers
> > without making the user manually edit juju's files.
> >
> >
> > Unless anyone objects, or has a better spelling, I will
> > be adding a command to do this:
> >
> > juju purge-controller 
> >
> > The command will require a "-y" or prompt for
> > confirmation, like kill-controller. It will not attempt
> > to destroy the controller, it will just remove the
> > details of it from the client.
> >
> > (Alternative suggestion for spelling: "juju
> > forget-controller". Purge-controller may suggest that
> > we're purging a controller of its contents, rather than
> > purging the controller from the client?)
> >
> > Cheers,
> > Andrew
> >
> > On Mon, Apr 4, 2016 at 7:05 AM, Nate Finch
> > <nate.fi...@canonical.com> wrote:
> >
> > This just happened to me, too.  Kill-controller
> > needs to work if at all possible.  That's the
> > whole point.  And yes, users may not hit
> > specific problems, but devs do, and that wastes
> > our time trying to figure out how to manually
> >   

Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
So, I agree, we should never need to use kill controller... but sometimes
you might need it, like if someone SSH's into their controller and messes
it up, etc.

Right now, kill-controller is in all the help docs, i.e. juju help
commands... are we planning on changing that?  I'm not sure I'm on board
with having hidden commands, because they will invariably leak out, and
then we just have commands that everyone knows about, but that are
undocumented.

I'd much prefer giant warning text:

DANGER!!! USE THIS COMMAND ONLY AS A LAST RESORT!!
IT MAY LEAVE RESOURCES RUNNING IN YOUR CLOUD!  ALWAYS
TRY destroy-controller FIRST!

And make the flag to disable the "are you sure?" prompt something long and
annoying to type, not just -y, because that's too easy to just bang out
with muscle memory.

In fact, if I were feeling sneaky, I might accept -y and still make them
confirm, since they're obviously just banging out the command without
thinking.

-Nate



On Wed, Apr 6, 2016 at 11:21 AM Rick Harding <rick.hard...@canonical.com>
wrote:

> I strongly feel that we're avoiding and dancing around the issue.
>
> The user should NEVER EVER have to use kill-controller. If someone types
> that we've failed to build Juju to the standards to which I feel we all
> should expect us to live up to. There is only ONE way for a user to take
> down a controller, destroy-controller.
>
> If a user has models running on that controller we've given them a flag to
> safely say "I know I have stuff running, but I don't care" with the current
> stuff running on there.
>
> I completely understand that destroy-controller is not the reverse of
> bootstrap. We've filed a bug for that and should correct the bug rather
> than moving to other commands and processes to deal with that.
>
> https://bugs.launchpad.net/juju-core/+bug/1563615
>
> kill-controller is a top secret tool we keep in our back pockets for
> emergency debugging when crap that's beyond our planning/control has
> occurred and we've not yet updated destroy-controlller to be able to handle
> that. It should not be in the main docs, release notes, etc. It's
> dangerous, folks should be discouraged from ever using it. Even when we do
> use it, we should be saying things like "Well, destroy-controller seems to
> be broken, we've got this potentially messy/risky way of trying to work
> around it that we can try...but..."
>
> As for the user case where a user registers on someone else's controller
> and wants to no longer have that controller appear, then we should update
> destroy-controller to handle that. There's only ONE way to remove a
> controller from your list, destroy-controller. We should be able to note
> that the user requesting destroy-controller does not have sufficient access
> to remove it and tell them "You've got these models running on this
> controller". And if they want to destroy with the flag to remove all their
> models, then we should do that and remove it from their system. If they
> have no running models, we should note that we're removing that
> controller's information from their system. If there's user confusion we
> can look at a message "You are not the controller admin, removing the
> controller information from your cache" or the like.
>
> I don't feel that adding another command for another way to remove a
> controller from list-controllers is something we want to do and we
> especially don't want to do it this week as we try to freeze 2.0 for
> release.
>
> If folks think I'm missing a point please reach out to me and lets move
> this to a high-bandwidth discussion, but I wanted to share the original
> design/thought on the destroy vs kill controller commands. I want everyone
> to share the feeling that if our users ever type kill-controller into their
> terminals we've failed them and realize that.
>
> Thanks
>
> Rick
>
> On Wed, Apr 6, 2016 at 11:02 AM Cheryl Jennings <
> cheryl.jenni...@canonical.com> wrote:
>
>> Another relevant bug:  https://bugs.launchpad.net/juju-core/+bug/1566426
>>
>> Maybe kill-controller tries to destroy through the API, but has a time
>> out if things get "stuck" where it will fall back to the provider.  I was
>> joking when I suggested yesterday in IRC that we should bring back --force
>> for kill-controller, but maybe we need it (or at least a timeout).
>>
>> On Wed, Apr 6, 2016 at 7:53 AM, Nate Finch <nate.fi...@canonical.com>
>> wrote:
>>
>>> Oh I see.  Yes, I agree that we should always try the right way first
>>> and only use the provider if necessary (especially if using the provider
>>> leaves garbage around).
>>>
>>> It seems like there's no reason why we

Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
Oh I see.  Yes, I agree that we should always try the right way first and
only use the provider if necessary (especially if using the provider leaves
garbage around).

It seems like there's no reason why we couldn't make a --force flag do it
that way in 2.0 (aside from time constraints).

On Wed, Apr 6, 2016 at 10:48 AM Aaron Bentley <aaron.bent...@canonical.com>
wrote:

> On 2016-04-06 10:45 AM, Nate Finch wrote:
> > Wait, didn't destroy-environment --force fall back to the provider?
> > I thought that was the whole point of --force
>
> No, it didn't fall back.  It uses the provider unconditionally,
> without trying a normal destroy-controller first.
>
> Aaron
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
Wait, didn't destroy-environment --force fall back to the provider?  I
thought that was the whole point of --force

On Wed, Apr 6, 2016 at 10:24 AM Aaron Bentley 
wrote:

> On 2016-04-06 08:22 AM, Andrew Wilkins wrote:
> > What I would like to see is: * destroy-controller to take on a
> > --force flag, causing it to do what kill-controller does now, and
> > what destroy-environment --force used to do
>
> What kill-controller does now, where it attempts to use Juju, but
> falls back to the provider, is much more valuable to us in QA than
> what destroy-environment --force used to do.  We don't want to leak
> resources unnecessarily, but we do want things to die when we try to
> kill them.
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
I agree with your assessment of the vocabulary, for what it's worth.

On Wed, Apr 6, 2016 at 9:41 AM Horacio Duran <horacio.du...@canonical.com>
wrote:

> As a non English native myself I find destroy to be much more final than
> kill, to me destroy sounds like kill and then burn just in case, but I
> don't know if this extends to other non English speakers too.
>
> On Wednesday, 6 April 2016, Nate Finch <nate.fi...@canonical.com> wrote:
>
>> Also +1 to Andrew's proposal.  In particular, the difference between kill
>> and destroy is pretty arbitrary from a vocabulary standpoint, so it's not
>> clear which one is the default and which one is the extreme measure.  A
>> flag on destroy is a lot more clear in that regard (and reduces the number
>> of commands needed).
>>
>> On Wed, Apr 6, 2016 at 8:41 AM Horacio Duran <horacio.du...@canonical.com>
>> wrote:
>>
>>> Strong +1 to what Andrew just proposed
>>>
>>> On Wednesday, 6 April 2016, Andrew Wilkins <andrew.wilk...@canonical.com>
>>> wrote:
>>>
>>>> On Wed, Apr 6, 2016 at 7:35 PM Rick Harding <rick.hard...@canonical.com>
>>>> wrote:
>>>>
>>>>> +1 to the -1 of a new command for this. I'd like to raise the
>>>>> discussion with the technical board, as I'd like to understand why the
>>>>> change that the team had to make made it impossible for 
>>>>> kill-controller
>>>>> to function and what we could do to allow the team to remove legacy code,
>>>>> but still be able to kill off things.
>>>>>
>>>>
>>>> Sorry, I probably should have started a new thread; this is orthogonal.
>>>> Still worth talking about with the board, but orthogonal to removing
>>>> details of a controller. You might "juju register" someone else's
>>>> controller, and then want to remove it from your client without destroying
>>>> it.
>>>>
>>>> I think the UX could do with rethinking. There's a few issues:
>>>>  (1) It's too easy to lose resources via kill-controller. See:
>>>> https://bugs.launchpad.net/juju-core/+bug/1559701 and
>>>> https://bugs.launchpad.net/juju-core/+bug/1566011
>>>>  (2) It's not clear from the name what kill-controller vs.
>>>> destroy-controller is, and so it's easy to assume they either do the same
>>>> thing, or just randomly choose one and run it. That leads to (1) happening
>>>> more than we'd like or expect.
>>>>  (3) destroy-controller is harder to use than kill-controller for the
>>>> standard case of destroying the controller and all of its hosted models.
>>>> You have to pass "--destroy-all-models" to destroy-controller, which you
>>>> don't have to pass to kill-controller. I don't understand the point of
>>>> that, given that you're already asked whether you want to destroy the
>>>> controller or not.
>>>>
>>>> What I would like to see is:
>>>>  * kill-controller to be dropped
>>>>  * destroy-controller's --destroy-all-models flag to be dropped, and
>>>> implied by the accepted prompt (or -y)
>>>>  * destroy-controller to take on a --force flag, causing it to do what
>>>> kill-controller does now, and what destroy-environment --force used to do
>>>>  * a new command to remove a controller from the client
>>>>
>>>> Why a new command? Because removing/forgetting is orthogonal to
>>>> destroying. It's plain weird to say "kill-controller --forget" (or
>>>> --cleanup, or whatever) if you're removing details of a live controller
>>>> that you just don't want to use any more.
>>>>
>>>> Cheers,
>>>> Andrew
>>>>
>>>> On Tue, Apr 5, 2016 at 11:55 PM Andrew Wilkins <
>>>>> andrew.wilk...@canonical.com> wrote:
>>>>>
>>>>>> On Tue, Apr 5, 2016 at 2:29 AM Cheryl Jennings <
>>>>>> cheryl.jenni...@canonical.com> wrote:
>>>>>>
>>>>>>> Relevant bug:  https://bugs.launchpad.net/juju-core/+bug/1553059
>>>>>>>
>>>>>>> We should provide a way to clean up controllers without making the
>>>>>>> user manually edit juju's files.
>>>>>>>
>>>>>>
>>>>>> Unless anyone objects, or has a better spelling, I will be adding a
>>>>>> command to 

Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
Also +1 to Andrew's proposal.  In particular, the difference between kill
and destroy is pretty arbitrary from a vocabulary standpoint, so it's not
clear which one is the default and which one is the extreme measure.  A
flag on destroy is a lot more clear in that regard (and reduces the number
of commands needed).

On Wed, Apr 6, 2016 at 8:41 AM Horacio Duran <horacio.du...@canonical.com>
wrote:

> Strong +1 to what Andrew just proposed
>
> On Wednesday, 6 April 2016, Andrew Wilkins <andrew.wilk...@canonical.com>
> wrote:
>
>> On Wed, Apr 6, 2016 at 7:35 PM Rick Harding <rick.hard...@canonical.com>
>> wrote:
>>
>>> +1 to the -1 of a new command for this. I'd like to raise the discussion
>>> with the technical board, as I'd like to understand why the change that
>>> the team had to make made it impossible for kill-controller to function,
>>> and what we could do to allow the team to remove legacy code, but still
>>> be able to kill off things.
>>>
>>
>> Sorry, I probably should have started a new thread; this is orthogonal.
>> Still worth talking about with the board, but orthogonal to removing
>> details of a controller. You might "juju register" someone else's
>> controller, and then want to remove it from your client without destroying
>> it.
>>
>> I think the UX could do with rethinking. There's a few issues:
>>  (1) It's too easy to lose resources via kill-controller. See:
>> https://bugs.launchpad.net/juju-core/+bug/1559701 and
>> https://bugs.launchpad.net/juju-core/+bug/1566011
>>  (2) It's not clear from the name what kill-controller vs.
>> destroy-controller is, and so it's easy to assume they either do the same
>> thing, or just randomly choose one and run it. That leads to (1) happening
>> more than we'd like or expect.
>>  (3) destroy-controller is harder to use than kill-controller for the
>> standard case of destroying the controller and all of its hosted models.
>> You have to pass "--destroy-all-models" to destroy-controller, which you
>> don't have to pass to kill-controller. I don't understand the point of
>> that, given that you're already asked whether you want to destroy the
>> controller or not.
>>
>> What I would like to see is:
>>  * kill-controller to be dropped
>>  * destroy-controller's --destroy-all-models flag to be dropped, and
>> implied by the accepted prompt (or -y)
>>  * destroy-controller to take on a --force flag, causing it to do what
>> kill-controller does now, and what destroy-environment --force used to do
>>  * a new command to remove a controller from the client
>>
>> Why a new command? Because removing/forgetting is orthogonal to
>> destroying. It's plain weird to say "kill-controller --forget" (or
>> --cleanup, or whatever) if you're removing details of a live controller
>> that you just don't want to use any more.
>>
>> Cheers,
>> Andrew
>>
>> On Tue, Apr 5, 2016 at 11:55 PM Andrew Wilkins <
>>> andrew.wilk...@canonical.com> wrote:
>>>
>>>> On Tue, Apr 5, 2016 at 2:29 AM Cheryl Jennings <
>>>> cheryl.jenni...@canonical.com> wrote:
>>>>
>>>>> Relevant bug:  https://bugs.launchpad.net/juju-core/+bug/1553059
>>>>>
>>>>> We should provide a way to clean up controllers without making the
>>>>> user manually edit juju's files.
>>>>>
>>>>
>>>> Unless anyone objects, or has a better spelling, I will be adding a
>>>> command to do this:
>>>>
>>>> juju purge-controller 
>>>>
>>>> The command will require a "-y" or prompt for confirmation, like
>>>> kill-controller. It will not attempt to destroy the controller, it will
>>>> just remove the details of it from the client.
>>>>
>>>> (Alternative suggestion for spelling: "juju forget-controller".
>>>> Purge-controller may suggest that we're purging a controller of its
>>>> contents, rather than purging the controller from the client?)
>>>>
>>>> Cheers,
>>>> Andrew
>>>>
>>>> On Mon, Apr 4, 2016 at 7:05 AM, Nate Finch <nate.fi...@canonical.com>
>>>>> wrote:
>>>>>
>>>>>> This just happened to me, too.  Kill-controller needs to work if at
>>>>>> all possible.  That's the whole point.  And yes, users may not hit
>>>>>> specific problems, but devs do, and that w

Re: Unable to kill-controller

2016-04-06 Thread Nate Finch
I agree with Horacio: forget-controller is good because it only does what
you expect, whereas failing to kill a controller but making it look like a
success is bad.

Honestly, I think the problem is kill-controller; it should just go back to
being a flag on destroy-controller.  Then we'd just have destroy and
forget.

On Wed, Apr 6, 2016, 6:59 AM Horacio Duran <horacio.du...@canonical.com>
wrote:

> The issue I see with that approach is that in that case kill-controller
> might be doing less than you expect instead of more. Suppose the
> controller is having transient issues and kill-controller cannot reach the
> cloud for deletion: this would forget the controller and leave it in the
> cloud. forget-controller instead tells us very clearly what is going to
> happen: the change is going to be local and not affect the controller.
> My 2c
>
>
> On Wednesday, 6 April 2016, Nick Veitch <nick.vei...@canonical.com> wrote:
>
>> just my tuppence
>>
>> instead of having another command, can't we just add this as an option to
>> kill-controller?
>>
>> juju kill-controller --cleanup 
>>
>>
>>
>> On 6 April 2016 at 11:05, Horacio Duran <horacio.du...@canonical.com>
>> wrote:
>>
>>>
>>> I might be biased by years of apt-get, but purge makes me think that you
>>> are going to do what kill is supposed to do; forget sounds more aligned
>>> with what you are really aiming at.
>>>
>>> On Wednesday, 6 April 2016, Andrew Wilkins <andrew.wilk...@canonical.com>
>>> wrote:
>>>
>>>> On Tue, Apr 5, 2016 at 2:29 AM Cheryl Jennings <
>>>> cheryl.jenni...@canonical.com> wrote:
>>>>
>>>>> Relevant bug:  https://bugs.launchpad.net/juju-core/+bug/1553059
>>>>>
>>>>> We should provide a way to clean up controllers without making the
>>>>> user manually edit juju's files.
>>>>>
>>>>
>>>> Unless anyone objects, or has a better spelling, I will be adding a
>>>> command to do this:
>>>>
>>>> juju purge-controller 
>>>>
>>>> The command will require a "-y" or prompt for confirmation, like
>>>> kill-controller. It will not attempt to destroy the controller, it will
>>>> just remove the details of it from the client.
>>>>
>>>> (Alternative suggestion for spelling: "juju forget-controller".
>>>> Purge-controller may suggest that we're purging a controller of its
>>>> contents, rather than purging the controller from the client?)
>>>>
>>>> Cheers,
>>>> Andrew
>>>>
>>>> On Mon, Apr 4, 2016 at 7:05 AM, Nate Finch <nate.fi...@canonical.com>
>>>>> wrote:
>>>>>
>>>>>> This just happened to me, too.  Kill-controller needs to work if at
>>>>>> all possible.  That's the whole point.  And yes, users may not hit 
>>>>>> specific
>>>>>> problems, but devs do, and that wastes our time trying to figure out
>>>>>> how to manually clean up the garbage.
>>>>>>
>>>>>> On Mon, Apr 4, 2016 at 8:33 AM Rick Harding <
>>>>>> rick.hard...@canonical.com> wrote:
>>>>>>
>>>>>>> On Sun, Apr 3, 2016 at 6:56 PM Andrew Wilkins <
>>>>>>> andrew.wilk...@canonical.com> wrote:
>>>>>>>
>>>>>>>> In a non-beta release we would make sure that the config changes
>>>>>>>> aren't backwards incompatible.
>>>>>>>>
>>>>>>>
>>>>>>> I think this is the key thing. I think that kill-controller is an
>>>>>>> exception to this rule. I think we should always at least give the
>>>>>>> user the ability to remove their stuff and start over with the new
>>>>>>> alpha/beta/rc release. I'd like to ask us to explore making
>>>>>>> kill-controller an exception to this policy and that if tests prove
>>>>>>> we can't bootstrap on one beta and kill with trunk that it's a
>>>>>>> blocking bug for us.
>>>>>>> --
>>>>>>> Juju-dev mailing list
>>>>>>> Juju-dev@lists.ubuntu.com
>>>>>>> Modify settings or unsubscribe at:
>>>>>>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>>>>>>
>>>>>>
>>>>>> --
>>>>>> Juju-dev mailing list
>>>>>> Juju-dev@lists.ubuntu.com
>>>>>> Modify settings or unsubscribe at:
>>>>>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>>>>>
>>>>>>
>>>>>
>>> --
>>> Juju-dev mailing list
>>> Juju-dev@lists.ubuntu.com
>>> Modify settings or unsubscribe at:
>>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>>
>>>
>>
>>
>> --
>> Nick Veitch,
>> CDO Documentation
>> Canonical
>>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Unable to kill-controller

2016-04-04 Thread Nate Finch
This just happened to me, too.  Kill-controller needs to work if at all
possible.  That's the whole point.  And yes, users may not hit specific
problems, but devs do, and that wastes our time trying to figure out how to
manually clean up the garbage.

On Mon, Apr 4, 2016 at 8:33 AM Rick Harding 
wrote:

> On Sun, Apr 3, 2016 at 6:56 PM Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
>> In a non-beta release we would make sure that the config changes aren't
>> backwards incompatible.
>>
>
> I think this is the key thing. I think that kill-controller is an
> exception to this rule. I think we should always at least give the user the
> ability to remove their stuff and start over with the new alpha/beta/rc
> release. I'd like to ask us to explore making kill-controller an exception
> to this policy and that if tests prove we can't bootstrap on one beta and
> kill with trunk that it's a blocking bug for us.
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Go 1.6 is now in trusty-proposed

2016-03-28 Thread Nate Finch
I'll just note that we've had flaky tests for as long as I've been working
on Juju, and there's never a "good" time to fix them. :)

On Mon, Mar 28, 2016 at 11:48 AM Aaron Bentley 
wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> On 2016-03-28 09:03 AM, Katherine Cox-Buday wrote:
> > Generally +1 on this, but I'm also intrigued by Martin's
> > statistic... do we currently weight test failures by how likely
> > they are to fail (i.e. how likely they are flaky)? That seems like
> > it would be a great metric to use to decide which to fix first.
>
> We don't do it on the likelihood of failure, but we do it on the
> frequency of failure.
>
> http://reports.vapour.ws/releases/top-issues
>
> I report on these on the cross-team call, and once the 2.0 settles
> down, I'll be reporting them on the release call again.
>
> Aaron
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2
>
> iQEcBAEBCAAGBQJW+VJcAAoJEK84cMOcf+9hWrwH/0JradfscIE0wnt+yCW9nNCR
> 9hTHI2U19v1VuP6pWI4UiC7srfojPI8EXXEXrrAhF9rT8tpVK4EcJRJK9RvWvvz5
> BEquHMS0+eROFOqDJFavEB8hU7BKHErzkSwSG8uKq7JuwHs9gNtQO9z9fIhVKjnr
> aP4z2IliCqbYfXbupfSTD8TmqhI0AipQymTg3QB4C3sJdXzc5GjzIIckUo/X7aJj
> zH1tEtlwOdP0c9F+8ZVs1j6AAkb+uDGc/1Qr4MT1kInqGkli2UNF4TOX/AihNPyH
> iwYgq6O7uOkijFTrL9obRfbXxIFw1WCc9cYzxbRYnGfQff47Dyj7/BUStPPH0i0=
> =8FQ6
> -END PGP SIGNATURE-
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Go 1.6 is now in trusty-proposed

2016-03-28 Thread Nate Finch
+1, don't retry... devs need to feel the pain in order to get proper
motivation to fix this stuff...

On Mon, Mar 28, 2016 at 9:03 AM Katherine Cox-Buday <
katherine.cox-bu...@canonical.com> wrote:

> Just wanted to say thank you 100x to all involved!
>
> On 03/24/2016 01:03 AM, Michael Hudson-Doyle wrote:
> > Hi,
> >
> > As of a few minutes ago, there is now a golang-1.6 package in
> > trusty-proposed:
> > https://launchpad.net/ubuntu/trusty/+source/golang-1.6 (thanks for the
> > review and copy, Steve).
> >
> > One difference between this and the package I prepared earlier is that
> > it does not install /usr/bin/go but rather /usr/lib/go-1.6/bin/go so
> > Makefiles and such will need to be adjusted to invoke that directly or
> > put /usr/lib/go-1.6/bin on $PATH or whatever. (This also means it can
> > be installed alongside the golang packages that are already in
> > trusty).
> >
> > Cheers,
> > mwh
> > (Hoping that we can now really properly ignore gccgo-4.9 ppc64el bugs!)
> >
> > On 17 February 2016 at 07:58, Michael Hudson-Doyle
> >  wrote:
> >> I have approval for the idea but also decided to wait for 1.6 and
> >> upload that instead. I'm also on leave currently so hopefully this can
> >> all happen in early March.
> >>
> >> Cheers,
> >> mwh
> >>
> >> On 17/02/2016 1:17 am, "John Meinel"  wrote:
> >>> To start with, thanks for working on this. However, doesn't this also
> >>> require changing the CI builds to use your ppa?
> >>>
> >>> What is the current state of this? I was just looking around and
> >>> noticed golang1.5-go isn't in anything specific for Trusty that I can
> >>> see. I realize if it's going into an SRU it requires a fair amount of
> >>> negotiation with other teams, so I'm not surprised to see it take a
> >>> while. I just wanted to check how it was going.
> >>>
> >>> Thanks,
> >>>
> >>> John
> >>> =:->
> >>>
> >>> On Mon, Jan 18, 2016 at 7:32 AM, Michael Hudson-Doyle
> >>>  wrote:
>  Hi all,
> 
>  As part of the plan for getting Go 1.5 into trusty (see here
>  https://wiki.ubuntu.com/MichaelHudsonDoyle/Go15InTrusty) I've built
>  packages (called golang1.5-go rather than golang-go) for trusty in my
>  ppa:
> 
>  https://launchpad.net/~mwhudson/+archive/ubuntu/go15-trusty/+packages
> 
>  (assuming 3:1.5.3-0ubuntu4 actually builds... I seem to be having a
>  "make stupid packaging mistakes" day)
> 
>  I'll write up a SRU bug to start the process of getting this into
>  trusty tomorrow but before it does end up in trusty it would seem like
>  a good idea to run the CI tests using juju-core packages built with
>  this version of the go compiler. Is that something that's feasible to
>  arrange?
> 
>  The only packaging requirement should be to change the build-depends
>  to be on golang1.5-go rather than golang-go or gccgo.
> 
>  Cheers,
>  mwh
> 
>  --
>  Juju-dev mailing list
>  Juju-dev@lists.ubuntu.com
>  Modify settings or unsubscribe at:
>  https://lists.ubuntu.com/mailman/listinfo/juju-dev
> >>>
>
> --
> -
> Katherine
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Go 1.6 is now in trusty-proposed

2016-03-24 Thread Nate Finch
Does this mean we can assume 1.6 for everything from now on, or is there
some other step we're waiting on?  I have some code that only needs to
exist while we support 1.2, and I'd be happy to just delete it.

On Thu, Mar 24, 2016, 4:07 AM Tim Penhey  wrote:

> Awesome news Michael.
>
> Thank you for all your work on this.
>
> Tim
>
> On 24/03/16 19:03, Michael Hudson-Doyle wrote:
> > Hi,
> >
> > As of a few minutes ago, there is now a golang-1.6 package in
> > trusty-proposed:
> > https://launchpad.net/ubuntu/trusty/+source/golang-1.6 (thanks for the
> > review and copy, Steve).
> >
> > One difference between this and the package I prepared earlier is that
> > it does not install /usr/bin/go but rather /usr/lib/go-1.6/bin/go so
> > Makefiles and such will need to be adjusted to invoke that directly or
> > put /usr/lib/go-1.6/bin on $PATH or whatever. (This also means it can
> > be installed alongside the golang packages that are already in
> > trusty).
> >
> > Cheers,
> > mwh
> > (Hoping that we can now really properly ignore gccgo-4.9 ppc64el bugs!)
> >
> > On 17 February 2016 at 07:58, Michael Hudson-Doyle
> >  wrote:
> >> I have approval for the idea but also decided to wait for 1.6 and
> >> upload that instead. I'm also on leave currently so hopefully this can
> >> all happen in early March.
> >>
> >> Cheers,
> >> mwh
> >>
> >> On 17/02/2016 1:17 am, "John Meinel"  wrote:
> >>>
> >>> To start with, thanks for working on this. However, doesn't this also
> >>> require changing the CI builds to use your ppa?
> >>>
> >>> What is the current state of this? I was just looking around and
> >>> noticed golang1.5-go isn't in anything specific for Trusty that I can
> >>> see. I realize if it's going into an SRU it requires a fair amount of
> >>> negotiation with other teams, so I'm not surprised to see it take a
> >>> while. I just wanted to check how it was going.
> >>>
> >>> Thanks,
> >>>
> >>> John
> >>> =:->
> >>>
> >>> On Mon, Jan 18, 2016 at 7:32 AM, Michael Hudson-Doyle
> >>>  wrote:
> 
>  Hi all,
> 
>  As part of the plan for getting Go 1.5 into trusty (see here
>  https://wiki.ubuntu.com/MichaelHudsonDoyle/Go15InTrusty) I've built
>  packages (called golang1.5-go rather than golang-go) for trusty in my
>  ppa:
> 
>  https://launchpad.net/~mwhudson/+archive/ubuntu/go15-trusty/+packages
> 
>  (assuming 3:1.5.3-0ubuntu4 actually builds... I seem to be having a
>  "make stupid packaging mistakes" day)
> 
>  I'll write up a SRU bug to start the process of getting this into
>  trusty tomorrow but before it does end up in trusty it would seem like
>  a good idea to run the CI tests using juju-core packages built with
>  this version of the go compiler. Is that something that's feasible to
>  arrange?
> 
>  The only packaging requirement should be to change the build-depends
>  to be on golang1.5-go rather than golang-go or gccgo.
> 
>  Cheers,
>  mwh
> 
>  --
>  Juju-dev mailing list
>  Juju-dev@lists.ubuntu.com
>  Modify settings or unsubscribe at:
>  https://lists.ubuntu.com/mailman/listinfo/juju-dev
> >>>
> >>>
> >>
> >
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Master is blocked in preparation for 2.0-beta3

2016-03-22 Thread Nate Finch
May I ask why we're not making a branch for beta3, so work may continue on
master?

On Tue, Mar 22, 2016 at 1:07 PM Cheryl Jennings <
cheryl.jenni...@canonical.com> wrote:

> Hi Everyone,
>
> Master is currently blocked to allow us to get to a stable 2.0-beta3.
> While we are working towards that goal, please do not JFDI changes into
> master.
>
> Please let me know if you have questions or concerns.
>
> Thanks!
> -Cheryl
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


juju versions are now in github.com/juju/version

2016-03-19 Thread Nate Finch
For a recent change in 2.0, we needed to be able to reference the
version.Number struct from both juju-core and charm.v6.  To support this,
we moved most of the contents of github.com/juju/juju/version to a
top-level repo at github.com/juju/version.  Note that certain
juju-core-specific data still lives in the original package, such as the
current Juju binary version.

Just wanted everyone to be aware of the change, so that if you're writing
code that interacts with version numbers, you know where that code now
lives.

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Diffing string checker

2016-02-25 Thread Nate Finch
I agree with Horacio, you're my hero, Andrew.

I was actually just looking at sergi's diffmatchpatch implementation for
exactly this purpose a couple days ago.  Thanks for doing that!
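The gist of such a checker can be sketched with a naive
common-prefix/common-suffix split. This toy stand-in is not the actual
fancycheck code, which uses a real diff algorithm (diff-match-patch)
rather than a single prefix/suffix scan:

```go
package main

import "fmt"

// highlightDiff finds the common prefix and suffix of a and b, then
// wraps the differing middles in ANSI color codes (red background for
// the first string, green for the second). Identical runs are left
// uncolored, so the eye goes straight to the mismatch.
func highlightDiff(a, b string) (string, string) {
	// Common prefix.
	p := 0
	for p < len(a) && p < len(b) && a[p] == b[p] {
		p++
	}
	// Common suffix, not overlapping the prefix.
	s := 0
	for s < len(a)-p && s < len(b)-p && a[len(a)-1-s] == b[len(b)-1-s] {
		s++
	}
	const red, green, reset = "\x1b[41m", "\x1b[42m", "\x1b[0m"
	ha := a[:p] + red + a[p:len(a)-s] + reset + a[len(a)-s:]
	hb := b[:p] + green + b[p:len(b)-s] + reset + b[len(b)-s:]
	return ha, hb
}

func main() {
	ha, hb := highlightDiff("machine-0: started", "machine-1: started")
	fmt.Println(ha)
	fmt.Println(hb)
}
```

A gocheck-style Checker would wrap something like this and return both
highlighted strings in its error output.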

On Thu, Feb 25, 2016 at 6:09 AM Andrew Wilkins 
wrote:

> On Thu, Feb 25, 2016 at 5:23 PM roger peppe  wrote:
>
>> Nice. FWIW I have a related tool that tries to help find where a regexp
>> mismatch has happened. It parses the output of gocheck, so it's
>> probably not that useful to non-acme users but you might want
>> to consider including the approach (http://paste.ubuntu.com/15195795/)
>> for fancycheck.Matches and ErrorMatches, perhaps, to highlight where
>> the mismatch starts.
>>
>
> Thanks, that is a better idea. I'll look into doing that next time I'm
> staring at a wall of text :)
>
>
>> On 25 February 2016 at 06:26, Andrew Wilkins
>>  wrote:
>> > Howdy,
>> >
>> > Occasionally I'll change a test, and some string equality test will fail
>> > with a wall of text. Sometimes we shouldn't be checking the whole
>> > string, but sometimes it's legitimate to do so, and it can be
>> > difficult/tedious to spot the differences.
>> >
>> > I've just written a checker which diffs the two string args, and
>> > colourises the output. You may find it useful. I'm using a red/green
>> > background, but I also added bold for insertions and strike-through for
>> > deletions, in case you're red/green colour blind. My terminal doesn't do
>> > strike-through, and yours probably doesn't either. Anyway, the important
>> > thing is you can see the difference between bits that are the same vs.
>> > insertions/deletions.
>> >
>> > Code is at github.com/axw/fancycheck. Just replace
>> > c.Assert("x", gc.Equals, "y")
>> > with
>> > c.Assert("x", fancycheck.StringEquals, "y")
>> >
>> > Cheers,
>> > Andrew
>> >
>> > --
>> > Juju-dev mailing list
>> > Juju-dev@lists.ubuntu.com
>> > Modify settings or unsubscribe at:
>> > https://lists.ubuntu.com/mailman/listinfo/juju-dev
>> >
>>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Please merge master into your feature branches

2016-02-18 Thread Nate Finch
So uh... how are we going to deliver patches to 1.25 if CI can't run it?

On Thu, Feb 18, 2016 at 9:48 PM Andrew Wilkins 
wrote:

> On Fri, Feb 19, 2016 at 10:03 AM Ian Booth 
> wrote:
>
>> FYI for folks developing feature branches for juju-core.
>>
>> juju-core master has been updated to include the first round of
>> functionality to improve the bootstrap experience. The consequence of
>> this is that CI scripts needed to be updated to match. This means that
>> any feature branch which has not had master commit 294388 or later
>> merged in will not work with CI and so will not be blessed for release.
>>
>
> Further to this, please see the draft 2.0 release notes for instructions:
> http://paste.ubuntu.com/15127859/
>
> If there's anything you can't figure out from that and/or command help,
> please let us know so we can fix the docs.
>
> Cheers,
> Andrew
>
>
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Introducing, juju resources!

2016-02-13 Thread Nate Finch
One idea is to have a resource type of "directory" and upload everything in
that directory.  The current code has only one resource type - file.  But
it's copied to a directory on the unit named by the resource, so it would
be pretty easy to instead copy a number of files to that directory.
Of course, this would be a resources v2 feature. I'm sure there's rework
that would be needed on the back end, but it certainly seems doable.  We
also had ideas for making a github repo as a resource, which might be an
even better UX, depending on your use case.

On Sat, Feb 13, 2016, 9:10 AM Rick Harding 
wrote:

> On Fri, Feb 12, 2016 at 11:06 PM Aaron Bentley <
> aaron.bent...@canonical.com> wrote:
>
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA256
>>
>> On 2016-02-12 04:26 PM, Katherine Cox-Buday wrote:
>> > Moonstone have been working hard on a new feature coming up in Juju
>> > 2.0 called "Juju Resources", and we're now at a point where we can
>> > share the goodness and call for bugs/feedback!
>>
>> I'm glad to see you folks are working on this. On the QA side, we've
>> been wanting Juju to support some kind of native blob storage,
>> basically since we started.
>>
>> Is is necessary to enumerate the names in the charm? I'd rather we
>> didn't. One of our use cases is plugins for Jenkins. The Jenkins charm
>> has a setting that tells it which plugins to install, but there's
>> no predetermined list. It would be nice if the plugins could be
>> retrieved in a Juju-native way, but it would be painful to have to
>> update the charm every time we added a new plugin.  And it would be
>> bad if people ended up forking the charm just to adjust the list of
>> plugins.
>>
>> We could bundle the plugins in a single tarfile, but then updating the
>> tarfile would be annoying. For plugins, it's best if each plugin is
>> its own file and there are no restrictions on the filenames.
>
>
> Your read is correct. You must declare the resources. It's helpful to
> users to know what to stick in there and for the charm to be able to handle
> different items. In your case, a single tar file of the plugins you'd like
> to enable could be sent up and the charm would be able to untar, loop
> through them, etc. Ideally the charm would be uploaded to the store with a
> standard set of plugins and folks could then customize which ones they
> wanted by creating their own tar of plugins.
>
> I'll make sure we keep this in mind though. We've started with one file
> per resource, and maybe there's an idea here of "one or more" where you
> might have a charm that takes a data set file. You can deploy with a data
> set, or multiple files of data that all need to be processed, rather than
> building into a single file. We'll have to think it through for a future
> iteration of resources.
>
> Rick
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju terminology change: controllers and models

2016-02-02 Thread Nate Finch
FYI, I noticed ServiceDeployWithNetworks still exists as a client and
facade method, but it's only called by tests. Maybe it should be removed?

On Tue, Feb 2, 2016, 8:34 PM Ian Booth  wrote:

> Hey all
>
> As has been mentioned previously in this list, for the Juju 2.0 release we
> have
> been working on fundamental terminology changes. In particular, we now talk
> about controllers and models instead of state servers and environments.
>
> To this end, a rather large change has landed in master and the upcoming
> 2.0-alpha2 release of Juju will reflect these changes. There are several
> things to be aware of. We have also taken the opportunity to remove a lot
> of code which existed to support older Juju clients. Needless to say, this
> Juju 2.0 release will not support upgrading from 1.x - it works only as a
> clean install.
>
> Note: some of the changes will initially break the GUI and users of the
> Python Juju Client - work is underway to update these products for the
> next alpha3 release scheduled for next week. For those wishing to continue
> to test Juju 2.0 without the breaking changes, the alpha1 release is still
> available via ppa:juju/experimental. Separate communications to affected
> stakeholders have been/will be made as part of the 2.0-alpha2 release.
>
> So, the changes are roughly broken down as follows:
>
> - CLI command name changes
> - facade name changes
> - api method and parameter name changes
> - facade method restructure
> - internal api name changes
> - external artifact/data changes (including on the wire changes)
> - deprecated and older version facades are removed
>
> 1. CLI command name changes
>
> As an obvious example, create-environment becomes create-model. We also
> have destroy-controller etc. This alpha2 release will also contain many
> of the other CLI changes targeted for 2.0, e.g. juju backup create
> becomes juju create-backup. Not all 2.0 CLI syntax is supported yet, but
> all the environment -> model changes are done.
>
> You will also use -m  instead of -e .
>
> The release notes will go into more detail.
>
> All user facing text now refers to model instead of environment.
>
> 2. Facade name changes
>
> If you are curious, see https://goo.gl/l4JqGd for a representative
> listing of all facade and method names and which ones have been changed.
>
> The main one is EnvironmentManager becomes ModelManager. These changes
> affect
> external API clients like the GUI and Python Juju client.
>
> 3. api method and parameter name changes
>
> By way of example:
> EnvironInfo() on the undertaker facade becomes ModelInfo().
> The param struct ModifyEnvironUsers becomes ModifyModelUsers etc.
> EnvironTag attributes become ModelTag.
>
> 4. Service facade method restructure
>
> As part of making our facades more manageable and maintainable when API
> changes
> are required, a whole bunch of service related methods are moved off the
> Client
> facade and onto the Service facade. This had already been started months
> ago,
> and there were shims in place to keep existing clients working, but now
> the job
> is finished.
> eg Client.AddRelation() becomes Service.AddRelation() etc.
>
> This change will break the GUI and Python Juju client.
>
> 5. Internal API name changes
>
> Things like state.AllEnvironments() becomes state.AllModels(), we now use
> names.ModelTag instead of names.EnvironTag, and many, many more.
>
> Note: the names package has not been forked into a .V2 yet (with EnvironTag
> removed) as there are dependencies to sort out. Please do not use
> EnvironTag
> anymore.
>
> 6. External artifact/data changes (including on the wire changes)
>
> There are several main examples here.
> On the wire, we transmit model-uuid tags rather than environment-uuid tags.
> In mongo, we store model-uuid doc fields rather than env-uuid.
> In agent.conf files we store Model info rather than Environment tags.
> In the controller blob store, we store and manage blobs for buckets rather
> than
> environments.
> The controller HTTP endpoints are /model/... rather than /environment/...
> In backups we store model tags and details, not environment.
>
> With the blobstore, we've created a .V2 branch which core uses. The
> charmstore
> will continue to use V1 for now.
>
> 7. Deprecated and older version facades are removed
>
> All facade versions have been bumped up. Older facades are removed, and we
> don't
> support fallback to older servers. The main example for facade removal is
> uniter
> v0 and v1 are gone. With deprecated API removal, service deployment now
> expects
> placement parameters rather than catering for older clients that did not
> support
> placement, so there's only one ServiceDeployMethod() instead of 3. All in
> all,
> the code base has been simplified by removing all support for deprecated
> facades
> and API calls.
>
> There are still a couple of grey areas internally to be sorted out. But
> the user
> facing changes are done (pending more CLI work between now and 

Re: Build errors

2016-02-01 Thread Nate Finch
Juju is not structured to work with go get, per se.  Please see the
contributing document for details on building juju:

https://github.com/juju/juju/blob/master/CONTRIBUTING.md


On Mon, Feb 1, 2016 at 1:33 PM Neale Ferguson  wrote:

> I built juju a few weeks ago and attempted to do it today using the same
> procedure. During the stage:
>
> go get -d -v github.com/juju/juju/...
>
> I am now getting:
>
> github.com/Azure/azure-sdk-for-go (download)
> package github.com/juju/juju/cmd/juju
> imports
> github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/github.com/Azure/go
> -autorest/autorest
> :
> cannot find package
> "
> github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/github.com/Azure/g
> o-autorest/autorest
> "
> in any of:
>
> /home/neale/go/src/
> github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/
> github.com/Azure/go-autorest/autorest (from $GOROOT)
>
> /home/neale/rpmbuild/BUILD/juju/src/
> github.com/Azure/azure-sdk-for-go/Godep
> s/_workspace/src/github.com/Azure/go-autorest/autorest
> 
> (from $GOPATH)
>
>
> And the build fails.
>
> Is this some transient condition related to the Azure git repo or
> something else?
>
> Neale
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>


Re: Tags and object IDs

2016-01-25 Thread Nate Finch
I was really trying not to give too much information about this exact case,
so we could avoid talking about a specific implementation, and focus on the
more general question of how we identify objects.  Yes, we get the bytes
using an HTTP request, but that is irrelevant to my question :)

On Mon, Jan 25, 2016 at 2:00 AM John Meinel <j...@arbash-meinel.com> wrote:

> On Sat, Jan 23, 2016 at 1:28 AM, William Reade <
> william.re...@canonical.com> wrote:
>
>> On Fri, Jan 22, 2016 at 9:53 PM, Nate Finch <nate.fi...@canonical.com>
>> wrote:
>>
>>> Working in the model layer on the server between the API and the DB.
>>> Specifically in my instance, an API call comes in from a unit, requesting
>>> the bytes for a resource.  We want to record that this unit is now using
>>> the bytes from that specific revision of the resource.  I have a pointer to
>>> a state.Unit, and a function that takes a Resource metadata object and some
>>> reference to the unit, and does the actual transaction to the DB to store
>>> the unit's ID and the resource information.
>>>
>>
>> I'm a bit surprised that we'd be transferring those bytes over an API
>> call in the first place (is json-over-websocket really a great way to send
>> potential gigabytes? shouldn't we be getting URL+SHA256 from the apiserver
>> as we do for charms, and downloading separately? and do we really want to
>> enforce charmstore == apiserver?); and I'd point out that merely having
>> agreed to deliver some bytes to a client is no indication that the client
>> will actually be using those bytes for anything; but we should probably
>> chat about those elsewhere, I'm evidently missing some context.
>>
>
> So I would have expected that we'd rather use a similar raw
> HTTP-to-get-content instead of a JSON request (given the intent of
> resources is that they may be GB in size), but regardless it is the intent
> that you download the bytes from the charm rather from the store directly.
> Similar to how we currently fetch the charm archive content itself.
> As for "will you be using it", the specific request from the charm is when
> it calls "resource-get" which is very specifically the time when the charm
> wants to go do something with those bytes.
>
> John
> =:->
>
>
>> But whenever we do record the unit-X-uses-resource-Y info I assume we'll
>> have much the same stuff available in the apiserver, in which case I think
>> you just want to pass the *Unit back into state; without it, you just need
>> to read the doc from the DB all over again to make appropriate
>> liveness/existence checks [0], and why bother unless you've already hit an
>> assertion failure in your first txn attempt?
>>
>> Cheers
>> William
>>
>> [0] I imagine you're not just dumping (unit, resource) pairs into the DB
>> without checking that they're sane? that's really not safe
>>
>>
>>> On Fri, Jan 22, 2016 at 3:34 PM William Reade <
>>> william.re...@canonical.com> wrote:
>>>
>>>> Need a bit more context here. What layer are you working in?
>>>>
>>>> In general terms, entity references in the API *must* use tags; entity
>>>> references that leak out to users *must not* use tags; otherwise it's a
>>>> matter of judgment and convenience. In state code, it's annoying to use
>>>> tags because we've already got the globalKey convention; in worker code
>>>> it's often justifiable if not exactly awesome. See
>>>> https://github.com/juju/juju/wiki/Managing-complexity#workers
>>>>
>>>> Cheers
>>>> William
>>>>
>>>> On Fri, Jan 22, 2016 at 6:02 PM, Nate Finch <nate.fi...@canonical.com>
>>>> wrote:
>>>>
>>>>> I have a function that is recording which unit is using a specific
>>>>> resource.  I wrote the function to take a UnitTag, because that's the
>>>>> closest thing we have to an ID type. However, I and others seem to 
>>>>> remember
>>>>> hearing that Tags are really only supposed to be used for the API. That
>>>>> leaves me with a problem - what can I pass to this function to indicate
>>>>> which unit I'm talking about?  I'd be fine passing a pointer to the unit
>>>>> object itself, but we're trying to avoid direct dependencies on state.
>>>>> People have suggested just passing a string (presumably
>>>>> unit.Tag().String()), but then my API is too lenient - it appears to say
>>>>> "give me any s

Re: Defaulting tests internal to the package

2016-01-24 Thread Nate Finch
I think the idea that tests of an internal function are worthless because
you might just not be using that function is a fairly specious argument.
That could be applied to unit tests at any level.  What if you're not using
the whole package?  By that logic, the only reliable tests are full stack
tests.

I think that William's position is correct in an ideal world.  The public
interface is the only contract you really need to worry about.  In theory,
all your packages would be simple enough that this would not be difficult,
and the tests would be straightforward.  However, in practice, some
packages are fairly complex beasts, and it can literally be an order of
magnitude easier to test a bit of logic internally than to test it
externally.  And I don't just mean easier in terms of time it takes to
write (although that shouldn't be ignored - we do have deadlines), but also
in simplicity of the code in the tests... the more complicated the code in
the tests, the harder they are to get right, the more difficult it is for
other people to understand them, and the more difficult it is to maintain
them.

Even if you have perfect code coverage, using inversion of control and
dependency injection to be able to do every test externally, you can still
have bugs that slip through, either due to weaknesses in your tests, or
even just poorly coded mocks that don't actually behave like the production
things they're mocking.  Isolating a piece of code down to its bare bones
can make its intended behavior a lot easier to define and understand, and
therefore easier to test.

I think small scope internal unit tests deliver incredible bang for the
buck.  They're generally super easy to write, super easy to understand, and
give you confidence that you haven't screwed up a piece of logic. They're
certainly not ironclad proof that your application as a whole doesn't have
bugs in it... but they are often very strong proof that this bit of logic
does not have bugs.

While I have seen plenty of bugs that were caused by a developer not using
a piece of code he/she was supposed to use, it has almost always been
exported code that wasn't used - a utility function in another package, for
example.  Certainly it's possible you could accidentally stop using an
internal function and not have anyone notice... but I think that's fairly
unlikely compared to the concrete benefits you get from simpler,
faster-to-write internal tests.

-Nate


Re: Tags and object IDs

2016-01-22 Thread Nate Finch
Working in the model layer on the server between the API and the DB.
Specifically in my instance, an API call comes in from a unit, requesting
the bytes for a resource.  We want to record that this unit is now using
the bytes from that specific revision of the resource.  I have a pointer to
a state.Unit, and a function that takes a Resource metadata object and some
reference to the unit, and does the actual transaction to the DB to store
the unit's ID and the resource information.

On Fri, Jan 22, 2016 at 3:34 PM William Reade <william.re...@canonical.com>
wrote:

> Need a bit more context here. What layer are you working in?
>
> In general terms, entity references in the API *must* use tags; entity
> references that leak out to users *must not* use tags; otherwise it's a
> matter of judgment and convenience. In state code, it's annoying to use
> tags because we've already got the globalKey convention; in worker code
> it's often justifiable if not exactly awesome. See
> https://github.com/juju/juju/wiki/Managing-complexity#workers
>
> Cheers
> William
>
> On Fri, Jan 22, 2016 at 6:02 PM, Nate Finch <nate.fi...@canonical.com>
> wrote:
>
>> I have a function that is recording which unit is using a specific
>> resource.  I wrote the function to take a UnitTag, because that's the
>> closest thing we have to an ID type. However, I and others seem to remember
>> hearing that Tags are really only supposed to be used for the API. That
>> leaves me with a problem - what can I pass to this function to indicate
>> which unit I'm talking about?  I'd be fine passing a pointer to the unit
>> object itself, but we're trying to avoid direct dependencies on state.
>> People have suggested just passing a string (presumably
>> unit.Tag().String()), but then my API is too lenient - it appears to say
>> "give me any string you want for an id", but what it really means is "give
>> me a serialized UnitTag".
>>
>> I think most places in the code just use a string for an ID, but this
>> opens up the code to abuses and developer errors.
>>
>> Can someone explain why tags should only be used in the API? It seems
>> like the perfect type to pass around to indicate the ID of a specific
>> object.
>>
>> -Nate
>>


Re: Defaulting tests internal to the package

2016-01-22 Thread Nate Finch
I'm glad to hear Roger's opinion about testing internal code... that's
exactly how I feel.  True unit tests of small bits of code are easy to
write, easy to read and understand, and give you confidence that your code
is doing what you think it'll do in a wide variety of cases.  If the unit
test fails, it's generally incredibly clear where the fault in the code
lies, and it's easy to fix either the code or the test.  In 6 months, you
or someone else can go back to the tests, easily understand what they are
testing, and modify them trivially when making a bug fix or improvement.

While you certainly *can* write code that tests all the corner cases of
some small utility function through the external API... those tests will
almost always be incredibly opaque and hard to understand, and generally
much more complicated than they would be if you could just test the utility
function by itself.  This is often a problem I have with our current
tests... it's hard to see what is actually being tested, and even harder to
verify that the test itself is correct and complete.  When a test fails,
you often have no clue where the actual problem lies, because the test
traverses so much code.

Of course, I definitely think you *also* need tests of the exported API of
a package... but small unit tests are still incredibly valuable.


On Fri, Jan 22, 2016 at 7:06 AM roger peppe 
wrote:

> On 22 January 2016 at 09:40, William Reade 
> wrote:
> > I do not want anyone to add unit tests for non-exported code, because
> those
> > tests are almost completely worthless.
>
> I'd beg to disagree with this. I think it very much depends what you are
> testing. For me tests are about giving confidence that the code works.
> Sometimes as part of the implementation of a package I'll write a
> function with a well defined contract that doesn't justify being made
> public because it's intimately related to the way that the package is
> implemented. The function might be reasonably tricky, and it may be hard
> to reach all its corner cases at arm's length through the public API.
> It might not even be possible, but that doesn't mean that it's not
> useful to test the function, as it may provide a foundation for more
> varied future functionality.
>
> In this kind of scenario, I think it makes good sense to write a unit test
> for the unexported function. If you change the implementation, you might
> throw away the function and its tests, and that's fine, but this approach,
> in my experience, can give good confidence in some tricky situations with
> significantly less effort than doing everything through the public API.
>
> As always, there's a trade-off here - we want to maximise confidence
> while minimising time and effort spent writing and maintaining the
> code. It's always going to be a judgement call and flat assertions like
> "those tests are almost completely worthless" are not that helpful IMHO.
>
> I do agree that writing external tests *when reasonable* is the way,
> and I think that export_test.go is a reasonable escape hatch for the
> times when it's useful to write internal tests.
>
> Nate, you seem to be arguing that because we can't have total separation
> between external and internal APIs, then we should not have any separation
> at all. I don't agree - I think that keeping as much as possible to
> the external API, but having a few well-defined exported objects (in
> export_test.go) works well, and reflects the nuanced situation that we
> actually live in.
>


Tags and object IDs

2016-01-22 Thread Nate Finch
I have a function that is recording which unit is using a specific
resource.  I wrote the function to take a UnitTag, because that's the
closest thing we have to an ID type. However, I and others seem to remember
hearing that Tags are really only supposed to be used for the API. That
leaves me with a problem - what can I pass to this function to indicate
which unit I'm talking about?  I'd be fine passing a pointer to the unit
object itself, but we're trying to avoid direct dependencies on state.
People have suggested just passing a string (presumably
unit.Tag().String()), but then my API is too lenient - it appears to say
"give me any string you want for an id", but what it really means is "give
me a serialized UnitTag".

I think most places in the code just use a string for an ID, but this opens
up the code to abuses and developer errors.

Can someone explain why tags should only be used in the API? It seems like
the perfect type to pass around to indicate the ID of a specific object.

-Nate


Defaulting tests internal to the package

2016-01-21 Thread Nate Finch
[reposting this to the wider juju-dev mailing list]

So, I hit an interesting problem a while back.  I have some unit tests that
need to be internal tests, thus they are in 'package foo'.  However, my
code needs to use some testhelper functions that someone else wrote...
which are in 'package foo_test'.  It's impossible to reference those, so I
had to move those helpers to package foo, which then either requires that
we make them exported (which is exactly like using export_test.go, which,
in the juju-core Oakland sprint, we all agreed was bad), or all tests that
use the helpers need to be in 'package foo'... which means I had to go
change a bunch of files to be in package foo, and change the calls in those
files from foo.SomeFunc() to just SomeFunc().

Given the assumption that some tests at some point will make sense to be
internal tests, and given it's likely that helper functions/types will want
to be shared across suites - should we not just always make our tests in
package foo, and avoid this whole mess in the first place?

(A note, this happened again today - I wanted to add a unit test of a
non-exported function to an existing test suite, and can't because the unit
tests are in the foo_test package)

There seems to only be two concrete benefits to putting tests in package
foo_test:
1. It avoids circular dependencies if your test needs to import something
that imports the package you're testing.
2. It makes your tests read like normal usages of your package, i.e.
calling foo.SomeFunc().
The first is obviously non-negotiable when it comes up... but I think it's
actually quite rare (and might indicate a mixture of concerns that warrants
investigation).  The second is nice to have, but not really essential (if
we want tests that are good examples, we can write example functions).

So, I propose we put all tests in package foo by default.  For those devs
that want to only test the exported API of a package, you can still do
that.  But this prevents problems where helper code can't be shared between
the two packages without ugliness and/or dumb code churn, and it means
anyone can add unit tests for non-exported code without having to create a
whole new file and testsuite.

-Nate


Re: Copyright headers for auto-generated files?

2016-01-07 Thread Nate Finch
So, two things:
1.) We don't need to worry about this anymore, since I've removed the need
for the generated code.  But to clarify, I was using stringer (
https://github.com/golang/tools/blob/master/cmd/stringer/stringer.go) which
generates files with String() methods for enums.
2.) There's like 1000 go generate tools to embed files as resources in Go
code... we should probably just use one of those, rather than rolling our
own.  (for example, many of them gzip the contents to make them smaller,
etc).

On Thu, Jan 7, 2016 at 3:12 AM Ian Booth  wrote:

> We're already using go generate in the cloud-credentials feature branch.
> And the
> code we generate has the correct header.
>
> We have a top level generate package into which it is intended various
> generators will live.
>
> Here's a generator which reads the contents of a file and assigns that to
> a Go
> constant.
>
> https://github.com/juju/juju/blob/cloud-credentials/generate/filetoconst.go
>
>
>
> On 07/01/16 17:47, Andrew McDermott wrote:
> > Can the "thing" that is generating the content not add the header?
> >
> > On 6 January 2016 at 22:18, Katherine Cox-Buday <
> > katherine.cox-bu...@canonical.com> wrote:
> >
> >> Nate ran into an interesting problem yesterday. If we begin to use go
> >> generate directives to auto-generate code, do we care that this code
> >> doesn't have a Copyright header? We check the generated file into source
> >> control, but as the file is meant to be regenerated in the event of a
> >> change, we don't want to edit it by hand to add the header.
> >>
> >> --
> >> -
> >> Katherine
> >>
> >>


Re: 1st time builder

2015-12-17 Thread Nate Finch
Usually if there are compile time issues, it's because you haven't run
godeps yet.
Make sure you've done the steps under this section of the docs:
https://github.com/juju/juju/blob/master/CONTRIBUTING.md#godeps
go-get by default pulls from the head of every git repo, but to keep our
builds repeatable, we use godeps to pin the revision of each repo, which
can occasionally mean that HEAD of master may not compile if someone has
broken something in a dependent repo.

Also, we don't officially support go 1.5 yet, though I believe the build
works and almost all the tests work (there are a small handful of tests
that fail in go 1.5 IIRC) but that wouldn't cause the compile issues
you're getting here.

On Thu, Dec 17, 2015 at 2:10 PM Neale Ferguson  wrote:

> I am attempting to build juju for the 1st time using the instructions at
> https://github.com/juju/juju and appear to have all the dependencies
> installed. I am using golang 1.5. I have searched this month's mailing
> list archives (juju and juju-dev) and googled the symptoms and got
> nothing. I am getting the following when building:
>
> src/github.com/juju/juju/provider/gce/google/instance.go:212: cannot use
> value (type string) as type *string in field value
> src/github.com/juju/juju/provider/gce/google/instance.go:228: cannot use
> item.Value (type *string) as type string in assignment
>
> src/github.com/juju/juju/provider/lxd/lxdclient/client_raw.go:23: cannot
> use (*lxd.Client)(nil) (type *lxd.Client) as type rawClientWrapperFull in
> assignment:
> *lxd.Client does not implement rawClientWrapperFull (wrong type for
> MigrateFrom method)
> have MigrateFrom(string, string, map[string]string, int,
> map[string]string, shared.Devices, []string, string, bool) (*lxd.Response,
> error)
> want MigrateFrom(string, string, map[string]string,
> map[string]string,
> []string, string, bool) (*lxd.Response, error)
>
> src/github.com/juju/juju/provider/azure/environ.go:126: undefined:
> resources.Group
> src/github.com/juju/juju/provider/azure/instance.go:224: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network".SecurityRuleProtocolTCP
> src/github.com/juju/juju/provider/azure/instance.go:226: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network".SecurityRuleProtocolUDP
> src/github.com/juju/juju/provider/azure/instance.go:336: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network".SecurityRuleProtocolTCP
> src/github.com/juju/juju/provider/azure/instance.go:338: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network".SecurityRuleProtocolUDP
> src/github.com/juju/juju/provider/azure/networking.go:65: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network".SecurityRuleProtocolTCP
> src/github.com/juju/juju/provider/azure/networking.go:284: undefined:
> "github.com/Azure/azure-sdk-for-go/arm/network".SecurityRuleProtocolTCP
>
> src/github.com/juju/juju/provider/azure/utils.go:15: cannot use
>  (type **map[string]*string) as type *map[string]*string in
> return argument
>
> src/github.com/juju/juju/provider/azure/vmextension.go:68: cannot use
>  (type *map[string]*string) as type
> *map[string]interface {} in field value
>
>
> Before I go too far down the rabbit hole trying to determine the problem,
> I was wondering if either this is a known problem or something I am doing
> incorrectly for the build?
>
> I am building this on a CentOS 7.1 system.
>
> Neale
>
>


Re: Starting a new CI test

2015-12-17 Thread Nate Finch
That is amazing and awesome. Thanks, Martin and everyone else who helped
with this!  Lowering the barrier of entry to writing CI tests can only be a
good thing.

On Thu, Dec 17, 2015 at 1:00 PM Martin Packman 
wrote:

> I just added a cheaty way of getting started on a new CI test.
>
> $ make new-assess name=new_feature
>
> With 'new_feature' as the name you want for your test. This generates
> a new test script, and some unit tests to go along with it, which you
> can run as normal with:
>
> $ make test p=*new_feature*
>
> The QA team has done a fair bit of work of late both simplifying code
> and updating older scripts. These templates should prove to be a
> better example than looking at other random tests as we continue to
> refactor things however. Input from our (many) python experts is
> welcome.
>
> Next steps, a test runner that handles both environment setup and
> annoying problems like ssh config neatly.
>
> Martin
>


Re: Comprehensive reviews & when to do them?

2015-12-14 Thread Nate Finch
I think that as long as we make it clear what kind of review we're asking
for, and make sure that we really do follow up on review comments, that we
can have the best of both worlds.  I do think that reviewing once all the
code is done is kind of fruitless... there's a lot more impetus to just
leave the code as it is, since so much time has been invested in it, and
there's often just so much code that really reviewing it would take all day
if not multiple days.

Reviewing in chunks is a lot more reasonable, it just takes some awareness
that you can't expect everything to be perfect and finished (if you want
perfect and finished, wait til it's a 16000 line code drop from the feature
branch).

But it definitely takes discipline on the developer's part to ensure that
they actually go back and address whatever review comments were left, even
if they can't get to everything before they merge to the feature branch.

On Mon, Dec 14, 2015 at 8:22 AM roger peppe 
wrote:

> On 14 December 2015 at 15:39, Katherine Cox-Buday
>  wrote:
> > Hey All,
> >
> > I think we have a mis-alignment in how we currently do reviews and
> feature
> > branches.
> >
> > We've switched over to feature-branches which is great and has allowed
> > Moonstone to land "good enough" code into our feature branch to support a
> > bi-weekly demo and solicit feedback. At the same time, I feel like we're
> > wasting people's time asking for a +1 on code that is not intended to be
> > landed into master. Often this code will take shortcuts or stub out some
> > code, and as the lead I'll make a judgment call to circle-back later.
> > Reviewers don't necessarily know this.
> >
> > Conversely, when we go to land the feature branch into master, these PRs
> are
> > generally rubber-stamped.
> >
> > I feel like maybe we have this backwards?
>
> I feel your pain.
>
> But the other side of this coin is that feature branches can be very big,
> so it's very difficult to give them a comprehensive review when merging
> them into master.
>
> For example, this feature branch (https://github.com/juju/juju/pull/3590)
> involved more than 16000 lines of changes ("diff" prints 30616 lines).
> I wouldn't have any trust in the results of a code review of all that
> code at once.
>
> I think that if you're going to implement features that you don't want
> comprehensively reviewed, it's best to treat them as "spikes" and keep
> them out of the juju repo entirely, then try to land them in manageable
> chunks on the feature branch (or master).
>
> In my view, an important attribute of the feature branches is that they
> are solidly reviewed and thus OK to land without full review of the entire
> diff.
>
> That said, I believe that it is *also* important that people do try
> to review feature branches when they merge, if at all reasonable. It's
> often the first time that people not directly involved with the feature
> branch actually look at the implications for master.
>
>   cheers,
> rog.
>


Re: Be very careful in assumptions in tests

2015-12-03 Thread Nate Finch
I'll definitely +1 the need for gc.HasLen ... I've seen a ton of panics in
tests if the code starts erroneously returning nil slices.  Obviously this
is less bad, since the tests still fail, but they're really ugly, annoying
failures.

And +1000 to making tests that fail before fixing the code (or at least,
purposely re-breaking the code if you forget and fix the bug first)... I've
caught so many invalid tests that way... it is the only way to prove to
yourself that your tests actually work correctly, both in proving correct
behavior and in signalling bad behavior.
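For illustration, here is a minimal sketch of the failure mode (the names
are made up; in gocheck terms the length check would be
c.Assert(items, gc.HasLen, 1)):

```go
package main

import "fmt"

// fetchItems stands in for a call that is expected to return exactly
// one item, but which has regressed to returning a nil slice.
func fetchItems() []string { return nil }

// checkItems mimics a test whose asserts live inside a loop: with an
// empty slice the loop body -- and every assert in it -- never runs,
// so the test silently "passes".
func checkItems() (assertsRun int) {
	for range fetchItems() {
		assertsRun++ // imagine the c.Assert calls here
	}
	return assertsRun
}

func main() {
	fmt.Println("asserts run:", checkItems())
	// An up-front length check catches the regression immediately.
	if len(fetchItems()) != 1 {
		fmt.Println("FAIL: expected 1 item, got", len(fetchItems()))
	}
}
```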

On Thu, Dec 3, 2015 at 5:24 PM Tim Penhey  wrote:

> Hi folks,
>
> I came across an interesting bug yesterday and during investigation
> found that there was a very comprehensive test that covered the situation.
>
> The problem is that the asserts were not actually running.
>
> The asserts were inside a loop where the expectation was that the loop
> would run exactly once as a previous call returned an array of items
> with one value in it.
>
> Except it didn't.
>
> It returned an empty list.
>
> So, if you see something like this, a simple line like:
>
>   c.Assert(someList, gc.HasLen, 1)
>
> Is sufficient to have caught this particular issue.
>
>
> Remember a key step when writing tests is to have the test fail first,
> then make it pass.
>
> One trick I use is to assert checks against data I know is wrong. If the
> test passes then I know I have other problems.
>
> Tim
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>


Re: apiserver authorizers

2015-12-02 Thread Nate Finch
Auth on the api is a big mystery to me. Is there a document on the expected
behavior and the functions and types that back it up?

For example, you said "an environ provisioner may have wide-ranging powers,
but that's no reason to let it see or modify container machines that are
not its direct responsibility" ... I don't really know what that means.
This sounds like authorization, but how do you know who is calling your api
or what they're supposed to be allowed to do?
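For what it's worth, my rough understanding of the pattern William is
describing looks something like this (a sketch with invented names, not
the actual common.Authorizer API):

```go
package main

import (
	"errors"
	"fmt"
)

// Authorizer is a simplified stand-in for the apiserver's authorizer:
// it reports properties of whoever opened the API connection.
type Authorizer interface {
	AuthMachineAgent() bool
}

type facade struct{ auth Authorizer }

// newFacade checks an auth property at construction time instead of
// dropping the authorizer on the floor -- callers that aren't machine
// agents never get a facade at all.
func newFacade(auth Authorizer) (*facade, error) {
	if !auth.AuthMachineAgent() {
		return nil, errors.New("permission denied")
	}
	return &facade{auth: auth}, nil
}

type fakeAuth struct{ machine bool }

func (a fakeAuth) AuthMachineAgent() bool { return a.machine }

func main() {
	if _, err := newFacade(fakeAuth{machine: false}); err != nil {
		fmt.Println("client connection rejected:", err)
	}
	if _, err := newFacade(fakeAuth{machine: true}); err == nil {
		fmt.Println("machine agent accepted")
	}
}
```

The facade also keeps the authorizer so finer-grained checks (e.g. "is
this container my responsibility?") can happen per-call.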

On Wed, Dec 2, 2015, 6:36 AM William Reade 
wrote:

> The case I spotted yesterday was here:
> http://reviews.vapour.ws/r/3243/diff/1/?file=167369#file167369line41
>
> and in general:
>
>   * seeing `_ common.Authorizer` in a parameter list is a sign you're DIW.
>   * failing to check some auth property when creating the facade is a sign
> you're DIW.
>   * failing to pass the auth on to the facade type itself *might* be OK,
> but should be viewed with some suspicion -- for example:
>   * an environ provisioner may have wide-ranging powers, but that's no
> reason to let it see or modify container machines that are not its direct
> responsibility
>   * right now, many user-facing facades don't need further
> authorization once they know it's an authenticated client, but that won't
> last forever
>
> helpful?
>
> Cheers
> William
>
> On Wed, Dec 2, 2015 at 12:26 PM, roger peppe 
> wrote:
>
>> Could you link to the offending change(s) please, so we
>> can see what doing it wrong looks like?
>>
>> On 2 December 2015 at 09:28, William Reade 
>> wrote:
>> > I just noticed that the unitassigner facade-constructor drops the
>> authorizer
>> > on the floor; and I caught a similar case in a review yesterday (that
>> had
>> > already been LGTMed by someone else).
>> >
>> > Doing that means that *any* api connection can use the thus-unprotected
>> > facade -- clients, agents, and malicious code running in a compromised
>> > machine and using the agent credentials. I don't think we have any APIs
>> > where this is actually a good idea; the best I could say about any such
>> case
>> > is that it's not *actively* harmful *right now*. But big exploits are
>> made
>> > of little holes, let's make an effort not to open them in the first
>> place.
>> >
>> > Moonstone, please fix the unitassigner facade ASAP; everyone else, be
>> told,
>> > and keep an extra eye out for this issue in reviews :).
>> >
>> > Cheers
>> > William
>> >
>> > --
>> > Juju-dev mailing list
>> > Juju-dev@lists.ubuntu.com
>> > Modify settings or unsubscribe at:
>> > https://lists.ubuntu.com/mailman/listinfo/juju-dev
>> >
>>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>


Re: utils/fslock needs to DIAF

2015-12-01 Thread Nate Finch
Certainly using Windows' file system lock is appropriate for locking its
files.  I thought the case we were talking about was just abusing that
ability as a generic cross-process lock.

I wasn't aware of how configstore was using fslock.

On Tue, Dec 1, 2015 at 11:58 AM roger peppe <roger.pe...@canonical.com>
wrote:

> On 1 December 2015 at 16:43, Nate Finch <nate.fi...@canonical.com> wrote:
> > I think half the problem is that someone named the package fslock and not
> > oslock, so we're stuck asking the wrong question.
> >
> > If the question is "How do I acquire an OS-level lock for a process?"
> The
> > answer on Windows is "Use a named mutex".
>
> That's not the question we need to answer here, for the configstore case
> anyway. In that case, we really do want to protect access to a shared data
> store in the filesystem, so the term "fslock" is appropriate I think.
>
> >  Dave seems to be saying that the
> > only problem with unix domain sockets is if you try to think about them
> like
> > files... which may be a result of us thinking too much about the solution
> > and not the problem. (again, I'm not an expert, you guys can duke out
> what
> > the best solution is)  I think using hackery with the filesystem locking
> on
> > Windows is a mistake.
>
> Is there something wrong with using the Windows file-exclusive-open
> semantics to implement a lock associated with a entity in the file system?
> Isn't that what it's for?
>
>
> > On Tue, Dec 1, 2015 at 4:10 AM roger peppe <roger.pe...@canonical.com>
> > wrote:
> >>
> >> So I'm with axw on this one - flock seems like it is a reasonable tool
> for
> >> the job here. FWIW a Unix domain socket also suffers from the "won't
> >> work across NFS" problem. Abstract unix domain sockets also
> >> have the problem that they won't work with long file paths (what
> >> were they thinking?)
> >>
> >> We have been using github.com/camlistore/lock and although it's not
> >> totally ideal (see https://github.com/camlistore/lock/issues/10 )
> >> it does the job. Note that it's non-blocking, so a blocking
> >> layer above it is necessary, for example see the lockFile
> >> function in
> >> https://github.com/juju/persistent-cookiejar/blob/master/serialize.go
> >>
> >> The Windows named mutex thing does sound interesting because
> >> it's a blocking API, which is actually what we need. On the other
> >> hand, under Windows, files opened for writing are locked by default
> >> anyway, so it might be easier just to leverage that property.
> >> The camlistore lock code could use some improvement for the
> >> Windows platform - we could either fork it, or bradfitz would
> >> probably be amenable to a PR.
> >>
> >>   cheers,
> >> rog.
> >>
> >> On 1 December 2015 at 04:12, Nate Finch <nate.fi...@canonical.com>
> wrote:
> >> > I'm not a linux expert, but definitely a named mutex is exactly the
> >> > correct
> >> > thing to use for Windows.  Using something else for this purpose would
> >> > be
> >> > very surprising to a Windows dev and much more likely to be buggy.  I
> >> > don't
> >> > see any reason we need to use the exact same implementation on all
> OSes
> >> > if
> >> > there is something that does exactly the right thing without any
> >> > finagling.
> >> > FWIW, mutexes do die with the process:
> >> >
> >> >
> https://msdn.microsoft.com/en-us/library/windows/desktop/ms682411(v=vs.85).aspx
> >> >
> >> > On Mon, Nov 30, 2015 at 8:29 PM Andrew Wilkins
> >> > <andrew.wilk...@canonical.com> wrote:
> >> >>
> >> >> On Tue, Dec 1, 2015 at 9:07 AM David Cheney
> >> >> <david.che...@canonical.com>
> >> >> wrote:
> >> >>>
> >> >>> http://0pointer.de/blog/projects/locking.html
> >> >>>
> >> >>> In short, opening the same file twice and asserting a lock on it
> will
> >> >>> succeed.
> >> >>
> >> >>
> >> >> Thanks. The article is a bit exasperated. Yes, there are problems to
> be
> >> >> aware of, but it doesn't make them unusable in all cases.
> >> >>  - Juju agents don't get installed onto NFS file systems, so doesn't
> >> >> matter for the agents.
> >> >>  - We're in full control

Re: utils/fslock needs to DIAF

2015-12-01 Thread Nate Finch
I think half the problem is that someone named the package fslock and not
oslock, so we're stuck asking the wrong question.

If the question is "How do I acquire an OS-level lock for a process?"  The
answer on Windows is "Use a named mutex".  Dave seems to be saying that the
only problem with unix domain sockets is if you try to think about them
like files... which may be a result of us thinking too much about the
solution and not the problem. (again, I'm not an expert, you guys can duke
out what the best solution is)  I think using hackery with the filesystem
locking on Windows is a mistake.

On Tue, Dec 1, 2015 at 4:10 AM roger peppe <roger.pe...@canonical.com>
wrote:

> So I'm with axw on this one - flock seems like it is a reasonable tool for
> the job here. FWIW a Unix domain socket also suffers from the "won't
> work across NFS" problem. Abstract unix domain sockets also
> have the problem that they won't work with long file paths (what
> were they thinking?)
>
> We have been using github.com/camlistore/lock and although it's not
> totally ideal (see https://github.com/camlistore/lock/issues/10 )
> it does the job. Note that it's non-blocking, so a blocking
> layer above it is necessary, for example see the lockFile
> function in
> https://github.com/juju/persistent-cookiejar/blob/master/serialize.go
>
> The Windows named mutex thing does sound interesting because
> it's a blocking API, which is actually what we need. On the other
> hand, under Windows, files opened for writing are locked by default
> anyway, so it might be easier just to leverage that property.
> The camlistore lock code could use some improvement for the
> Windows platform - we could either fork it, or bradfitz would
> probably be amenable to a PR.
>
>   cheers,
> rog.
>
> On 1 December 2015 at 04:12, Nate Finch <nate.fi...@canonical.com> wrote:
> > I'm not a linux expert, but definitely a named mutex is exactly the
> correct
> > thing to use for Windows.  Using something else for this purpose would be
> > very surprising to a Windows dev and much more likely to be buggy.  I
> don't
> > see any reason we need to use the exact same implementation on all OSes
> if
> > there is something that does exactly the right thing without any
> finagling.
> > FWIW, mutexes do die with the process:
> >
> https://msdn.microsoft.com/en-us/library/windows/desktop/ms682411(v=vs.85).aspx
> >
> > On Mon, Nov 30, 2015 at 8:29 PM Andrew Wilkins
> > <andrew.wilk...@canonical.com> wrote:
> >>
> >> On Tue, Dec 1, 2015 at 9:07 AM David Cheney <david.che...@canonical.com
> >
> >> wrote:
> >>>
> >>> http://0pointer.de/blog/projects/locking.html
> >>>
> >>> In short, opening the same file twice and asserting a lock on it will
> >>> succeed.
> >>
> >>
> >> Thanks. The article is a bit exasperated. Yes, there are problems to be
> >> aware of, but it doesn't make them unusable in all cases.
> >>  - Juju agents don't get installed onto NFS file systems, so doesn't
> >> matter for the agents.
> >>  - We're in full control of the files we're locking, we're not locking
> >> some file like /etc/passwd where some other random bit of code in the
> >> process is going to open/close it and release the lock by accident.
> >>  - We don't need byte-range locking.
> >>
> >> So only the original uncertainty remains: do we care about clients
> running
> >> their home directory on an NFS share, where the NFS *server* is too old
> to
> >> support flock?
> >>
> >> Maybe a named mutex on Windows and a domain socket on *NIX is the way to
> >> go. I'm not dead set on flock; I just want something that is simple,
> robust
> >> and portable.
> >>
> >> Cheers,
> >> Andrew
> >>
> >>> On Tue, Dec 1, 2015 at 12:04 PM, Andrew Wilkins
> >>> <andrew.wilk...@canonical.com> wrote:
> >>> > On Tue, Dec 1, 2015 at 8:57 AM David Cheney
> >>> > <david.che...@canonical.com>
> >>> > wrote:
> >>> >>
> >>> >> fcntl won't work in threaded go applications, it barely works in non
> >>> >> threaded applications.
> >>> >
> >>> >
> >>> > This is news to me. I've used it plenty in the past, in
> multi-threaded
> >>> > programs. Please point me at relevant literature that explains where
> it
> >>> > doesn't work.
> >>> >
> >>> >>
> >>> >> I'm 

Re: lxd provider in master

2015-11-23 Thread Nate Finch
I guess we don't error out when we see duplicate environment names?
Shouldn't we?

On Mon, Nov 23, 2015 at 6:47 PM Tim Penhey  wrote:

> >> In environments.yaml file I added a very simple clause:
> >>
> >>lxd:
> >>   type: lxd
> >
>
> found that further down the file I found this:
>
> lxd:
>type: local
>container: lxd
>
> That was causing problems :-)
>


Re: New feature for 1.26 (master), $(JUJU_HOME)/aliases

2015-10-27 Thread Nate Finch
You're not selling me on this ;)

On Tue, Oct 27, 2015, 4:01 PM Tim Penhey  wrote:

> I can't help but draw parallels to the plugin system.
>
> There was considerable resistance to adding the ideas of plugins to
> Juju. It was added complexity that no one needed and could be done
> simply using existing executable scripts, like juju-foo.
>
> Tim
>
> On 28/10/15 07:33, Cory Johns wrote:
> > You can accomplish most of this with the existing plugin system.  You
> > can't override existing commands, but you can easily create thin
> > wrappers around them with your desired default args, and given the
> > discussion around --no-aliases, it seems like this is actually a
> > benefit.  And plugins provide an easy way to add more intelligent
> > handling of args, and also come with zero additional dev burden on core.
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>


Re: On Call Reviewer Duties

2015-10-21 Thread Nate Finch
Sounds good to me.

On Wed, Oct 21, 2015, 6:39 AM Matthew Williams <
matthew.willi...@canonical.com> wrote:

> Hey Folks,
>
> I propose that on call reviewers as part of their job should also review
> the issues list on https://github.com/juju/juju/issues and attempt to
> triage anything that's there. We moved to github to improve collaboration,
> but the issues list is often left ignored. If every OCR took a look at it
> we'd get it under control very quickly
>
> Thoughts
>
> Matty
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>


Re: Pointer Receivers on Error() and String() methods

2015-08-24 Thread Nate Finch
That's fair.  I wasn't sure if recovering the panic in errors would be a
good idea, since it could hide programmer error (like what I'd done).  But
I'm fine with that solution.

On Mon, Aug 24, 2015 at 3:22 AM David Cheney david.che...@canonical.com
wrote:

 Yup, I agree with Rog. This is fixing the problem in the wrong place.

 On Mon, Aug 24, 2015 at 5:14 PM, roger peppe roger.pe...@canonical.com
 wrote:
  I'm afraid I'm not convinced. Declaring the Error method on the
  pointer receiver is idiomatic (just grep for ' Error\(' in the Go source)
  and is a useful indicator that the error value is always intended
  to be a pointer.
 
  There's a much simpler fix for this: let the errors package
  recover from this itself. We can just make Err.Error call fmt.Sprint
  to get the error message (a one line change)
 
  Then a wrapped nil error will print nil just like normal nil
  errors.
 
 
  On 20 August 2015 at 03:45, Nate Finch nate.fi...@canonical.com wrote:
  tl;dr:  Don't.  Use a value receiver.  99% of the time you can just
 delete
  the * on the receiver and it'll still work.  If you really must use a
  pointer, please handle the case where you're called with a nil receiver.
 
  I spent most of today trying to understand why my new hook command was
  producing this output:
 
  error: %!v(PANIC=runtime error: invalid memory address or nil pointer
  dereference)
 
  It took me a while to figure out that this is what fmt.Printf("error:
 %v\n",
  err) outputs when err's Error() method panics.  If you're using %s or
 %v to
  print a value (or use Println which implicitly uses %v), then fmt will
 look
  for a String() method or Error() method on the value to call, and use
 the
  output of that for the value's string output.  If that method panics,
 fmt
  prints the panic in the way you see above (everything after the PANIC=).
 
  Of course, the problem here is that there's no type being written, and
 since
  it was an error interface, it could be almost anything.  Using %#v skips
  calling the Error/String methods and prints out the values in a go
 format,
  which told me this was a juju/errors.Err value, wrapping an API params
 Error
  value which was a nil pointer.  When we call Error() on an errors.Err,
 we
  call Error() on the cause explicitly, which was panicking.
 
  Here's a minimal reproduction http://play.golang.org/p/ncNVrza-hn
 (you'll
  have to copy it to a local file and go run it, since the playground
 won't
  run code external to the stdlib).
 
  So what's sort of interesting is that printing the error before it gets
  Traced works fine, but after the trace it is not fine.  The errors.Err's
  Error() function looks like it's explicitly calling the Error() method
 on
  the wrapped Cause error, which is probably the problem.  fmt.Printf
 must use
  some reflection magic to avoid doing that.
 
  Now, the root cause of this particular bug is actually my own mistake.
 Line
  21 should check if orig is nil and then assign nil explicitly to err if
 it
  is.  Then errors.Trace would be able to tell that the error is nil and
 would
  just return nil itself, instead of thinking it's a valid error and
 wrapping
  it.
 
  However, you can sidestep this entirely by doing one of two things:
 either
  just make the Error() (or String()) method use a value receiver.. in
 which
  case this code would produce this output:
 
  %!v(PANIC=value method main.MyError.Error called using nil *MyError
 pointer)
 
  (you can try it with the repro code I linked to)
 
  This printout is a lot more helpful and useful and obvious than the
 other
  nil pointer printout.
 
  OR
 
  Just handle a nil receiver:
 
  func (e *MyError) Error() string {
  if e == nil {
  return "nil MyError"
  }
  return e.Message
  }
 
  (note that it is dereferencing the pointer to e to access the Message
 field
  which causes the panic. Calling a method on a nil pointer is totally
 fine
  and will not panic if the code inside does not try to dereference the
 pointer
  to get to a field).
 
  Grepping through our code, I see a lot of pointer receivers on Error and
  String methods (45 and 77 respectively).  I think we should at least
 change
  all of these to be value methods (unless that's not possible).  That's a
  trivial change, and gives a much more useful printout when the pointer
 is
  nil.
 
  -Nate
 
  --
  Juju-dev mailing list
  Juju-dev@lists.ubuntu.com
  Modify settings or unsubscribe at:
  https://lists.ubuntu.com/mailman/listinfo/juju-dev
 
 
  --
  Juju-dev mailing list
  Juju-dev@lists.ubuntu.com
  Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/juju-dev



Pointer Receivers on Error() and String() methods

2015-08-19 Thread Nate Finch
tl;dr:  Don't.  Use a value receiver.  99% of the time you can just delete
the * on the receiver and it'll still work.  If you really must use a
pointer, please handle the case where you're called with a nil receiver.

I spent most of today trying to understand why my new hook command was
producing this output:

error: %!v(PANIC=runtime error: invalid memory address or nil pointer
dereference)

It took me a while to figure out that this is what fmt.Printf("error:
%v\n", err) outputs when err's Error() method panics.  If you're using %s
or %v to print a value (or use Println which implicitly uses %v), then fmt
will look for a String() method or Error() method on the value to call, and
use the output of that for the value's string output.  If that method
panics, fmt prints the panic in the way you see above (everything after the
PANIC=).

Of course, the problem here is that there's no type being written, and
since it was an error interface, it could be almost anything.  Using %#v skips
calling the Error/String methods and prints out the values in a go format,
which told me this was a juju/errors.Err value, wrapping an API params
Error value which was a nil pointer.  When we call Error() on an
errors.Err, we call Error() on the cause explicitly, which was panicking.

Here's a minimal reproduction http://play.golang.org/p/ncNVrza-hn (you'll
have to copy it to a local file and go run it, since the playground won't
run code external to the stdlib).

So what's sort of interesting is that printing the error before it gets
Traced works fine, but after the trace it is not fine.  The errors.Err's
Error() function looks like it's explicitly calling the Error() method on
the wrapped Cause error, which is probably the problem.  fmt.Printf must
use some reflection magic to avoid doing that.

Now, the root cause of this particular bug is actually my own mistake.
Line 21 should check if orig is nil and then assign nil explicitly to err
if it is.  Then errors.Trace would be able to tell that the error is nil
and would just return nil itself, instead of thinking it's a valid error
and wrapping it.

However, you can sidestep this entirely by doing one of two things:  either
just make the Error() (or String()) method use a value receiver.. in which
case this code would produce this output:

%!v(PANIC=value method main.MyError.Error called using nil *MyError pointer)

(you can try it with the repro code I linked to)

This printout is a lot more helpful and useful and obvious than the other
nil pointer printout.

OR

Just handle a nil receiver:

func (e *MyError) Error() string {
	if e == nil {
		return "nil MyError"
	}
	return e.Message
}

(note that it is dereferencing the pointer e to access the Message field
which causes the panic. Calling a method on a nil pointer is totally fine
and will not panic if the code inside does not try to dereference the
pointer to get to a field).
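Putting that together, a minimal runnable version of the guarded-receiver
fix (same shape as the snippet above, with a main to show the safe
output):

```go
package main

import "fmt"

// MyError mirrors the repro: an error type whose *MyError pointer can
// end up nil inside a non-nil error interface.
type MyError struct{ Message string }

// Guarding the nil receiver makes printing safe instead of letting fmt
// report a PANIC= in the formatted output.
func (e *MyError) Error() string {
	if e == nil {
		return "nil MyError"
	}
	return e.Message
}

func main() {
	var p *MyError
	var err error = p // a non-nil interface wrapping a nil pointer
	fmt.Printf("error: %v\n", err) // prints "error: nil MyError"
}
```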

Grepping through our code, I see a lot of pointer receivers on Error and
String methods (45 and 77 respectively).  I think we should at least change
all of these to be value methods (unless that's not possible).  That's a
trivial change, and gives a much more useful printout when the pointer is
nil.

-Nate


Re: New dependency

2015-08-06 Thread Nate Finch
Just some context - Ben Johnson is an active member of the Go community,
works in Go every day, and has several popular packages in use by many
third parties.  I would not worry about depending on his work from a
quality or responsiveness aspect.

As with John, I haven't looked at the exact code, so I don't know the
specifics, I've just worked with Ben and his code in the past and been very
happy with it.

On Wed, Aug 5, 2015 at 6:29 AM John Meinel j...@arbash-meinel.com wrote:

 The one caveat is that a Clock isn't particularly complex and it may be
 that the other implementation is focusing on stuff we don't care about. I
 don't have a particular stake here and haven't evaluated the specifics.
 Certainly if you do bring this in, I'd like you to make sure to replace
 what we have in place so far. The worst of both worlds is to have 2
 slightly diverging implementations.

 John
 =:-


 On Wed, Aug 5, 2015 at 4:00 AM, Andrew Wilkins 
 andrew.wilk...@canonical.com wrote:

 On Wed, Aug 5, 2015 at 9:47 AM Tim Penhey tim.pen...@canonical.com
 wrote:

 On 04/08/15 17:54, Andrew Wilkins wrote:
  Hi,
 
  I'd like to add a new third-party dependency for testing:
  github.com/benbjohnson/clock http://github.com/benbjohnson/clock. I
  intend to use this for testing forthcoming retry scheduling logic in
 the
  storage provisioner. Any objections?

 I know that William has been working on mocking time a different way for
 some of the leadership tests.

 We should make sure we have consistency across this.

 Tim


 The approach William has taken is actually pretty much identical to what
 I'm doing. The primary difference is that he has implemented a mock Clock
 himself. The mock does not implement After(), which I do need in my code. I
 can implement this myself, but I'd rather we just used something off the
 shelf.

 Cheers,
 Andrew

 --
 Juju-dev mailing list
 Juju-dev@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/juju-dev


 --
 Juju-dev mailing list
 Juju-dev@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/juju-dev



Re: Uniter tests for update-status hook - BLOCKER

2015-07-20 Thread Nate Finch
On Mon, Jul 20, 2015 at 12:42 AM Tim Penhey tim.pen...@canonical.com
wrote:


 Aside from all this work, it is becoming REALLY IMPORTANT that we stop
 writing terrible, wasteful, full integration type tests when what we
 really care about testing is some aspect of uniter internals. I know
 that it is just simpler to copy what is there and make more, but it is
 better to write smaller, targeted tests that just test what you are
 wanting to assert.


+100

If you start writing tests and you reach for JujuConnSuite, stop.  You
should be able to get to 80% code coverage (and near 100% logic coverage)
without using anything outside your package.  Just provide some stub
implementations of interfaces and assert that the logic run against those
interfaces is correct.  If you have too many dependencies on code outside
your package, start writing interfaces and refactor your code to use them
instead of explicitly relying on external types.
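As a sketch of what I mean (invented names -- not real uniter types),
the logic under test depends only on a narrow interface, so a stub
replaces the whole server:

```go
package main

import "fmt"

// UnitAssigner is the narrow interface the logic depends on, instead
// of a concrete API client or a JujuConnSuite-backed server.
type UnitAssigner interface {
	AssignUnit(unit string) error
}

// assignAll is the logic under test: it returns the units that failed.
func assignAll(ua UnitAssigner, units []string) (failed []string) {
	for _, u := range units {
		if err := ua.AssignUnit(u); err != nil {
			failed = append(failed, u)
		}
	}
	return failed
}

// stubAssigner records calls and fails on demand -- no server, no
// integration fixture, just the package's own logic.
type stubAssigner struct {
	calls  []string
	failOn string
}

func (s *stubAssigner) AssignUnit(u string) error {
	s.calls = append(s.calls, u)
	if u == s.failOn {
		return fmt.Errorf("cannot assign %s", u)
	}
	return nil
}

func main() {
	stub := &stubAssigner{failOn: "mysql/1"}
	failed := assignAll(stub, []string{"mysql/0", "mysql/1"})
	fmt.Println("calls:", len(stub.calls), "failed:", failed)
}
```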


Re: Uniter tests for update-status hook - BLOCKER

2015-07-20 Thread Nate Finch
On Mon, Jul 20, 2015 at 10:57 AM roger peppe rogpe...@gmail.com wrote:

 On 20 July 2015 at 14:11, Martin Packman martin.pack...@canonical.com
 wrote:
  The logs are giant,
  the actual failure lines tend to be non-informative with the real
  cause several screens up in the log, multiple tests have basically the
  same problems with common code...

 FWIW I often delete all lines containing the string [LOG] before
 looking at the output - it helps me to see the wood from the trees.


Also FWIW, this is why I wrote https://github.com/natefinch/nolog - it
filters out the spammy logging by default, and offers a few other niceties
(many that Horacio added). It's a go application, so just 'go get' it.

I know that doesn't help when reading CI output, but it is super handy when
reproducing a problem locally.  (Yes you could run some combination of
linux CLI invocations to do the same thing - this is easier).


Re: Uniter tests for update-status hook - BLOCKER

2015-07-20 Thread Nate Finch
On Mon, Jul 20, 2015 at 3:05 PM roger peppe roger.pe...@canonical.com
wrote:

 On 20 July 2015 at 19:41, Nate Finch nate.fi...@canonical.com wrote:
  You should be able to get to 80% code coverage (and near 100% logic
 coverage)
  without using anything outside your package.

 Shouldn't those numbers be the other way around? I don't see how
 you could possibly get to ~100% of logic coverage if only 80%
 of the code is covered.


Yes, you're right, any runnable code is logic.  But there's often some
uninteresting glue code that is not really the meat of the logic, that may
be hard to test and unlikely to break.  I guess that's really too vague as
a guideline, though.  My point is - you should be able to test most of your
code without having to spin up a server - even a fake test server.


Re: A cautionary tale about go test -race -cover

2015-07-16 Thread Nate Finch
Thanks for the info, Dimiter.  That is almost certainly going to bite
someone else at some point.
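For anyone wondering why -covermode=atomic helps: pre-1.3 -cover
instrumentation bumped plain global counters, which the race detector
flags whenever tests exercise code from multiple goroutines; atomic
mode uses atomic increments instead. A rough sketch (not the actual
instrumentation):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countConcurrently increments a shared counter from n goroutines the
// way -covermode=atomic counts covered lines: with atomic adds.
func countConcurrently(n int) uint32 {
	var counter uint32
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// A plain counter++ here -- what the old instrumentation
			// effectively did -- would be a data race under -race.
			atomic.AddUint32(&counter, 1)
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	fmt.Println("count:", countConcurrently(100))
}
```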

On Thu, Jul 16, 2015 at 6:35 AM Dimiter Naydenov 
dimiter.nayde...@canonical.com wrote:


 Hey all,

 tl;dr; When running $ go test -race -cover with go < 1.3, and you get
 data races, try adding -covermode=atomic as well to see if they are
 caused by the go coverage tool itself. In go 1.3+ -covermode defaults
 to atomic, so yet another reason to really move quickly to 1.5 across
 the board ASAP.

 I've discovered the hard way the go race detector doesn't play well
 with the go coverage tool in golang versions prior to 1.3. With -cover
 (and/or -coverprofile=filename, which assumes -cover), the code
 instrumentation generated by the coverage tool uses global variables
 to track which lines are covered and how many times. This causes the
 race detector to report data races in confusing locations (e.g. I had
 a whole bunch of them in the provider/ec2 package), which are not
 happening without -cover enabled. I've discovered the issue very
 easily by adding -test.work argument to go test. This causes go
 test to print the temporary work dir used and also keep it after go
 test completes. Then it's a simple matter of inspecting the work dir
 and the actual source files in there, which include race and coverage
 instrumentation code added. The DATA RACE output includes paths in
 that work dir and line numbers exactly matching the code generated by
 -cover.

 Apparently, adding -covermode=atomic along with -cover and -race,
 works around the problem in both golang versions < 1.3 and 1.3+, by
 using atomic inc ops rather than global non-goroutine-safe vars.

 An interesting article describing the issue with examples:
 http://herman.asia/running-the-go-race-detector-with-cover

 Hopefully this will be helpful to somebody else and save them a few
 wasted hours trying to figure out why the result of go test -race
 -cover has data races reported, while go test -race or go test
 -cover for the same tests shows no races at all.

 --
 Dimiter Naydenov dimiter.nayde...@canonical.com
 Juju Core Sapphire team http://juju.ubuntu.com

 --
 Juju-dev mailing list
 Juju-dev@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/juju-dev



Re: Recurring fails on windows tests and the recent CentOS failure

2015-07-16 Thread Nate Finch
I agree that this is the correct thing to do.  However, it is error prone
and non-obvious.  Having a check that can ensure we do it correctly would
definitely help avoid this problem in the future.  Not needing to do it at
all would be even better, but I understand that changing 100,000 lines of
tests is not a viable solution.  But maybe it's something we can think
about for new tests.

And as to what Ian said, that was mostly my point - if there are functions
that don't need setup and teardown, just extract them from test fixtures.


On Wed, Jul 15, 2015 at 8:46 PM Tim Penhey tim.pen...@canonical.com wrote:

 I'm in agreement with Bogdan, Roger and William on this one.

 If your test suite is composed of other suites, and you override the
 default setup or teardown of either the suite or the test, you MUST call
 the respective methods of the embedded suites.

 Roger, if it is easy to write some code to assert this, I would LOVE to
 have that as a test. It is not something I have the ability to write
 quickly (if at all).

 As a rule, you should call the setups in the order you define them in
 the struct, and call teardown in the reverse order.

 Thanks,
 Tim



Re: Recurring fails on windows tests and the recent CentOS failure

2015-07-15 Thread Nate Finch
I think the answer is - if there are methods on a suite that don't require
setup/teardown, maybe they should be moved to standalone functions or
methods on non-suite types, so that we can know what requires
setup/teardown and what doesn't.   That way we can use those functions
without incurring the cost of running all the setups and teardowns.  Then
we can do Roger's suggestion of ensuring that we don't forget to hook up
setup and/or teardown.

On Wed, Jul 15, 2015 at 3:31 PM William Reade william.re...@canonical.com
wrote:

 On Wed, Jul 15, 2015 at 7:19 PM, Nate Finch nate.fi...@canonical.com
 wrote:

 Or maybe we should just code teardown so that if setup isn't called, we
 don't do teardown either.  Lots of times I see people say just include
 basesuite for X method, and since I don't need whatever its setup does, I
 don't call it in my setup.  I don't think that's a programmer error, per se.


 That's cavalier at best, isn't it? If I see an embedded test suite I
 expect to be able to use its full capabilities.

 It's upsetting to discover that your test modifications don't work right
 because someone didn't set the base suite up; to discover that they didn't
 set the base suite up *on purpose* adds significant insult to the
 injury. If you don't need it, don't embed it; if you do need it, please use
 it properly.

 Cheers
 William



 On Wed, Jul 15, 2015 at 7:39 AM roger peppe rogpe...@gmail.com wrote:

 It shouldn't be hard to write some code (using go/types) that
 automatically checks
 for these invariants.


 On 15 July 2015 at 07:05, John Meinel j...@arbash-meinel.com wrote:
 
 
  On Wed, Jul 15, 2015 at 2:44 AM, Bogdan Teleaga
  btele...@cloudbasesolutions.com wrote:
 
  Hello everybody,
 
  Lately I've been noticing a couple of failures regarding a new testing
  feature introduced on windows.
 
  Without going into implementation details too deep, the main idea is
  that whenever a new suite is created that inherits from a base suite
  and a SetUpX function is defined, it needs to call the SetUpX function
  of the base suite. The main reason is that if that's not done, we will
  not do set up for any test, but we will still do tear down. To some
  extent it might be worth adding an annotation to the failure message,
  since it might come up more often and it is not immediately obvious why.
 
 
  That's certainly an expectation of any sort of function that a type
  overloads from an embedded type. If people are overriding SetUp and not
  calling the embedded SetUp, that's going to cause all sorts of bugs (we
  do test suite isolation, logging changes, and HOME directory setup in
  base types). I'm actually surprised that things worked if people are
  doing so. I guess some things like isolation are things you don't
  notice until late, because you end up with files littered where they
  shouldn't be.
 
  John
  =:-
 
 
 
  The bugs caused by this so far are:
  - https://bugs.launchpad.net/juju-core/+bug/1474382
  - https://bugs.launchpad.net/juju-core/+bug/1471941
 
  Another issue is that a recent bugfix stopped CentOS from working
  completely. The series could not be detected anymore because the map
  was changed *and* the test that was using actual data from
  /etc/os-release was modified to mirror this change. Until we get the CI
  for CentOS up and running, but even as a general thing for that matter,
  please consider the old content of the tests and their intention before
  modifying them to fit the changes in the code.
 
  For more details on the CentOS bug:
  https://bugs.launchpad.net/juju-core/+bug/1470150
 
  And that would be all for now, I'll let you get back to the other
 hundred
  emails.
 
  Thanks,
  Bogdan
 
 



Re: Blocking bugs process

2015-07-14 Thread Nate Finch
I think everyone's agreeing here, but maybe the wording just needs to be
clarified somewhere to avoid confusion.

It sounds like bugs which are assigned to a release do not block commits
unless they are marked blockers.  And blockers are determined by "we
wouldn't want *anyone* to upgrade to a version with this bug in it."

I agree that if a blocker is found in an earlier minor version, that
upgrading to a new minor version with the same blocker doesn't make anyone
any worse off.  However, if we don't make it a stop-the-line, then we need
some other way to ensure that the bug actually gets fixed ASAP;
otherwise it could just tag along minor after minor and not get addressed.
I don't think it's unreasonable to just make such a bug a blocker, just to
get it addressed ASAP, even if it is not strictly making things worse than
an earlier version.

On Tue, Jul 14, 2015 at 9:26 AM Aaron Bentley aaron.bent...@canonical.com
wrote:


 On 2015-07-13 07:43 PM, Ian Booth wrote:
  By the definition given
 
  If a bug must be fixed for the next minor release, it is
  considered a ‘blocker’ and will prevent all landing on that
  branch.
 
  that bug and any other that we say we must include in a release
  would block landings. That's the bit I'm having an issue with. I
  think landings need to be blocked when appropriate, but not by that
  definition.

 Here's my rationale:
 1. We have held the principle that our trunk and stable branches
 should always be releaseable.
 2. We have said we should stop-the-line when a branch becomes
 unreleasable.
 3. Therefore, I have concluded that we should stop-the-line when a bug
 is present that makes the branch unreleasable.

 Do you agree with 1 and 2?  I think 3 simply follows from 1 and 2, but
 am I wrong?

  Depends on the changes. I think we should be pragmatic and make
  considered decisions. I guess that's why we have the jfdi flag.

 It's true that the particulars of the bug may matter in deciding
 whether it should block, and that's why there's a process for
 overriding the blocking tag: Exceptions are raised to the release team.

 I think JFDI should be considered a nuclear option.  If you need it,
 it's good that it exists, but you shouldn't ever need it.  If you
 think you need it, there may be a problem with our process.

 Aaron



Re: Blocking bugs process

2015-07-13 Thread Nate Finch
Can you put this in the wiki?

On Thu, Jul 9, 2015 at 6:33 PM Martin Packman martin.pack...@canonical.com
wrote:

 The QA team have been trying to hammer out a clearer process over
 blocking bugs, and have put together the document below for
 discussion. We'll be handling bugs as described here unless anyone has
 serious objections.

 Thanks!

 Martin


 == Definition of blocking bugs ==
 Master and all release branches should always be in a releasable
 state. If a bug must be fixed for the next minor release, it is
 considered a ‘blocker’ and will prevent all landing on that branch.

 We block for two reasons:
 * To prevent problems from becoming compounded by follow-on changes.
 * As a stop-the-line, all-hands-on-deck signal to get more eyes on the
 problem.

 A regression is a bug that is present in a version of juju that is not
 present in older juju versions.
 We are strict about regressions because our goal is to land these
 changes into Ubuntu, which is treating them as though they were
 bugfix-only releases.

 Regressions compared to juju versions going back to 1.18 prevent
 releases. This includes CLI or API incompatibility and other behaviour
 changes. Consistently failing tests will also prevent releases.
 Although ideally all regressions would block, regressions with limited
 impact, such as single test failures, do not initially need to block
 landings. To prevent branches remaining unreleasable for long periods,
 these bugs will be updated to block after a week.

 == Handling blocking bugs and regressions ==

 * File a bug
 * Mark as critical, target against next minor release.
 * Tag blocker unless it is a regression with limited impact.
 * Tag ci if it causes a CI test to fail.
 * If a particular revision introduced the issue, subscribe the author
 to the bug.
 * If trunk is blocked, alert the #juju-dev IRC channel or mailing juju-dev
 list.
 * If the bug was not tagged blocker but is not fixed within a week,
 tag it blocker.

 == Unblocking ==

 * All bugs tagged ci blocker will be marked fix-released when the
 branch has a blessed tip.
 * QA will mark all other blockers fix-released when they determine
 them to be fixed.
 * Exceptions are raised to the release team.



like go test, but faster

2015-06-15 Thread Nate Finch
Russ Cox has an experimental command called gt, which replaces the go test
tool, and caches test output when you run it.  The next time you run gt, if
none of the files in a package have changed and none of the files of
packages it depends on have changed, then gt will just reprint the cached
output.

You can install it with a simple go get rsc.io/gt.  Right now it seems like
it doesn't work with -race or -cover, but for running the full test suite
it's pretty amazing.

-Nate


Re: Reminder: on June 15th, Go 1.4 will stop compiling things from code.google.com.

2015-06-02 Thread Nate Finch
If you git clone the new repo into the old path on disk, that should just
work with godeps etc.
On Jun 2, 2015 8:15 AM, John Meinel j...@arbash-meinel.com wrote:

 What about old releases, etc? It seems like this is potentially an issue
 if we ever want to build an old release (for reference purposes,
 reproducing a bug, etc).
 Do we at least have a reasonable workaround (grab the code from X move it
 to Y, set the version using godeps), etc?

 John
 =:-


 On Tue, Jun 2, 2015 at 3:57 PM, Katherine Cox-Buday 
 katherine.cox-bu...@canonical.com wrote:

 I think there was an effort to remove this from our code, but I thought
 I'd send out a reminder to check and for personal projects. May the source
 be with you...

 -
 Katherine





Re: Writing workers

2015-06-01 Thread Nate Finch
Totally belongs on the wiki.

On Mon, Jun 1, 2015 at 8:45 AM, John Meinel j...@arbash-meinel.com wrote:

 This is one of those things that should probably end up on the Wiki.
 Thanks for writing it up.

 John
 =:-


 On Thu, May 28, 2015 at 5:34 PM, William Reade 
 william.re...@canonical.com wrote:

 Hi all

 I've noticed that there's a lot of confusion over how to write a useful
 worker. Here follow some guidelines that you should be *very* certain of
 yourself before breaking (and probably talk to me about anyway). If there's
 any uncertainty about these, I'm more than happy to expand.

   * If you really just want to run a dumb function on its own goroutine,
 use worker.NewSimpleWorker.

   * If you just want to do something every period, use
 worker.NewPeriodicWorker.

   * If you want to react to watcher events, you should probably use
 worker.NewNotifyWorker or worker.NewStringsWorker.

   * If your worker has any methods outside the Worker interface, DO NOT
 use any of the above callback-style workers. Those methods, which need to
 communicate with the main goroutine, *need* to know that goroutine's state,
 so that they don't just hang forever.

   * To restate the previous point: basically *never* do a naked channel
 send/receive. If you're building a structure that makes you think you need
 them, you're most likely building the wrong structure.

   * If you're writing a custom worker, and not using a tomb.Tomb, you are
 almost certainly doing it wrong. Read the blog post [0] or, hell, just read
 the code [1] -- it's less than 200 lines and it's about 50% comments.

   * If you're letting tomb.ErrDying leak out of your workers to any
 clients, you are definitely doing it wrong -- you risk stopping another
 worker with that same error, which will quite rightly panic (because that
 tomb is *not* yet dying).

   * If it's possible for your worker to call .tomb.Done() more than once,
 or less than once, you are *definitely* doing it very very wrong indeed.

   * If you're using .tomb.Dead(), you are very probably doing it wrong --
 the only reason (that I'm aware of) to select on that .Dead() rather than
 on .Dying() is to leak inappropriate information to your clients. They
 don't care if you're dying or dead; they care only that the component is no
 longer functioning reliably and cannot fulfil their requests. Full stop.
 Whatever started the component needs to know why it failed, but that parent
 is usually not the same entity as the client that's calling methods.

   * If you're using worker/singular, you are quite likely to be doing it
 wrong, because you've written a worker that breaks when distributed. Things
 like provisioner and firewaller only work that way because we weren't smart
 enough to write them better; but you should generally be writing workers
 that collaborate correctly with themselves, and eschewing the temptation to
 depend on the funky layer-breaking of singular.

   * If you're passing a *state.State into your worker, you are almost
 certainly doing it wrong. The layers go worker -> apiserver -> state, and any
 attempt to skip past the apiserver layer should be viewed with *extreme*
 suspicion.

   * Don't try to make a worker into a singleton (this isn't particularly
 related to workers, really, singleton is enough of an antipattern on its
 own [2] [3] [4]). Singletons are basically the same as global variables,
 except even worse, and if you try to make them responsible for goroutines
 they become more horrible still.

 Did I miss anything major? Probably. If so, please remind me.

 Cheers
 William


 [0] http://blog.labix.org/2011/10/09/death-of-goroutines-under-control
 [1] launchpad.net/tomb (apparently... we really ought to be using v2,
 though)
 [2] https://sites.google.com/site/steveyegge2/singleton-considered-stupid

 [3]
 http://jalf.dk/blog/2010/03/singletons-solving-problems-you-didnt-know-you-never-had-since-1995/
 [4]
 http://programmers.stackexchange.com/questions/40373/so-singletons-are-bad-then-what/





Docs in-repo vs. github wiki

2015-05-22 Thread Nate Finch
I just created a wiki page on the juju github repo to hold the information
I typed up about running and writing CI tests, from my experience of the
last week or so: https://github.com/juju/juju/wiki/ci-tests

It occurs to me that much of the documentation that we have in files inside
the repo might be more appropriate in the wiki, such as basically
everything here: https://github.com/juju/juju/tree/master/doc/contributions

The reason that I like the documentation to be in the wiki rather than in
files in the repo is that it lowers the barrier of entry to adding and
updating the documentation... plus it makes the files a lot more casually
browseable.

Certainly, if we have documentation that is likely to change per-branch,
then living in the repo makes sense... but I think that for things that are
likely to be long-lived, like contributing docs, development how-to docs,
etc, then the wiki has many benefits.

Thoughts?

-Nate


Re: Resolving the same windows test failures again

2015-05-04 Thread Nate Finch
Sorry for the lack of tests for that part of the code, I guess I added them
for one and not the other.   Note that I had actually been leaving that PR
unmerged because I knew master was blocked.  Sorry it got pushed in anyway,
and caused problems.

On Mon, May 4, 2015 at 7:20 PM, Martin Packman martin.pack...@canonical.com
 wrote:

 There was some confusion about the regression to the windows test
 failures on trunk.

 https://bugs.launchpad.net/juju-core/+bug/1450919

 Partly my fault, Curtis initially looked at the 1.24 branch and I
 looked at trunk, and each branch has a different issue. Here's what
 I've just done to diagnose.


 So, trunk first, I noticed the failures looked exactly the same as
 before CI fiddled with the environment variable casing to work around
 a juju bug:

 https://launchpad.net/bugs/1446871

 The fix for this bug landed on master, therefore it's easy to assume
 that change actually broke the casing behaviour in such a way as to
 invalidate the CI hack, rather than fixing the underlying issue.

 Looking at the code:

 https://github.com/juju/juju/pull/2124/files

 Two similar functions now exist for merging case, mergeEnvWin which
 works on map[string]string and mergeWindowsEnvironment which works on
 []string. Only mergeEnvWin has tests. Guess, mergeWindowsEnvironment
 is bugged. Indeed:

 -   m[strings.ToUpper(varSplit[0])] = varSplit[1]
 +   k := varSplit[0]
 +   if _, ok := uppers[strings.ToUpper(k)]; ok {
 +   uppers[k] = varSplit[1]

 It's not uppercasing the key in the assignment. So, Path is matched to
 PATH, Path is assigned to, but later only PATH is pulled out.


 Now for the 1.24 branch, no hints here. So, lets see what changed.
 Latest working 1.24 without windows test failures:

 http://reports.vapour.ws/releases/2581

 Earliest 1.24 with windows test failures:

 http://reports.vapour.ws/releases/2592

 $ git diff fffe3e4f..95e7619f

 2770 lines... Not helpful. But the error mentions cannot move the charm
 archive, and:

 -gopkg.in/juju/charm.v5 git 6b74a2771545912f8a91a544b0f28405b9938624 2015-04-14T14:33:47Z
 +gopkg.in/juju/charm.v5 git 779394167ac61b02ca73ca17c3012f05a5ba316c 2015-04-30T02:46:55Z

 $ pushd ~/go/src/gopkg.in/juju/charm.v5

 Try to update and branches diverged? Wha?

 $ git diff 6b74a277..77939416

 Er... this includes a revert of my licence header changes, and, the
 answer to the test failures, Gabriel's file closing fix:
 https://github.com/juju/charm/pull/119
 https://github.com/juju/charm/pull/120

 So, just renaming a variable is *not* safe (if you accidentally back
 out other changes when merging).


 Both these regressions happened due to landing 'safe' changes while
 the branch was broken, and were therefore harder to pin down than they
 would otherwise have been.

 Martin


