Re: Unit Tests & Integration Tests

2014-09-12 Thread Mark Ramm-Christensen (Canonical.com)
On Fri, Sep 12, 2014 at 12:25 PM, Gustavo Niemeyer 
wrote:

> On Fri, Sep 12, 2014 at 12:00 PM, Mark Ramm-Christensen
> (Canonical.com)  wrote:
> > I think the two issues ARE related because a bias against mocks, and a
> > failure to separate out functional tests, in a large project leads to a
> > test suite that has lots of large slow tests, and which developers can't
> > easily run many, many, many times a day.
>
> There are test doubles in the code base of juju since pretty much the
> start (dummy provider?). If you have large slow tests, this should be
> fixed, but that's orthogonal to having these or not.
>
> Then, having a bias against test doubles everywhere is a good thing.
> Ideally the implementation itself should be properly factored out so
> that you don't need the doubles in the first place. Michael Foord
> already described this in a better way.
>

Hmm, there seems to be some nuance missing here.  I see the argument, as
originally made, as saying "don't have doubles anywhere unless you
absolutely have to for performance reasons or because a non-double is the
only possible way to do a test."

I disagree with that.

I know there are good uses of doubles in the code, and bad ones.


> If you want to have a rule "Tests are slow, you should X", the best X
> is "think about what you are doing", rather than "use test doubles".
>

Agreed. I did not and would never argue otherwise.

> > By allowing explicit ways to write larger functional tests as well as
> > small (unitish) tests you get to let the two kinds of tests be what they
> > need to be, without trying to have one test suite serve both purposes.
> > And the creation of a "place" for those larger tests was just as much a
> > part of the point of this thread, as Roger's comments on mocking.
>
> If by "functional test" you mean "test that is necessarily slow",
> there should not be _a_ place for them, because you may want those in
> multiple places in the code base, to test local logic that is
> necessarily expensive. Roger covered that by suggesting a flag that is
> run when you want to skip those. This is a common technique in other
> projects, and tends to work well.
>

I agree with tagging.  "A place" wasn't necessarily intended to be
prescriptive.  My point, which I feel has already been made well enough, is
that there needs to be a way to separate out long-running tests.

--Mark Ramm
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: consensus on using separate *_test packages

2014-09-12 Thread roger peppe
+1
On 12 Sep 2014 21:18, "Gustavo Niemeyer"  wrote:

> On Fri, Sep 12, 2014 at 3:16 PM, Nate Finch 
> wrote:
> > In the thread Eric pointed to, Brad Fitzpatrick (one of the core Go
> > developers) says they prefer to keep tests in the same package unless
> > forced to have them in a different package to avoid circular
> > dependencies.  I like that.
>
> Brad is a great guy, but defending a position because someone says so
> is not good reasoning, no matter who that is. I've provided a
> different point of view on that same thread, with reasoning, and had
> no counterarguments.
>
> > I have always thought that export_test was an anti-pattern that should
> > only be used as a last resort.  The main problem I have with export_test
> > is that it makes your tests lie.  Your tests call foo.Create, but package
> > foo doesn't actually export Create, it has a non-exported create method,
> > that happens to get exported in the export_test file.  This makes your
> > tests more confusing than if you just called "create" from an internal
> > test.  That's even aside from the point of the busywork it creates from
> > having to write the file in the first place.
>
> On the counter side, it greatly encourages you to keep your tests
> isolated from internal details, and forces you to make it very
> explicit when you're using non-public interfaces in your test, via a
> well known convention. Both are great real benefits, that easily
> outweigh the theoretical "lie" described.
>
> I'm not saying you should never use internal tests, though, but rather
> agreeing with people that posted before you on this thread.
>
> > One argument for export_test is that it gives you a canonical place to go
> > look for all the internal stuff that is getting used in tests... but I
> > don't actually think that's very valuable.  I'm not actually sure why it
> > would be important to see what internal functions are getting exported
> > for use during tests.
>
> The second part of that same paragraph answers the question with a
> counter example:
>
> >  And in theory, if you're writing extensive unit tests, almost all
> > the internal methods would get exported there... so it becomes just a
> > huge boilerplate file for no good reason.  If you really want to see what
> > functions are getting exercised during tests, use code coverage, it's
> > built into the go tool.
>
> This clearly describes one of the important reasons for the pattern.
> If every internal function is imported in tests, and assuming good
> code-writing practices for when to use a new function, it means the
> test is extremely tightly bound to the implementation.
>
> > I agree with Gustavo's point that tests can be good examples of how to
> > use a package.  And, in fact, Go supports example functions that are run
> > like tests, but also get displayed in generated docs.  These example
> > tests can exist in the package_test package to make them very accurate
> > representations of how to use the package.
>
> That was just one side I mentioned, in a long list of reasons, and in
> practice examples are pretty much always written well after the code
> is ready. How many public functions do you have in the juju code base?
> How many of them are covered by those examples you mention?  How many
> of them are covered in tests?
>
> > Most tests that are written to test the functionality of a package are
> > not actually good examples of how to use the package.  Most tests are
> > just isolating one part of the logic and testing that.  No one using the
> > package for its intended purpose would ever do that.  This is doubly true
> > of unit tests, where you may just be testing the input and output of a
> > single internal function.
>
> That's very far from being true. My own tests exercise the logic I'm
> writing, with the API I designed and documented, and they reflect how
> people use that code. I commonly even copy & paste snippets out of my
> own tests into mailing lists as examples of API usage. Perhaps you
> write some sort of test that I'm not used to.
>
> > I think our current trend of doing most tests in an external package has
> > significantly contributed to the poor quality of our tests.  Because
> > we're running from outside the package, we generally only have access to the
> > exported API of the package, so we try to "make it work" by mocking out
> > large portions of the code in order to be able to call the external
> > function, rather than writing real unit tests that just test one small
> > portion of the internal functionality of the package.
>
> This insight fails to account for the fact that in practice "exported
> API" is precisely about the API that was made public in one particular
> isolated abstraction. Several of these "public APIs" are in fact very
> internal details to the implementation of juju, but that make sense in
> isolation. When you're testing these via their public API, it means
> you're actually exercising and fo

Re: consensus on using separate *_test packages

2014-09-12 Thread Gustavo Niemeyer
On Fri, Sep 12, 2014 at 3:16 PM, Nate Finch  wrote:
> In the thread Eric pointed to, Brad Fitzpatrick (one of the core Go
> developers) says they prefer to keep tests in the same package unless forced
> to have them in a different package to avoid circular dependencies.  I like
> that.

Brad is a great guy, but defending a position because someone says so
is not good reasoning, no matter who that is. I've provided a
different point of view on that same thread, with reasoning, and had
no counterarguments.

> I have always thought that export_test was an anti-pattern that should only
> be used as a last resort.  The main problem I have with export_test is that
> it makes your tests lie.  Your tests call foo.Create, but package foo
> doesn't actually export Create, it has a non-exported create method, that
> happens to get exported in the export_test file.  This makes your tests more
> confusing than if you just called "create" from an internal test.  That's
> even aside from the point of the busywork it creates from having to write
> the file in the first place.

On the counter side, it greatly encourages you to keep your tests
isolated from internal details, and forces you to make it very
explicit when you're using non-public interfaces in your test, via a
well known convention. Both are great real benefits, that easily
outweigh the theoretical "lie" described.

I'm not saying you should never use internal tests, though, but rather
agreeing with people that posted before you on this thread.

> One argument for export_test is that it gives you a canonical place to go
> look for all the internal stuff that is getting used in tests... but I don't
> actually think that's very valuable.  I'm not actually sure why it would be
> important to see what internal functions are getting exported for use during
> tests.

The second part of that same paragraph answers the question with a
counter example:

>  And in theory, if you're writing extensive unit tests, almost all
> the internal methods would get exported there... so it becomes just a huge
> boilerplate file for no good reason.  If you really want to see what
> functions are getting exercised during tests, use code coverage, it's built
> into the go tool.

This clearly describes one of the important reasons for the pattern.
If every internal function is imported in tests, and assuming good
code-writing practices for when to use a new function, it means the
test is extremely tightly bound to the implementation.

> I agree with Gustavo's point that tests can be good examples of how to use a
> package.  And, in fact, Go supports example functions that are run like
> tests, but also get displayed in generated docs.  These example tests can
> exist in the package_test package to make them very accurate representations
> of how to use the package.

That was just one side I mentioned, in a long list of reasons, and in
practice examples are pretty much always written well after the code
is ready. How many public functions do you have in the juju code base?
 How many of them are covered by those examples you mention?  How many
of them are covered in tests?

> Most tests that are written to test the functionality of a package are not
> actually good examples of how to use the package.  Most tests are just
> isolating one part of the logic and testing that.  No one using the package
> for its intended purpose would ever do that.  This is doubly true of unit
> tests, where you may just be testing the input and output of a single
> internal function.

That's very far from being true. My own tests exercise the logic I'm
writing, with the API I designed and documented, and they reflect how
people use that code. I commonly even copy & paste snippets out of my
own tests into mailing lists as examples of API usage. Perhaps you
write some sort of test that I'm not used to.

> I think our current trend of doing most tests in an external package has
> significantly contributed to the poor quality of our tests.  Because we're
> running from outside the package, we generally only have access to the
> exported API of the package, so we try to "make it work" by mocking out
> large portions of the code in order to be able to call the external
> function, rather than writing real unit tests that just test one small
> portion of the internal functionality of the package.

This insight fails to account for the fact that in practice "exported
API" is precisely about the API that was made public in one particular
isolated abstraction. Several of these "public APIs" are in fact very
internal details to the implementation of juju, but that make sense in
isolation. When you're testing these via their public API, it means
you're actually exercising and focusing on the promises that were made
to the outside implementation of juju itself, which are the most
valuable guarantees to encode into test form. Developers should be
free to refactor the implementation as necessary to improve clarity,
performa

Re: repos in reviewboard

2014-09-12 Thread Nate Finch
My opinion is: reviewboard all the things.  But I'm not sure we've
officially talked about that.

On Fri, Sep 12, 2014 at 2:43 PM, Eric Snow  wrote:

> Which of the repositories listed at https://github.com/juju should be
> set up in reviewboard?  I'm pretty sure on most of them, but a more
> authoritative list would help me out.  In the meantime I'm adding the
> ones I'm sure on.
>
> -eric
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Unit Tests & Integration Tests

2014-09-12 Thread Nate Finch
I like Gustavo's division - slow tests and fast tests, not unit tests and
integration tests.  Certainly, integration tests are often also slow tests,
but that's not the division that really matters.

*I want* go test github.com/juju/juju/... *to finish in 5 seconds or less.
 I want the landing bot to reject commits that cause this to no longer be
true.*

This is totally doable even on a codebase the size of juju's.  Most tests
that don't bring up a server or start mongo finish in milliseconds.

There are many strategies we can use to deal with slower tests.  One of those
may be "don't run slow tests unless you ask for them".  Another is
refactoring code and tests so they don't have to bring up a server/mongo.
 Both are good and valid.

This would make developers more productive.  You can run the fast tests
trivially whenever you make a change.  When you're ready to commit, run the
long tests to pick up anything the short tests don't cover.

Right now, I cringe before starting to run the tests because they take so
long.

I don't personally care if it's a test flag or an environment variable,
hell, why not both? It's trivial either way.  Let's just do it.
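
For illustration, a minimal sketch of the flag approach using the standard
library's -short flag (package and test names are hypothetical, not from the
juju tree):

    package deployer_test

    import (
        "testing"
        "time"
    )

    // TestDeployEndToEnd stands in for a slow, full-stack test.
    // "go test -short ./..." skips it; a plain "go test" still runs it.
    func TestDeployEndToEnd(t *testing.T) {
        if testing.Short() {
            t.Skip("skipping slow end-to-end test in -short mode")
        }
        // ... bring up the state server, mongo, and so on.
        time.Sleep(2 * time.Second) // stand-in for the expensive setup
    }

An environment variable check would work the same way; the only difference is
how the developer asks for the fast run.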

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


repos in reviewboard

2014-09-12 Thread Eric Snow
Which of the repositories listed at https://github.com/juju should be
set up in reviewboard?  I'm pretty sure on most of them, but a more
authoritative list would help me out.  In the meantime I'm adding the
ones I'm sure on.

-eric

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: consensus on using separate *_test packages

2014-09-12 Thread Nate Finch
In the thread Eric pointed to, Brad Fitzpatrick (one of the core Go
developers) says they prefer to keep tests in the same package unless
forced to have them in a different package to avoid circular dependencies.
 I like that.

I have always thought that export_test was an anti-pattern that should only
be used as a last resort.  The main problem I have with export_test is that
it makes your tests lie.  Your tests call foo.Create, but package foo
doesn't actually export Create, it has a non-exported create method, that
happens to get exported in the export_test file.  This makes your tests
more confusing than if you just called "create" from an internal test.
 That's even aside from the point of the busywork it creates from having to
write the file in the first place.
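
For anyone who hasn't run into it, a minimal sketch of the pattern being
discussed (package and names invented for the example):

    // foo.go
    package foo

    type Thing struct{ Name string }

    // create is unexported; normally only package foo can call it.
    func create(name string) *Thing { return &Thing{Name: name} }

    // export_test.go -- compiled only during "go test"; it re-exports the
    // unexported create so the external test package can reach it.
    package foo

    var Create = create

    // foo_test.go -- the external test package.
    package foo_test

    import (
        "testing"

        "example.com/foo" // hypothetical import path
    )

    func TestCreate(t *testing.T) {
        // foo.Create here is really the unexported create from foo.go.
        if foo.Create("x").Name != "x" {
            t.Fatal("unexpected name")
        }
    }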

One argument for export_test is that it gives you a canonical place to go
look for all the internal stuff that is getting used in tests... but I
don't actually think that's very valuable.  I'm not actually sure why it
would be important to see what internal functions are getting exported for
use during tests.  And in theory, if you're writing extensive unit tests,
almost all the internal methods would get exported there... so it becomes
just a huge boilerplate file for no good reason.  If you really want to see
what functions are getting exercised during tests, use code coverage, it's
built into the go tool.

I agree with Gustavo's point that tests can be good examples of how to use
a package.  And, in fact, Go supports example functions that are
run like tests, but also get displayed in generated docs.  These example
tests can exist in the package_test package to make them very accurate
representations of how to use the package.
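
A sketch of what such an example function looks like, again with an invented
package; go test runs it and checks what it prints against the Output
comment, and godoc displays it alongside the documentation:

    // example_test.go, alongside the package's other tests.
    package foo_test

    import (
        "fmt"

        "example.com/foo" // hypothetical package exporting Greet
    )

    func ExampleGreet() {
        fmt.Println(foo.Greet("juju"))
        // Output: hello, juju
    }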

Most tests that are written to test the functionality of a package are not
actually good examples of how to use the package.  Most tests are just
isolating one part of the logic and testing that.  No one using the package
for its intended purpose would ever do that.  This is doubly true of unit
tests, where you may just be testing the input and output of a single
internal function.

I think our current trend of doing most tests in an external package has
significantly contributed to the poor quality of our tests.  Because we're
running from outside the package, we generally only have access to the
exported API of the package, so we try to "make it work" by mocking out
large portions of the code in order to be able to call the external
function, rather than writing real unit tests that just test one small
portion of the internal functionality of the package.

-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: consensus on using separate *_test packages

2014-09-12 Thread Katherine Cox-Buday
Exactly what Roger said.

I think this is nicely paired with the ongoing discussion of tests. If we
want the code-base to have a nice mix of unit and functional/integration
tests, then I feel that necessitates a mixture of "package foo" and "import
package foo_test". If you're not trying to stick to the public API (unit
tests), then not having access to the non-public API without manually
exporting things just leads to a lot of code-churn and tests which are more
difficult to understand.

-
Katherine

On Fri, Sep 12, 2014 at 11:49 AM, roger peppe  wrote:

> On 12 September 2014 16:55, Eric Snow  wrote:
> > In Go we put tests in *_test.go files that are only built when
> > testing.  There are two conventions for what package to declare in
> > those files (relative to the corresponding non-test package):
> > "" and "_test".  In our code we have a mix of both.
> > This can be confusing.  We should come to a consensus on which to use
> > and stick with it, both for new code and when given the opportunity
> > while refactoring.  (As usual, a whole-sale refactor wouldn't be a
> > good idea).
> >
> > This came up on IRC a couple months back. [1]  At the time I referred
> > to a post from Gustavo on golang-nuts where he stated a preference for
> > using a separate package for *_test.go files. [2]  Also note that
> > there are some cases where test files must be in a separate package
> > (e.g. to break circular imports).  So unless we always use separate
> > test packages, we will almost certainly end up with a mix (which is
> > exactly the issue at hand!).
> >
> > From my perspective, I agree with Gustavo that using a separate _test
> > package is a good idea.  So I would favor us being consistent in doing
> > that. :)
>
> I think there's a place for both approaches. All else being equal, I
> prefer to have
> tests in an external package, testing using the public API and using as few
> internal details as possible.
>
> But sometimes it's good to write tests for functions that are intimately
> connected to the internals of a package, and in that case I think it's
> better to write internal tests (with the _test.go file in the same package)
> rather than jumping through hoops to try to make them external.
>
>   cheers,
> rog.
>
> >
> > -eric
> >
> > [1] http://irclogs.ubuntu.com/2014/07/22/%23juju-dev.html#t15:36
> > [2]
> https://groups.google.com/forum/#!msg/Golang-nuts/dkk0X1tIs6k/nO3CKFqbIxYJ
> >
> > --
> > Juju-dev mailing list
> > Juju-dev@lists.ubuntu.com
> > Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: consensus on using separate *_test packages

2014-09-12 Thread roger peppe
On 12 September 2014 16:55, Eric Snow  wrote:
> In Go we put tests in *_test.go files that are only built when
> testing.  There are two conventions for what package to declare in
> those files (relative to the corresponding non-test package):
> "" and "_test".  In our code we have a mix of both.
> This can be confusing.  We should come to a consensus on which to use
> and stick with it, both for new code and when given the opportunity
> while refactoring.  (As usual, a whole-sale refactor wouldn't be a
> good idea).
>
> This came up on IRC a couple months back. [1]  At the time I referred
> to a post from Gustavo on golang-nuts where he stated a preference for
> using a separate package for *_test.go files. [2]  Also note that
> there are some cases where test files must be in a separate package
> (e.g. to break circular imports).  So unless we always use separate
> test packages, we will almost certainly end up with a mix (which is
> exactly the issue at hand!).
>
> From my perspective, I agree with Gustavo that using a separate _test
> package is a good idea.  So I would favor us being consistent in doing
> that. :)

I think there's a place for both approaches. All else being equal, I
prefer to have
tests in an external package, testing using the public API and using as few
internal details as possible.

But sometimes it's good to write tests for functions that are intimately
connected to the internals of a package, and in that case I think it's
better to write internal tests (with the _test.go file in the same package)
rather than jumping through hoops to try to make them external.

  cheers,
rog.

>
> -eric
>
> [1] http://irclogs.ubuntu.com/2014/07/22/%23juju-dev.html#t15:36
> [2] https://groups.google.com/forum/#!msg/Golang-nuts/dkk0X1tIs6k/nO3CKFqbIxYJ
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at: 
> https://lists.ubuntu.com/mailman/listinfo/juju-dev

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Unit Tests & Integration Tests

2014-09-12 Thread Gustavo Niemeyer
On Fri, Sep 12, 2014 at 12:00 PM, Mark Ramm-Christensen
(Canonical.com)  wrote:
> I think the two issues ARE related because a bias against mocks, and a
> failure to separate out functional tests, in a large project leads to a test
> suite that has lots of large slow tests, and which developers can't easily
> run many, many, many times a day.

There are test doubles in the code base of juju since pretty much the
start (dummy provider?). If you have large slow tests, this should be
fixed, but that's orthogonal to having these or not.

Then, having a bias against test doubles everywhere is a good thing.
Ideally the implementation itself should be properly factored out so
that you don't need the doubles in the first place. Michael Foord
already described this in a better way.

If you want to have a rule "Tests are slow, you should X", the best X
is "think about what you are doing", rather than "use test doubles".

> By allowing explicit ways to write larger functional tests as well as small
> (unitish) tests you get to let the two kinds of tests be what they need to
> be, without trying to have one test suite serve both purposes.  And the
> creation of a "place" for those larger tests was just as much a part of the
> point of this thread, as Roger's comments on mocking.

If by "functional test" you mean "test that is necessarily slow",
there should not be _a_ place for them, because you may want those in
multiple places in the code base, to test local logic that is
necessarily expensive. Roger covered that by suggesting a flag that is
run when you want to skip those. This is a common technique in other
projects, and tends to work well.


gustavo @ http://niemeyer.net

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


consensus on using separate *_test packages

2014-09-12 Thread Eric Snow
In Go we put tests in *_test.go files that are only built when
testing.  There are two conventions for what package to declare in
those files (relative to the corresponding non-test package):
"" and "_test".  In our code we have a mix of both.
This can be confusing.  We should come to a consensus on which to use
and stick with it, both for new code and when given the opportunity
while refactoring.  (As usual, a whole-sale refactor wouldn't be a
good idea).

This came up on IRC a couple months back. [1]  At the time I referred
to a post from Gustavo on golang-nuts where he stated a preference for
using a separate package for *_test.go files. [2]  Also note that
there are some cases where test files must be in a separate package
(e.g. to break circular imports).  So unless we always use separate
test packages, we will almost certainly end up with a mix (which is
exactly the issue at hand!).
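
Concretely, the two conventions look like this for a hypothetical package; Go
allows both test files to sit in the same directory:

    // widget.go
    package widget

    import "strings"

    type Client struct{ addr string }

    // NewClient is part of the exported API.
    func NewClient(addr string) *Client { return &Client{addr: normalize(addr)} }

    // normalize is an unexported helper.
    func normalize(addr string) string { return strings.TrimSuffix(addr, "/") }

    // widget_internal_test.go -- declares "package widget": a white-box test
    // that can call the unexported normalize directly.
    package widget

    import "testing"

    func TestNormalize(t *testing.T) {
        if normalize("http://host/") != "http://host" {
            t.Fatal("trailing slash not stripped")
        }
    }

    // widget_test.go -- declares "package widget_test": a black-box test
    // that imports the package and sees only its exported API.
    package widget_test

    import (
        "testing"

        "example.com/widget" // hypothetical import path
    )

    func TestNewClient(t *testing.T) {
        if widget.NewClient("http://host/") == nil {
            t.Fatal("expected a client")
        }
    }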

From my perspective, I agree with Gustavo that using a separate _test
package is a good idea.  So I would favor us being consistent in doing
that. :)

-eric

[1] http://irclogs.ubuntu.com/2014/07/22/%23juju-dev.html#t15:36
[2] https://groups.google.com/forum/#!msg/Golang-nuts/dkk0X1tIs6k/nO3CKFqbIxYJ

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Unit Tests & Integration Tests

2014-09-12 Thread Mark Ramm-Christensen (Canonical.com)
On Thu, Sep 11, 2014 at 3:41 PM, Gustavo Niemeyer 
wrote:

> Performance is the second reason Roger described, and I disagree that
> mocking code is cleaner.. these are two orthogonal properties, and
> it's actually pretty easy to have mocked code being extremely
> confusing and tightly bound to the implementation. It doesn't _have_
> to be like that, but this is not a reason to use it.
>

It is easy to do that, though often that is a sign of not having clean
separation of concerns.  Messy mocking can (though does not always)
reflect messiness in the code itself.  Messy, poorly isolated code is bad,
and messy mocks often mean you have not one but two messes to clean up.

> > Like any tools, developers can over-use, or mis-use them.  But, if you
> > don't use them at all,
>
> That's not what Roger suggested either. A good conversation requires
> properly reflecting the position held by participants.


You are right, I wasn't precise about the details of his suggestion to not
use them, but he did suggest not using mocks unless there is *no other
choice.* And it is that rule against them that I was trying to make a case
against.

With that said, I definitely agree with the experience that both of you are
trying to highlight about the dangers of over-reliance on mocks.  I think
everybody who has written a significant amount of test code knows that
passing a test against a mock is not the same thing as actually working
against the mocked out library/function/interface.


> > you often end up with what I call "the binary test suite" in which one
> > coding error somewhere creates massive test failures.
>
> A coding error that creates massive test failures is not a problem, in
> my experience using both heavily mocking and heavily non-mocking code
> bases.


It's not a problem for new code, but it makes refactorings and cleanup
harder because you change a method, and rather than the test suite telling
you which things depend on that and therefore need to be updated, and how
far you need to go, you get 100% test failures and you're not quite sure
how many changes are needed, or where they are needed -- until suddenly you
fix the last thing and *everything* passes again.

> > My belief is that you need both small, fast, targeted tests (call them
> > unit tests) and large, realistic, full-stack tests (call them integration
> > tests) and that we should have infrastructure support for both.
>
> Yep, but that's besides the point being made. You can do unit tests
> which are small, fast, and targeted, both with or without mocking, and
> without mocking they can be realistic, which is a good thing. If you
> haven't had a chance to see tests falsely passing with mocking, that's
> a good thing too.. you haven't abused mocking too much yet.
>

Sorry, I was transitioning back to the main point of the thread, raised by
Matty at the beginning.  And I was agreeing that there are two very
different *kinds of tests* and we should have a place for "large" tests to
go.

I think the two issues ARE related because a bias against mocks, and a
failure to separate out functional tests, in a large project leads to a
test suite that has lots of large slow tests, and which developers can't
easily run many, many, many times a day.

By allowing explicit ways to write larger functional tests as well as small
(unitish) tests you get to let the two kinds of tests be what they need to
be, without trying to have one test suite serve both purposes.  And the
creation of a "place" for those larger tests was just as much a part of the
point of this thread, as Roger's comments on mocking.

--Mark Ramm

PS, if you want to fit this into the Martin Fowler terminology I'm just
using mocks as a shorthand for all of the kinds of doubles he describes.
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Unit Tests & Integration Tests

2014-09-12 Thread Gustavo Niemeyer
On Fri, Sep 12, 2014 at 7:42 AM, Michael Foord
 wrote:
> I agree. I tend to see the need for stubs (I dislike Martin Fowler's
> terminology and prefer the term mock - as it really is by common parlance
> just a mock object) as a failure of the code. Just sometimes a necessary
> failure.

I don't particularly mind either way. It's a bit like a game.. we can
make our own rules, as long as we're playing by the same rules.

> Code, as you say, should be written as much as possible in decoupled units
> that can be tested in isolation. This is why test first is helpful, because
> it makes you think about "how am I going to test this unit" before your
> write it - and you're less likely to code in hard to test dependencies.

That's a good way to put it.

> Where dependencies are impossible to avoid, typically at the boundaries of
> layers, stubs can be useful to isolate units - but the need for them often
> indicates excessive coupling.

Indeed.

> Being able to test business logic without having to start a state server and
> mongo will make our tests so much faster and more reliable. The more we
> can do this *without* stubs the better, but I'm sure that's not entirely
> possible.

The tests will definitely be much faster and more reliable. The only
point to consider is whether the implementation will also be faster
and more reliable. I'm sure you all can do the latter.. just something
to keep in mind.


gustavo @ http://niemeyer.net

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Versioning juju run (juju-run)?

2014-09-12 Thread Wayne Witzel
Hi all,

During the review of my PR https://github.com/juju/juju/pull/705 to add
--relation to juju run, it was suggested that the cmd/juju be versioned.

I spoke to a couple folks and was directed to
https://github.com/juju/juju/pull/746 as an example of versioning an API.
This makes sense, but I don't see how it applies to the cmd/juju parts of
the code?

I'm just failing to understand how to version the command-line options.
Also the juju run client doesn't seem to have gone through the Facade
refactoring that the other APIs have gone through.

I'd appreciate some hand holding here as I'm failing to grok what needs to
be done.

Thanks,

-- 
Wayne Witzel III
wayne.wit...@canonical.com
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Unit Tests & Integration Tests

2014-09-12 Thread Katherine Cox-Buday
I have been trying to digest the following series of talks between Martin
Fowler, Kent Beck, and David Heinemeier Hansson, called "Is TDD Dead?".
The topic is a bit inflammatory, but there's some good stuff here.

Some major points discussed:

   - Is there such a thing as test-induced damage to the architecture?
   - Should you expect your developers to practice TDD (i.e. does that
   workflow work for everyone)
   - Mocks vs. Integration (inside-out vs. outside-in testing)

There's a lot to chew on here. Personally, I agree with what others have
said in this thread: I think like most things in life, you need to find the
middle-way and have both integration and unit tests. To know how healthy
your testing ecosystem is, ask yourself these 3 questions:

   1. Which do I spend more time on: my changes, or writing tests for my
   changes?
   2. Do I need to learn an entirely orthogonal way of interacting with my
   code for testing?
   3. How many defects are getting through?

Enjoying following this thread.

-
Katherine


On Fri, Sep 12, 2014 at 5:42 AM, Michael Foord 
wrote:

>
> On 12/09/14 06:05, Ian Booth wrote:
>
>>
>> On 12/09/14 01:59, roger peppe wrote:
>>
>>> On 11 September 2014 16:29, Matthew Williams
>>>  wrote:
>>>
>>>> Hi Folks,
>>>>
>>>> There seems to be a general push in the direction of having more
>>>> mocking in unit tests. Obviously this is generally a good thing but
>>>> there is still value in having integration tests that test a number of
>>>> packages together. That's the subject of this mail - I'd like to start
>>>> discussing how we want to do this. Some ideas to get the ball rolling:

>>> Personally, I don't believe this is "obviously" a good thing.
>>> The less mocking, the better, in my view, because it gives
>>> better assurance that the code will actually work in practice.
>>>
>>> Mocking also implies that you know exactly what the
>>> code is doing internally - this means that tests written
>>> using mocking are less useful as regression tests, as
>>> they will often need to be changed when the implementation
>>> changes.
>>>
>> Let's assume that the term stub was meant to be used instead of mocking.
>> Well written unit tests do not involve dependencies outside of the code
>> being tested, and to achieve this, stubs are typically used. As others have
>> stated already in this thread, unit tests are meant to be fast. Our Juju
>> "unit" tests are in many cases not unit tests at all - they involve
>> bringing up the whole stack, including mongo in replicaset mode for
>> goodness sake, all to test a single component. This approach is flawed and
>> goes against what would be considered as best practice by most software
>> engineers. I hope we can all agree on that point.
>>
>
> I agree. I tend to see the need for stubs (I dislike Martin Fowler's
> terminology and prefer the term mock - as it really is by common parlance
> just a mock object) as a failure of the code. Just sometimes a necessary
> failure.
>
> Code, as you say, should be written as much as possible in decoupled units
> that can be tested in isolation. This is why test first is helpful, because
> it makes you think about "how am I going to test this unit" before you
> write it - and you're less likely to code in hard to test dependencies.
>
> Where dependencies are impossible to avoid, typically at the boundaries of
> layers, stubs can be useful to isolate units - but the need for them often
> indicates excessive coupling.
>
>
>> To bring up but one of many concrete examples - we have a set of Juju CLI
>> commands which use a Juju client API layer to talk to an API service
>> running on
>> the state server. We "unit" test Juju commands by starting a full state
>> server
>> and ensuring the whole system behaves as expected, end to end. This is
>> expensive, slow, and unnecessary. What we should be doing here is
>> stubbing out
>> the client API layer and validating that:
>> 1. the command passes the correct parameters to the correct API call
>> 2. the command responds the correct way when results are returned
>>
>> Anything more than that is unnecessary and wasteful. Yes, we do need
>> end-end
>> integration tests as well, but these are in addition to, not in place of,
>> unit
>> tests. And integration tests tend to be fewer in number, and run less
>> frequently
>> than, unit tests; the unit tests have already covered all the detailed
>> functionality and edge cases; the integration tests confirm the moving
>> pieces
>> mesh together as expected.
>>
>> As per other recent threads to juju-dev, we have already started to
>> introduce
>> infrastructure to allow us to start unit testing various Juju components
>> the
>> correct way, starting with the commands, the API client layer, and the API
>> server layer. Hopefully we will also get to the point where we can unit
>> test
>> core business logic like adding and placing machines, deploying unit

Re: Unit Tests & Integration Tests

2014-09-12 Thread Michael Foord


On 12/09/14 06:05, Ian Booth wrote:


> On 12/09/14 01:59, roger peppe wrote:
>
>> On 11 September 2014 16:29, Matthew Williams
>>  wrote:
>>
>>> Hi Folks,
>>>
>>> There seems to be a general push in the direction of having more mocking in
>>> unit tests. Obviously this is generally a good thing but there is still
>>> value in having integration tests that test a number of packages together.
>>> That's the subject of this mail - I'd like to start discussing how we want
>>> to do this. Some ideas to get the ball rolling:

>> Personally, I don't believe this is "obviously" a good thing.
>> The less mocking, the better, in my view, because it gives
>> better assurance that the code will actually work in practice.
>>
>> Mocking also implies that you know exactly what the
>> code is doing internally - this means that tests written
>> using mocking are less useful as regression tests, as
>> they will often need to be changed when the implementation
>> changes.


> Let's assume that the term stub was meant to be used instead of mocking. Well
> written unit tests do not involve dependencies outside of the code being tested,
> and to achieve this, stubs are typically used. As others have stated already in
> this thread, unit tests are meant to be fast. Our Juju "unit" tests are in many
> cases not unit tests at all - they involve bringing up the whole stack,
> including mongo in replicaset mode for goodness sake, all to test a single
> component. This approach is flawed and goes against what would be considered as
> best practice by most software engineers. I hope we can all agree on that point.


I agree. I tend to see the need for stubs (I dislike Martin Fowler's 
terminology and prefer the term mock - as it really is by common 
parlance just a mock object) as a failure of the code. Just sometimes a 
necessary failure.


Code, as you say, should be written as much as possible in decoupled 
units that can be tested in isolation. This is why test first is 
helpful, because it makes you think about "how am I going to test this 
unit" before your write it - and you're less likely to code in hard to 
test dependencies.


Where dependencies are impossible to avoid, typically at the boundaries 
of layers, stubs can be useful to isolate units - but the need for them 
often indicates excessive coupling.




> To bring up but one of many concrete examples - we have a set of Juju CLI
> commands which use a Juju client API layer to talk to an API service running on
> the state server. We "unit" test Juju commands by starting a full state server
> and ensuring the whole system behaves as expected, end to end. This is
> expensive, slow, and unnecessary. What we should be doing here is stubbing out
> the client API layer and validating that:
> 1. the command passes the correct parameters to the correct API call
> 2. the command responds the correct way when results are returned
>
> Anything more than that is unnecessary and wasteful. Yes, we do need end-end
> integration tests as well, but these are in addition to, not in place of, unit
> tests. And integration tests tend to be fewer in number, and run less frequently
> than, unit tests; the unit tests have already covered all the detailed
> functionality and edge cases; the integration tests confirm the moving pieces
> mesh together as expected.
>
> As per other recent threads to juju-dev, we have already started to introduce
> infrastructure to allow us to start unit testing various Juju components the
> correct way, starting with the commands, the API client layer, and the API
> server layer. Hopefully we will also get to the point where we can unit test
> core business logic like adding and placing machines, deploying units etc,
> without having to have a state server and mongo. But that's a way off given we
> first need to unpick the persistence logic from our business logic and address
> cross pollination between our architectural layers.



+1

Being able to test business logic without having to start a state server 
and mongo will make our tests so much faster and more reliable. The
more we can do this *without* stubs the better, but I'm sure that's not 
entirely possible.
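
To make that shape concrete, a minimal sketch of the stubbing approach Ian
describes above (all names hypothetical, not the real juju client API, and
condensed into one listing): the command depends on a narrow interface, so
the test substitutes a stub and checks both points without a state server or
mongo.

    // All in one listing for brevity; the test would live in a _test.go file.
    package statuscmd

    import "testing"

    // StatusClient is the narrow slice of the client API the command needs.
    type StatusClient interface {
        Status(service string) (string, error)
    }

    // StatusCommand is a hypothetical CLI command.
    type StatusCommand struct {
        client StatusClient
    }

    func (c *StatusCommand) Run(service string) (string, error) {
        return c.client.Status(service)
    }

    // stubClient records what it was asked and returns a canned answer;
    // no API server or mongo is involved.
    type stubClient struct {
        gotService string
    }

    func (s *stubClient) Status(service string) (string, error) {
        s.gotService = service
        return "started", nil
    }

    func TestStatusCommand(t *testing.T) {
        stub := &stubClient{}
        out, err := (&StatusCommand{client: stub}).Run("wordpress")
        // 1. the command passed the correct parameters to the API call
        if stub.gotService != "wordpress" {
            t.Fatalf("called with %q, want %q", stub.gotService, "wordpress")
        }
        // 2. the command handled the returned result correctly
        if err != nil || out != "started" {
            t.Fatalf("got %q, %v; want %q, nil", out, err, "started")
        }
    }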


All the best,

Michael

--
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev