On Thu, Sep 11, 2014 at 3:41 PM, Gustavo Niemeyer <[email protected]> wrote:
> Performance is the second reason Roger described, and I disagree that
> mocking code is cleaner.. these are two orthogonal properties, and
> it's actually pretty easy to have mocked code being extremely
> confusing and tightly bound to the implementation. It doesn't _have_
> to be like that, but this is not a reason to use it.

It is easy to do that, though often that is a sign of not having clean separations of concerns. Messy mocking can (though does not always) reflect messiness in the code itself. Messy, poorly isolated code is bad, and messy mocks often mean you have not one but two messes to clean up.

>> Like any tools, developers can over-use, or mis-use them. But, if you
>> don't use them at all,
>
> That's not what Roger suggested either. A good conversation requires
> properly reflecting the position held by participants.

You are right, I wasn't precise about the details of his suggestion, but he did suggest not using mocks unless there is *no other choice*, and it is that rule against them that I was trying to make a case against.

With that said, I definitely agree with the experience both of you are trying to highlight about the dangers of over-reliance on mocks. I think everybody who has written a significant amount of test code knows that passing a test against a mock is not the same thing as actually working against the mocked-out library/function/interface.

>> you often end up with what I call "the binary test suite" in which one
>> coding error somewhere creates massive test failures.
>
> A coding error that creates massive test failures is not a problem, in
> my experience using both heavily mocking and heavily non-mocking code
> bases.
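(To make the earlier point concrete -- that a test passing against a mock is not the same as working against the real thing -- here is a rough standalone Go sketch. The CharmStore interface and all names are invented for illustration; this is not actual Juju code.)

```go
package main

import "fmt"

// CharmStore is a hypothetical client interface for illustration only.
type CharmStore interface {
	Latest(name string) (int, error)
}

// fakeStore is a hand-written test double. It always succeeds, so tests
// against it can pass even though the real client can return network
// errors or report unknown charm names.
type fakeStore struct{ revisions map[string]int }

func (f fakeStore) Latest(name string) (int, error) {
	return f.revisions[name], nil // never fails -- unlike the real thing
}

// needsUpgrade is the code under test: it only exercises the happy path
// when paired with fakeStore, whatever the real store would do.
func needsUpgrade(s CharmStore, name string, current int) (bool, error) {
	latest, err := s.Latest(name)
	if err != nil {
		return false, err
	}
	return latest > current, nil
}

func main() {
	s := fakeStore{revisions: map[string]int{"mysql": 5}}
	up, err := needsUpgrade(s, "mysql", 3)
	fmt.Println(up, err)
}
```

The test suite built on fakeStore stays small and fast, which is the good part; the danger is exactly the one discussed above -- nothing here proves the code survives the real store's error behaviour.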
A coding error that creates massive failures is not a problem for new code, but it makes refactorings and cleanup harder: you change a method, and rather than the test suite telling you which things depend on it and therefore need to be updated, and how far you need to go, you get 100% test failures and you're not quite sure how many changes are needed, or where they are needed -- until suddenly you fix the last thing and *everything* passes again.

>> My belief is that you need both small, fast, targeted tests (call them
>> unit tests) and large, realistic, full-stack tests (call them
>> integration tests) and that we should have infrastructure support for
>> both.
>
> Yep, but that's besides the point being made. You can do unit tests
> which are small, fast, and targeted, both with or without mocking, and
> without mocking they can be realistic, which is a good thing. If you
> haven't had a chance to see tests falsely passing with mocking, that's
> a good thing too.. you haven't abused mocking too much yet.

Sorry, I was transitioning back to the main point of the thread, raised by Matty at the beginning, and agreeing that there are two very different *kinds of tests* and that we should have a place for "large" tests to go.

I think the two issues ARE related, because a bias against mocks, combined with a failure to separate out functional tests, leads a large project to a test suite full of large, slow tests, which developers can't easily run many, many times a day.

By allowing explicit ways to write larger functional tests as well as small (unitish) tests, you let the two kinds of tests be what they need to be, without trying to have one test suite serve both purposes. And the creation of a "place" for those larger tests was just as much a part of the point of this thread as Roger's comments on mocking.

--Mark Ramm

PS: if you want to fit this into the Martin Fowler terminology, I'm just using "mocks" as a shorthand for all of the kinds of test doubles he describes.
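PPS: for what it's worth, Go's testing package already gives us one lightweight "place" for the split: guard the large tests with testing.Short() so that `go test -short` runs only the fast ones. Here is a rough standalone sketch of that idea (the function names and messages are made up for illustration; in a real suite this lives in *_test.go files):

```go
package main

import (
	"fmt"
	"time"
)

// In real test files this split is written as
//     if testing.Short() { t.Skip("skipping functional test in -short mode") }
// so developers can run the fast suite many times a day and leave the
// full-stack suite to CI. This program just simulates that behaviour.

func runUnitTests() string { return "unit tests: pass (fast)" }

func runFunctionalTests() string {
	time.Sleep(5 * time.Millisecond) // stands in for a slow full-stack run
	return "functional tests: pass (slow)"
}

// runSuite mirrors `go test` (short == false) vs `go test -short`.
func runSuite(short bool) []string {
	out := []string{runUnitTests()}
	if short {
		return append(out, "functional tests: skipped in -short mode")
	}
	return append(out, runFunctionalTests())
}

func main() {
	for _, line := range runSuite(true) {
		fmt.Println(line)
	}
}
```

The point is only that the two kinds of tests get to coexist without one suite trying to serve both purposes.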
-- Juju-dev mailing list [email protected] Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
