Forgive me if I completely misunderstood your question. Replies inline.

> One way is to use `with-redefs`. This technique would be used at every
> level along the pyramid (except the bottom level, which doesn't call
> anything else).

This would be similar to mocking, correct? If so, what would be wrong with it?
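For reference, this is roughly what I mean: a minimal sketch with invented
fns (`handler` one level up the pyramid, `fetch-user` at the bottom), not
anything from your code.

  (ns example.handler-test
    (:require [clojure.test :refer [deftest is]]))

  ;; Invented stand-ins: handler sits one level above fetch-user.
  (defn fetch-user [id]
    ;; imagine a real DB call here
    {:id id :name "from-db"})

  (defn handler [id]
    (str "Hello, " (:name (fetch-user id))))

  (deftest handler-greets-user
    ;; with-redefs temporarily rebinds the fetch-user var and restores
    ;; it when the body exits, so the test never touches the (imaginary) DB.
    (with-redefs [fetch-user (fn [_] {:id 1 :name "stub"})]
      (is (= "Hello, stub" (handler 1)))))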


> This feels very fragile. For one thing, if any of the function signatures
> change, any tests that reference that function have to change too, or
> they'll become false positives, no longer representing the real world, just
> the imaginary world that was set up in the test.

You could have preconditions in your fns to catch this. Considering that
this scenario is bad in general, not just in testing, this might be the
thing to do anyway.
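Something like this, as a sketch (charge! is an invented fn):

  (defn charge! [account amount]
    ;; :pre assertions blow up loudly if a caller, or a stale stub
    ;; somewhere upstream, no longer matches this fn's contract
    {:pre [(map? account) (number? amount) (pos? amount)]}
    (update account :balance - amount))

  ;; (charge! {:balance 100} 30)  ;=> {:balance 70}
  ;; (charge! {:balance 100} -5)  ;=> throws AssertionError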


> Another way is to test state changes and return values at every level in
> the pyramid. This represents the real world more, because if any feature
> function changes, all the tests above it will fail. But there are two
> problems with this. First, there's going to be a lot of redundancy in test
> data in the vertical slice of the pyramid for a single feature. I'm on the
> fence about whether that's really a problem. Second, and more importantly,
> each feature function's tests will also have to cover all the behaviors of
> the functions it calls, besides testing its own behaviors.

It's a trade-off. How much testing is enough testing?
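To make the redundancy concrete, a toy sketch (x and y are invented): with
no stubbing, x's tests can only pass by re-exercising y's behavior.

  (ns example.no-stubs-test
    (:require [clojure.test :refer [deftest is]]))

  (defn y [n] (* n 2))      ; the helper
  (defn x [n] (inc (y n)))  ; the feature, delegating to y

  (deftest y-doubles
    (is (= 4 (y 2))))

  (deftest x-increments-a-doubled-value
    ;; no stubbing, so this assertion also depends on y doubling
    ;; correctly: y's behavior is effectively tested twice
    (is (= 5 (x 2))))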


> For example, X calls Y. X has 5 behaviors and Y has 5 behaviors. So Y will
> have 5 tests, but X will have 25. This is because, from a high level, we
> only care about the feature as a whole (a.k.a. X), and Y is only an
> auxiliary function to assist X. We, the stakeholders of feature X, don't
> know or care that Y exists. So Y's tests are irrelevant to us; we just
> want to know that X does its job. So we want to see all of X's and Y's
> behaviors tested, without knowing about Y, to give us confidence that X
> does all 25 things. But we still need to be responsible and test Y,
> especially because X delegates half its work to Y in the name of healthy
> abstraction.

I'd probably say that if you've tested Y and you're happy with those tests,
then you can just mock it out in X's tests.
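Concretely, something like this (x and y are invented toy fns): y keeps its
own suite, and x's tests stub y with with-redefs so they only pin down x's
contribution, keeping the count at 5 + 5 rather than 5 * 5.

  (ns example.with-stubs-test
    (:require [clojure.test :refer [deftest is]]))

  (defn y [n] (* n 2))
  (defn x [n] (inc (y n)))

  ;; y is pinned down by its own tests...
  (deftest y-doubles
    (is (= 4 (y 2))))

  ;; ...so x's tests can stub it and assert only what x adds
  (deftest x-increments-whatever-y-returns
    (with-redefs [y (constantly 10)]
      (is (= 11 (x :ignored)))))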


> A third way is to sprinkle small amounts of Y's behavior into some or all
> of X's tests. This relies heavily on probability, suggesting that X is
> probably using Y because it shares at least one of Y's behaviors. If that's
> true, then we know that X shares all of Y's behaviors. It becomes a sort of
> "linked list" of features, with the sprinkled-in behavior of Y acting like
> a "next" pointer. The downside is that there's a pretty fair chance that I
> didn't really call Y, but just copied/pasted some of its behavior. Sure,
> not immediately after writing the test. But 6 months in the future, when
> I've forgotten why I wrote the test this way, and I see that X is calling Y
> when the test says it just wants a fifth of Y's behavior, I'll think
> it's overkill and rip out the call to Y, replacing it with just whatever
> X's test specifies. Then it just becomes another deadly false positive.

This doesn't strike me as a good idea :)

U
