Thanks for the thoughtful reply, Stephen -- perhaps I owe a bit of an apology if my last response came off as a bit harsh; your suggestions and input are certainly appreciated. I always find posing these kinds of questions over email difficult. From the questioner's perspective, it's hard to present the problematic piece of code with enough detail to adequately define the problem and domain, but not so much detail that it becomes confusing to look through quickly, or that it raises concerns about releasing non-public code. From the responder's perspective, it's hard to tell whether the person asking already knows the fundamental principles (i.e., whether the response should be a big-picture "here is the general theory you should understand" or "here is a response to this special case, assuming you know the general theory"). Oh well, so it goes.
Regardless, now that I know it is possible to specify consecutive message expectations in RSpec, I've found a simple way to deal with the specific issue I was having. Beyond that, without spending more time going over the details of this particular method, I'll just say that I think we are in fact in agreement about the overall testing strategy and considerations you described :)

On Tue, Jul 21, 2009 at 10:22 AM, Stephen Eley <sfe...@gmail.com> wrote:

> On Tue, Jul 21, 2009 at 3:18 AM, Barun Singh <baru...@gmail.com> wrote:
> > I get your point, and I already understood it well before my original email. We all know that generic advice isn't always applicable in every instance, however, and this is a case where the number of distinct specs required to test all input combinations that are of interest is simply too large to make it worth doing if I don't use stubs at all.
>
> That's the kind of problem I like to avoid. My experience is that it can _usually_ be avoided by reexamining something up the chain. Sometimes the method can be turned into a class, or the data model can be restructured, or the business rules that make it overly complex aren't as inflexible as I first thought they were. Not always -- sometimes we really do have to work with what we're given, and what we're given is a mess. That may be your case, but you didn't give enough information for me _not_ to raise "Look at the big picture" as a possibility.
>
> If that's really the case, though, then the kind of testing you need to do on this algorithm changes. "A bunch of pieces with complex states need to work together" feels more like an integration test to me than a unit test. And I don't understand how stubbing helps you get better integration test coverage. Stubbing means locking down the state of something. If you stub something internal to your code, you're no longer testing cases for states your stub doesn't return.
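[Editor's note: the "consecutive message expectations" mentioned above refer to RSpec's ability to hand `and_return` several values, one returned per successive call, with the last value repeated once the list is exhausted. A minimal plain-Ruby sketch of that behavior; the `ConsecutiveStub` class here is invented purely for illustration and is not part of RSpec.]

```ruby
# Illustrates the semantics RSpec provides via, e.g.:
#   node.stub(:next_child).and_return(a, b, nil)
# Each call yields the next value; the last value repeats thereafter.
class ConsecutiveStub
  def initialize(*values)
    @values = values
  end

  # Return each value in turn; once only one remains, keep returning it,
  # mirroring RSpec's and_return behavior for consecutive calls.
  def call
    @values.length > 1 ? @values.shift : @values.first
  end
end

stub = ConsecutiveStub.new(1, 2, 3)
results = 4.times.map { stub.call }
p results  # => [1, 2, 3, 3]
```

This is handy for a recursive or looping method under test: the stub can return a few meaningful values and then a terminating one (such as `nil`) to stop the iteration.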
> If those cases matter during 'black box' testing, but you don't hit them when you abstract something out, you've just created a gap. (And if you *do* hit them despite abstracting out, then why do you need the code that got stubbed?)
>
> > It's obviously hyperbolic and a bit silly to suggest that stubbing out a public method from one of my models in order to simplify a spec would lead to stubbing "every method call inside every method". You might as well argue that a person should never stub anything in any spec.
>
> You're right that I was hyperbolic. But I treat it as a red flag if I feel a need to stub code I've already written in the same class or module as what I'm testing. I usually stub for two reasons:
>
> 1.) As a design placeholder for stuff I haven't written yet (and I take those stubs out after I've written them); or
> 2.) As a placeholder for *external* interface points to services, libraries, or functional spheres that I need to couple with but don't want to get highly entangled with. Stubbing improves speed for slow stuff, but it also compels me to keep my interactions simple. (Examples: stubbing out results from a SOAP service, or stubbing out "current_user" on controller specs.)
>
> I do a lot more mocking, but usually just to confirm that particular side effects were triggered. (And again: I take having to do it as a cue to at least consider whether I can write the code in a way that minimizes side effects.)
>
> I'm not trying to be an ass about this. I think this is a good conversation. You know your problem domain better than I do, and you're clearly not a novice at this stuff. But telling me "Thanks, but you're wrong" when you didn't say a lot about the problem domain makes it hard to be helpful.
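[Editor's note: to make the second reason above concrete, stubbing at an external boundary usually amounts to handing the object under test a canned collaborator instead of the real service. A small hypothetical sketch in plain Ruby; the weather-service names are invented here, not taken from the poster's code.]

```ruby
# A fake standing in for an external service (the role a stubbed
# SOAP client plays in a spec): canned data, no network call.
class FakeWeatherService
  def forecast(city)
    { city: city, high: 72, conditions: "sunny" }
  end
end

# The object under test takes its collaborator via the constructor,
# so a spec can inject the fake while production code injects the
# real client. This keeps the interaction surface small and explicit.
class Report
  def initialize(service)
    @service = service
  end

  def summary(city)
    f = @service.forecast(city)
    "#{f[:city]}: #{f[:conditions]}, high #{f[:high]}"
  end
end

puts Report.new(FakeWeatherService.new).summary("Boston")
```

The point Steve makes holds here: the fake sits at an external seam, so the spec still exercises all of `Report`'s own logic rather than papering over part of it.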
> > My statement of the refactored methods being public was predicated on the assumption that I would be stubbing them out (I don't stub private methods), since there's no other useful reason to refactor in this instance (the code isn't hard to read, I don't need to reuse any of it elsewhere, and if I don't stub them out in testing the outer method then refactoring hasn't made any tests any easier either).
>
> But a useful reason to refactor *did* come up. Your subject line for this thread was about how to spec a recursive method. You've had several demonstrations of ways to make spec'ing easier by not recursing. Your second block of code, with "add_something" and "find_something," is more readable *and* more testable, whether or not you stub.
>
> That's a win, right?
>
> --
> Have Fun,
> Steve Eley (sfe...@gmail.com)
> ESCAPE POD - The Science Fiction Podcast Magazine
> http://www.escapepod.org
> _______________________________________________
> rspec-users mailing list
> rspec-users@rubyforge.org
> http://rubyforge.org/mailman/listinfo/rspec-users
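[Editor's note: the refactoring pattern Steve describes, extracting the per-item work out of a recursive walk so it can be specced without recursing, can be sketched generically. The class and method names below are invented for illustration; the poster's actual "add_something"/"find_something" code does not appear in this message.]

```ruby
# A recursive walk where the per-node work lives in its own small,
# non-recursive method. The helper can be specced in isolation,
# and the walk itself needs only a few shallow-tree examples.
class TreeTotal
  def total(node)
    return 0 if node.nil?
    value_of(node) + node.children.sum { |child| total(child) }
  end

  # Trivially testable on its own, with no recursion involved.
  def value_of(node)
    node.value
  end
end

Node = Struct.new(:value, :children)
tree = Node.new(1, [Node.new(2, []), Node.new(3, [Node.new(4, [])])])
puts TreeTotal.new.total(tree)  # => 10
```

Whether or not `value_of` is ever stubbed, the split pays off exactly as described above: each piece is smaller, and the specs for each are simpler.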