On 10 Apr 2009, at 02:12, Matt Wynne wrote:
> What does it mean for Cucumber to be lazy? It will only run a feature if it needs to.
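
Just to check I've read "needs to" correctly - something like this, say? (Entirely hypothetical: the cache file and the skip logic are made up, not anything Cucumber does today.)

  require 'yaml'

  CACHE = '.last_green_features.yml' # made-up cache file, not a Cucumber convention
  last_green = (File.exist?(CACHE) && YAML.load_file(CACHE)) || {}

  # A feature "needs to" run if we've never seen it pass, or it has changed since.
  stale = Dir['features/**/*.feature'].select do |f|
    last_green[f].nil? || File.mtime(f).to_i > last_green[f]
  end

  if stale.empty?
    puts 'Nothing has changed since the last green run - skipping Cucumber.'
  elsif system('cucumber', *stale)
    # Only record timestamps after a green run, so failing features stay "stale".
    File.write(CACHE, last_green.merge(stale.map { |f| [f, File.mtime(f).to_i] }.to_h).to_yaml)
  end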

While I have yet to do more than skim the full articles, I wondered if you'd seen "Integration Tests Are A Scam" on InfoQ[1]? It was the following that caught my attention:

the hypothetical programmers with the integration-based test suite choose to worry only about "the most important [x]% of the tests"

Towards the end of the second article he seems to say that it's excessive integration testing, rather than integration testing as such, that is the problem. (Even if he does end with "Stop writing them.") Strikes me as much the same argument as "we don't mock because it makes the tests fragile", just pointing in the opposite direction.

I was just idly thinking: could a code-coverage-based system be combined with some sort of failure (fragility) history, to balance the time cost of heavy feature runs against the benefit of having something run end-to-end? We've had reverse-modification-time spec ordering for ages, which is a useful start.
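
Something along these lines, perhaps - with a made-up failure-history file and an entirely arbitrary scoring, purely to sketch the shape of the idea:

  require 'yaml'

  HISTORY = 'failure_history.yml' # hypothetical map of feature file => failure count
  failures = (File.exist?(HISTORY) && YAML.load_file(HISTORY)) || {}
  failures.default = 0

  now = Time.now
  scored = Dir['features/**/*.feature'].map do |f|
    age_in_days = (now - File.mtime(f)) / 86_400.0
    recency     = 1.0 / (1.0 + age_in_days)   # ~1.0 for files touched just now
    fragility   = Math.log(1 + failures[f])   # diminishing returns on repeat failures
    [f, recency + fragility]                  # equal weighting, entirely arbitrary
  end

  # Riskiest first; run only the top 20% end-to-end when time is tight.
  ordered  = scored.sort_by { |_, score| -score }.map(&:first)
  budgeted = ordered.first([(ordered.size * 0.2).ceil, 1].max)
  system('cucumber', *budgeted) unless budgeted.empty?

The weighting and the 20% budget would obviously need tuning against real run times; this is just to show where coverage and failure history could plug in.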

On a more ranty note - I have very little time for these "XXX BDD/development technique is always bad, don't do it" articles. (But hey, maybe I was guilty of this myself once and have forgotten...) Pretty much every technique I've seen has some benefit - if you use it selectively. I wish people would stop writing these inflammatory articles, and instead (a) figure out how to use these techniques like razors, not shotguns, and (b) go and improve the tools that apply them. Otherwise they're just making everyone's life harder. Gah!!! </rant>

Ashley

[1] http://www.infoq.com/news/2009/04/jbrains-integration-test-scam

--
http://www.patchspace.co.uk/
http://www.linkedin.com/in/ashleymoran
http://aviewfromafar.net/
http://twitter.com/ashleymoran





