I'd suggest adding a coverage ratchet to your build. It's the most
effective (if occasionally annoying) tool for situations like this.
Some assumptions:

* You have a CI server, and everyone's using CCMenu/Buildnotify so the
team knows as soon as the build breaks
* You don't have a brittle build that's red a non-trivial portion of
the time (not unusual when you inherit a test-less codebase and start
adding Cukes to introduce sanity)
* Your team is now used to the "drop everything if the build is red
and fix it" rule

I should really extract the ratchet I've hand-rolled for RSpec on
Goldberg into something reusable.
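In the meantime, the idea is simple enough to hand-roll: record a
high-water mark for coverage and fail the build whenever the current
figure falls below it. A minimal sketch - the file name, JSON shape, and
function name are my own assumptions here, not the Goldberg code:

```ruby
require 'json'

# Fail the build if coverage drops below the best figure seen so far,
# and raise the bar whenever coverage improves.
def check_ratchet(current_coverage, ratchet_file = 'coverage_ratchet.json')
  previous = if File.exist?(ratchet_file)
               JSON.parse(File.read(ratchet_file))['coverage']
             else
               0.0
             end

  if current_coverage < previous
    abort format('Coverage dropped from %.2f%% to %.2f%% - failing the build.',
                 previous, current_coverage)
  end

  # The ratchet only moves up: persist the new high-water mark.
  File.write(ratchet_file, JSON.generate('coverage' => current_coverage))
  puts format('Coverage %.2f%% (ratchet was %.2f%%)', current_coverage, previous)
end
```

You'd feed it the percentage your coverage tool reports at the end of the
suite, and commit the ratchet file so the bar travels with the repo.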

Best,
Sidu.
http://c42.in
http://sidu.in


On 25 July 2012 09:01, Adam Sroka <adam.sr...@gmail.com> wrote:
> AFAIK, there is no framework or tool that can prevent people from doing
> stupid things.
>
> I actually only use pending for one thing: in the morning it reminds me
> where I was heading the prior evening.
>
> On Jul 24, 2012 1:57 PM, "James Cox" <ja...@imaj.es> wrote:
>>
>> Yeah, I love pending too, but it doesn't help me get a sense of the
>> state of a suite before I start. Now it's part of my practice to go in
>> and find out how much is commented out.
>>
>> David,
>>
>> three concerns with pending as an option:
>>
>> a. It won't help the people who think it's OK to comment out whole
>> tests. If you make that choice, it's not a good thing (™). I don't
>> think there's enough evangelism in the world to change them.
>>
>> b. How do you distinguish between a pending and a broken-but-fixing
>> test? One means, "I've got no coverage here, and I haven't thought
>> about it," whereas the other says, "I used to have coverage, and I
>> need to fix it." I know it's semantics, but that's important here: I
>> need to know where no effort to test has been made vs. where testing
>> existed (which may imply some domain knowledge that was at one point
>> true).
>>
>> c. If you see # it 'should …' or similar, that's a commented test, not
>> a test comment. This metric is always going to be loose, but it may
>> give an indication, a sniff test, some kind of idea of the state
>> of a suite. It's the same as running rake stats - you and I know
>> it's a bullshit metric (it can't possibly tell me how good the tests
>> are), but it tells me at least whether any effort to test has happened.
>> Then I run coverage and figure out how exercised the code is.
>> Somewhere along that line, it'd be good to know if there used to be
>> tests but they are commented out.
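[That sniff test is easy to hand-roll outside of rspec itself. A rough
sketch - the regex and the ratio are my assumptions, a heuristic rather
than a real parser:]

```ruby
# Lines that look like a commented-out example vs. active ones.
COMMENTED_EXAMPLE = /\A\s*#\s*(it|specify|describe|context)\b/
ACTIVE_EXAMPLE    = /\A\s*(it|specify|describe|context)\b/

# Fraction of example-looking lines that are commented out.
def commented_example_ratio(source)
  commented = source.lines.count { |line| line =~ COMMENTED_EXAMPLE }
  active    = source.lines.count { |line| line =~ ACTIVE_EXAMPLE }
  total     = commented + active
  total.zero? ? 0.0 : commented.to_f / total
end
```

[Run it over spec/**/*.rb and warn when the ratio crosses some
threshold - loose, as you say, but enough for a sniff test.]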
>>
>>
>> An anecdote: I experienced this recently with a project where a
>> significant majority of the tests were just commented out. They used
>> to work, and a lot of it modeled the domain reasonably well, but
>> either due to a breaking gem upgrade or a refactor or something, the
>> original dev just didn't move/fix the tests. So, on the face of it, the
>> test scenario looked horrible, but in the end, for the key components,
>> fixing the tests wasn't that painful. Regardless, I got a pretty fast
>> sense of how much water he was treading at the time (or how much he
>> was under it :/)
>>
>> So yes, pending is OK, but a second keyword, "broken", might be
>> nicer: it would act the same but output different info.
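[You can get something close to this today without a new keyword,
assuming a recent RSpec 2 with the alias_example_to configuration hook -
a sketch, not a tested recipe:]

```ruby
RSpec.configure do |config|
  # `broken` behaves like a pending example but carries its own label,
  # so reports can distinguish "never written" from "used to pass".
  config.alias_example_to :broken, :pending => 'broken - needs fixing'
end

# Usage:
#   broken "charges the card" do
#     ...
#   end
```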
>>
>> -james
>>
>> On Mon, Jul 23, 2012 at 10:58 PM, Adam Sroka <adam.sr...@gmail.com> wrote:
>> > I haven't posted in a while, but I want to say that as someone who
>> > spends a
>> > significant portion of his time teaching (T/B)DD I am totally in love
>> > with
>> > pending specs. There are analogous concepts in nearly every xUnit/xSpec,
>> > but
>> > pending is by far the best. Kudos.
>> >
>> > On Jul 23, 2012 9:57 PM, "David Chelimsky" <dchelim...@gmail.com> wrote:
>> >>
>> >> On Mon, Jul 23, 2012 at 11:19 AM, James Cox <ja...@imaj.es> wrote:
>> >> > Hey,
>> >> >
>> >> > In a bunch of the rescues I've recently done, I see a pretty big
>> >> > anti-pattern: tests don't work, and so rather than making them
>> >> > work, the dev team just comments them out till 'later'.
>> >> >
>> >> > Does anyone think it'd be useful/interesting to get a flag for rspec
>> >> > which would compare lines vs lines-commented, and if the percentage
>> >> > was higher than xx, it'd issue some kind of warning?
>> >>
>> >> The pending feature is designed to help with this problem by allowing
>> >> you to disable an example while still keeping it visible.
>> >>
>> >> If we were to do what you propose, we'd need to offer an opt-out
>> >> and/or the ability to configure the percentage. Consider a suite that
>> >> uses a lot of comments to annotate the specs. The problem with making
>> >> it configurable is that the folks whose priorities lead them to
>> >> comment out examples instead of fixing them will likely just disable
>> >> this feature.
>> >>
>> >> I'd say, let's encourage people to use 'pending' correctly. WDYT?
>> >>
>> >> Cheers,
>> >> David
>> >> _______________________________________________
>> >> rspec-users mailing list
>> >> rspec-users@rubyforge.org
>> >> http://rubyforge.org/mailman/listinfo/rspec-users
>> >
>> >
>>
>>
>>
>> --
>> James Cox,
>> Consultant, Raconteur, Photographer, Entrepreneur
>> t: +1 347 433 0567  e: ja...@imaj.es w: http://imaj.es/
>> talk: http://twitter.com/imajes photos: http://500px.com/imajes
>
>
