----- Original Message -----
> From: "Tim Flink" <tfl...@redhat.com>
> To: qa-devel@lists.fedoraproject.org
> Sent: Wednesday, March 5, 2014 8:23:31 PM
> Subject: Re: D19 Comments and Diff
> 
> I'm generally of the mind that folks shouldn't have to dive into
> docstrings on tests in order to understand what is being tested. It is
> unavoidable in some cases where you have complex tests but that's
> usually a test or code smell which should at least be acknowledged.

Sure, I agree. On the other hand, I believe that the developer running the 
tests should have at least an overall idea of what the tested code does 
(he probably made some changes, and that is what triggered the need to run 
the unit tests in the first place). I do not know why, but sometimes people 
(and I'm referring to my previous jobs here) tend to believe (and I share 
this belief to some extent) that the "production" code can be complex (and by 
that I do not mean smelly), and that the people reading/maintaining it will be 
able to understand it, possibly with the help of comments. But at the same 
time, the unit tests must be written in such a way that a first-year high 
school student could instantly understand them. Maybe it is a residue of the 
usual corporate "testing is done by dummies, programming is done by geniuses" 
approach, I don't know. But I tend to treat the tests as a helper tool 
for _the developer_.

> One of my counter-points here is that if the tests are so trivial, do
> we really need to have them? After a certain point, we can add tests
> but if they aren't incredibly meaningful, we'd just be adding
> maintenance burden and not gaining much from the additional coverage.

Sure, but the way I write tests is bottom-up: I start with the simple ones 
and work my way up to the more complex tests. I'm not saying this is the 
best approach; it just makes sense to me to know that the basics are 
covered before diving into the trickier stuff.
 
> Overreact much? :-P

Yup, I sometimes tend to :D But once you see my head-desks you'll understand :D
 
> I may have gone a little too far and not spent enough time on the
> tests. I agree that some of the test names could be better and that
> there's not a whole lot of benefit to being rigid about "one assert per
> test" regardless of whether or not it's generally accepted as a good
> thing.

I know that "one assert to rule them all" (hyperbole intended) is usually 
considered _the_ approach, but every time I see the "have one assert per 
test" guideline (which tends to be interpreted as _the rule_), there is this 
other guideline saying "test one logical concept per test". The latter is what 
I tend to do, and what (IMHO) Kamil did in his de-coupled patch. So not all 
tests with more than one assert are necessarily a test smell.

And finding the right balance between the two guidelines is IMHO the goal we 
should aim for. So yes, having method names longer than the actual test code 
is something I consider... not really that great :) But I understand that you 
wanted to show Kamil (and the rest of us) what can be done, and what the 
general guidelines behind unit testing are, so I'm not trying to dismiss 
the overall benefit.

> I also want to avoid increasing the maintenance burden of having a
> bunch of tests that look for things which don't really need to be
> tested (setting data members, checking default values etc.)

I agree, there is stuff that can pretty much be taken for granted.
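As a quick sketch of the kind of low-value test meant here (names invented for illustration): a test that merely restates a default value breaks on any intentional change while catching almost no real bugs.

```python
# Hypothetical illustration of a test that adds maintenance burden
# without meaningfully increasing coverage.
class Task:
    def __init__(self, name, retries=3):
        self.name = name
        self.retries = retries


def test_default_retries():
    # This only re-asserts the constructor's default; there is no
    # logic being exercised, so the test can be taken for granted.
    assert Task("build").retries == 3
```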

J.
_______________________________________________
qa-devel mailing list
qa-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/qa-devel
