What we've tended to do, internally, is to name the test cases after the
type they are testing, so all unit tests for java.io.File are put into a
tests package ending with java.io, in a type called FileTest that extends
junit.framework.TestCase.
We would have written it as java.io.tests, but the java.<whatever>
namespace is reserved, so the formula is simply
<package>.<type> -> org.apache.harmony.tests.<package>.<type>Test
This makes it clear what is being tested, and where to add new tests etc.
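As a quick illustration, the formula can be expressed as a one-line helper (a hypothetical sketch just to show the mapping; this class is not part of the Harmony test harness):

```java
public class TestNameMapper {
    // Hypothetical helper: applies the naming formula
    //   <package>.<type> -> org.apache.harmony.tests.<package>.<type>Test
    static String testClassFor(String classUnderTest) {
        return "org.apache.harmony.tests." + classUnderTest + "Test";
    }

    public static void main(String[] args) {
        // java.io.File -> org.apache.harmony.tests.java.io.FileTest
        System.out.println(testClassFor("java.io.File"));
    }
}
```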
Then within the test class itself the methods are named after the method
under test, with a familiar JNI-style encoding, so we have things like:
org.apache.harmony.tests.java.io.FileTest contains
public void test_ConstructorLjava_io_FileLjava_lang_String() {
...
}
and
org.apache.harmony.tests.java.lang.FloatTest contains
public void test_compareToLjava_lang_Float() {
...
}
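The encoding in the two method names above can be sketched as follows (a hypothetical helper, just to illustrate the scheme; note this variant drops the ';' terminators that real JNI descriptors use):

```java
public class TestMethodNamer {
    // Hypothetical helper: builds a test method name in the JNI-like style
    // described above: "test_" + the method name (or "Constructor") + one
    // "L<type>" segment per parameter, with dots replaced by underscores.
    static String testMethodFor(String methodName, String... paramTypes) {
        StringBuilder name = new StringBuilder("test_").append(methodName);
        for (String type : paramTypes) {
            name.append('L').append(type.replace('.', '_'));
        }
        return name.toString();
    }

    public static void main(String[] args) {
        // Float.compareTo(Float) -> test_compareToLjava_lang_Float
        System.out.println(testMethodFor("compareTo", "java.lang.Float"));
        // File(File, String) -> test_ConstructorLjava_io_FileLjava_lang_String
        System.out.println(testMethodFor("Constructor",
                "java.io.File", "java.lang.String"));
    }
}
```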
If the test is added due to a regression, then it is put into the right
place in the test suite, and flagged with a comment (i.e. a reference to
the Harmony JIRA number).
Regards,
Tim
George Harley1 wrote:
> Hi,
>
>
>>I think that regression tests should be marked in some way.
>
>
> Agreed. But can we please *resist* the temptation to do this by
> incorporating JIRA issue numbers into test case names (e.g. calling unit
> test methods test_26() or test_JIRA_26()). I've seen this kind of approach
> adopted in a couple of projects and, in my experience, it often leads to
> the scattering of duplicated test code around the test harness.
>
> Better, methinks, to either create a new test method with a meaningful
> name or else augment an existing method - whatever makes more sense for
> the particular issue. Then marking certain code as being for regression
> test purposes could be done in comments that include the URL of the JIRA
> issue. Perhaps an agreed tag like "JIRA" or "BUG" etc. could be used as an
> eye-catcher as well?
> e.g.
>
> // BUG http://issues.apache.org/jira/browse/HARMONY-26
>
>
> My 2 Euro Cents.
>
> Best regards,
> George
> ________________________________________
> George C. Harley
>
>
>
>
> "Mishura, Stepan M" <[EMAIL PROTECTED]>
> 12/01/2006 04:56
> Please respond to
> [email protected]
>
>
> To
> <[email protected]>
> cc
>
> Subject
> RE: regression test suite
>
> Hello,
>
> Tim Ellison wrote:
>
> [snip]
>
>>What is the useful distinction for regression tests being kept separate?
>>I can see that you may distinguish unit and 'system-level' tests just
>>because of the difference in frameworks etc. required, but why do I care
>>if the test was written due to a JIRA issue or test-based development or
>>someone who gets kicks out of writing tests to break the code?
>>
>
>
> I agree that separating regression tests doesn't make sense.
> However, I think that regression tests should be marked in some way.
> This will signal to a developer that a test was created to track an
> already-known issue. IMHO, a regression test should point to a bug
> report, and a bug report (after the bug is resolved) should contain a
> reference to the corresponding regression test in the repository.
>
> Thanks,
> Stepan Mishura
> Intel Middleware Products Division
>
>
>
--
Tim Ellison ([EMAIL PROTECTED])
IBM Java technology centre, UK.